Just musing a bit: I can create an IFTTT recipe that uses a webhook to target a Micropub endpoint on my website, but it would be cooler if a recipe could target the Micropub endpoint directly. I want IFTTT: the Micropub client.

cc: Zapier, Integromat, n8n
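For the curious, the request such a webhook recipe would need to emit is fairly simple: a form-encoded POST with `h=entry` and a bearer token. A minimal sketch in Python of just the payload construction (the endpoint URL and token here are made-up placeholders, not real values):

```python
from urllib.parse import urlencode

# Hypothetical values: substitute your own site's endpoint and token.
MICROPUB_ENDPOINT = "https://example.com/micropub"
ACCESS_TOKEN = "xxxx"

def build_micropub_note(content):
    """Build the headers and form-encoded body for a simple Micropub
    'note' post. This is roughly what an IFTTT webhook action would
    need to send: POST with h=entry, the content, and a Bearer token.
    """
    headers = {
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "Content-Type": "application/x-www-form-urlencoded",
    }
    body = urlencode({"h": "entry", "content": content})
    return headers, body

headers, body = build_micropub_note("Posted via a webhook recipe")
print(body)  # h=entry&content=Posted+via+a+webhook+recipe
```

IFTTT's generic "Webhooks" action can already be configured this way by hand, which is exactly the friction the musing above is about.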

🔖 Integromat – The glue of the internet

Bookmarked Integromat (Integromat)
Integromat is an easy to use, powerful tool with unique features for automating manual processes. Connect your favorite apps, services and devices with each other without having any programming skills.
Checked into Beyond WordPress – easy WP automation and integration with no coding
Sessions after lunch starting a few minutes late.

Sabrina Liao is primarily looking at Zapier, Integromat, IFTTT, and automate.io.

I use a huge number of automated pieces like these, particularly IFTTT, for driving my own personal online commonplace book.

📺 Is that a toothpick or a flux capacitor? Oh wait, it’s Google Sheets. | Domains 2019 | Jeff Everhart, Tom Woodward, Matt Roberts

Watched Is that a toothpick or a flux capacitor? Oh wait, it's Google Sheets. by Jeff Everhart, Tom Woodward, Matt Roberts from Domains 2019 | YouTube

Are you looking for low stakes ways to store and display data? Welp, here’s Google Sheets. Do you want to automate all of the boring parts of your job and sip a drink on a beach somewhere? Looks like you owe Google Sheets a beer. Have you ever wanted to build a lightweight full stack application without spinning up an orchestrated Docker container cluster running on AWS using Typescript that has 90% unit test coverage? Well, hold on to your hats, cause Google Sheets is about to hit 88 MPH while keeping your molecular structure intact.

At VCU’s ALT Lab, we’ve used Google Sheets to build educational experiences that range from novel, to complex, to entirely absurd. Brace yourself for temporal displacement and a little bit of JavaScript.

There’s some low-level stuff here that could be dovetailed with IFTTT.com for some simple automation, perhaps even to help with Snarfed’s backfeed problem.

📑 ‘The goal is to automate us’: welcome to the age of surveillance capitalism | John Naughton | The Guardian

Annotated 'The goal is to automate us': welcome to the age of surveillance capitalism by John Naughton (The Guardian)
It is no longer enough to automate information flows about us; the goal now is to automate us. These processes are meticulously designed to produce ignorance by circumventing individual awareness and thus eliminate any possibility of self-determination. As one data scientist explained to me, “We can engineer the context around a particular behaviour and force change that way… We are learning how to write the music, and then we let the music make them dance.”  

I’ll agree: Passive Tracking > Active Tracking

It’s always nice if you can provide real-time active tracking and posting on your own website, but is it really necessary? Is it always worthwhile? What value does it provide to you? To others?

The other day I read Eddie Hinkle’s article Passive Tracking > Active Tracking in which he details how he either actively or passively tracks on his own website things he’s listening to or watching. I thought I’d take a moment to scribble out some of my thoughts and process for how and why I do what I’m doing on my own site.

I too track a lot of things relatively passively. Most of it I do for my own “diary” or commonplace book. Typically I’ll start out using silo services that either have RSS feeds or work with services like IFTTT.com or Zapier. If those don’t exist, I’ll use the ubiquitous “share” functionality of nearly all web pages and mobile platforms to share the content or page via email, which I can then use to post to my website. The primary minimal data points I’m looking for are the title of the specific thing I’m capturing (the movie, TV show/episode title, book title, article title, podcast title) and the date/time stamp at which the activity was done.

I’ll use these methods to transfer the captured data to my own website, typically in draft form. In many cases they collect all the data I want and put it into a format ready for immediate sharing. Other times I’ll clean up some of the data a bit before sharing (almost always context related: things like images, summaries, the source of the data, etc.). Then I decide whether to post it publicly or privately on my site.
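As a rough illustration of the two minimal data points mentioned above, here’s a sketch using Python’s standard `email` module of pulling a title and timestamp out of a share-by-email message, ready to drop into a draft post. The message itself is invented for the example:

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# A made-up share-by-email message of the kind a podcatcher's
# "share" button might generate.
RAW_EMAIL = """\
Subject: Containers Episode 8: Robots, Piers Full of Robots
Date: Sun, 09 Jun 2019 08:00:00 -0700

Shared from my podcatcher.
"""

def extract_minimal_datapoints(raw):
    """Pull the two minimal data points (title and timestamp) from a
    share-by-email message, ready for a draft post on the site."""
    msg = message_from_string(raw)
    return {
        "title": msg["Subject"],
        "published": parsedate_to_datetime(msg["Date"]).isoformat(),
    }

post = extract_minimal_datapoints(RAW_EMAIL)
print(post["published"])  # 2019-06-09T08:00:00-07:00
```

Everything beyond these two fields (images, summaries, context) is the cleanup work described above.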

Some of the sources I use for pulling in data (especially for context) to my website include:
 Watches: IMDb.com, Letterboxd, TheTVDB.com, themoviedb.org, and the direct websites for shows/movies themselves
 Listens: typically share functionality via email from my podcatcher; also Spotify and Last.fm
 Reads: reading.am, Pocket, Hypothes.is, GoodReads
 Bookmarks: Diigo, Hypothes.is, Twitter, Pocket

Often, taking the route of least resistance for this sort of tracking is a useful way to find out whether doing it is ultimately valuable to you. If it’s not, then building some massive edifice and code base first may be a sunk cost only to discover that you don’t find it valuable or fulfilling. This is the primary value of the idea “manual until it hurts.”

I will note that though I do have the ability to do quick posting to my site using bookmarklets in conjunction with the Post Kinds Plugin for WordPress, more often than not I find that interrupting my personal life and those around me to post this way seems a bit rude. For things like listen posts, logging them actively could be a life-threatening endeavor because I most often listen while driving. Thus I prefer to take a moment or two to more subtly mark what I want to post and then handle the rest at a quieter, more convenient time. I’ll use down time while passively watching television or listening to music to do this sort of clean up. Often, particularly for bookmarks and annotations, this also forces me to have a second bite at the proverbial apple to either follow up on the bookmarked idea or think about and reflect on the thing I’ve saved. In some sense this follow up is far more valuable to me than having actively posted it and simply moved on. It also becomes a way for what might otherwise be considered “digital exhaust” to give me some additional value.

Eventually having better active ways to track and post these things in real time would be nice, but the marginal additional value just hasn’t seemed to be there for me. Even if it were, there are larger hurdles in making these posts quickly while still pulling in the context portions I’d like to present. Adding context generally means having solid pre-existing databases of information from which to pull, and these can be difficult to come by or require API access to something. As a result, services like Swarm and OwnYourSwarm are useful because they not only speed up the process of logging data, but are underpinned by relatively solid databases. As an example, I frequently can’t use IMDb.com to log television shows like Meet the Press or Face the Nation because entries and data for those particular episodes often don’t exist even when I’m watching them several hours after they’ve aired. Even in these cases the websites for these shows often don’t yet have photos, synopses, video, or transcripts posted when I’m watching. Thus posting these in real time the way I’d like becomes a nightmare and requires a lot more manual effort.

Update:

As a follow up to Eddie’s post (which doesn’t yet show the Webmention), I’ll also point out that Jonathan has an excellent description and some code for what he’s doing on his site as well.

I’m not sure what’s changed recently but I went from a few spam comments on my website every week to hundreds a day.

Now that I have a larger and more diverse set of post types, it’s become more entertaining to read the incongruous spam posts about how brilliant and insightful I am when I’ve just posted a simple checkin.

I thought my writing style on this checkin was great myself, but I didn’t expect to gain this type of acclaim.

Spammers are going to have to start using microformats parsers to know which post types to properly add their spam to in the future.
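Tongue in cheek, but the parsing itself isn’t hard. A crude sketch of post type detection along the lines of the Post Type Discovery approach, using only the Python standard library; the class names checked are common microformats2 properties, and the priority order here is a simplification of the real algorithm:

```python
from html.parser import HTMLParser

class Mf2ClassScanner(HTMLParser):
    """Collect microformats2-style class names (h-*, u-*, p-*, e-*)
    from a chunk of HTML."""
    def __init__(self):
        super().__init__()
        self.classes = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "class" and value:
                for cls in value.split():
                    if cls[:2] in ("h-", "u-", "p-", "e-"):
                        self.classes.add(cls)

def guess_post_type(html):
    """Very rough post type discovery: check properties in a fixed
    priority order, falling back to a plain note."""
    scanner = Mf2ClassScanner()
    scanner.feed(html)
    for cls, kind in [("p-checkin", "checkin"), ("u-like-of", "like"),
                      ("u-in-reply-to", "reply"), ("u-bookmark-of", "bookmark")]:
        if cls in scanner.classes:
            return kind
    return "note"

html = '<article class="h-entry"><a class="u-like-of" href="#">♥</a></article>'
print(guess_post_type(html))  # like
```

So a spammer wanting to flatter a checkin instead of an essay would only need a few lines like these. Not a comforting thought.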

🎧 Containers Episode 8: Robots, Piers Full of Robots

Listened to Containers Episode 8: Robots, Piers Full of Robots from Containers
In the conclusion of this series, we peer into the future of human-robot combinations on the waterfront and in the rest of the supply chain. We’ll hear about the strange future of cyborg trucking and meet the friendly little helper bots in warehouses. The view of automation that sees only a battle between robots vs. humans is wrong. It’s humans all the way down.

The key to replacing jobs lost to robots and automation is going to be much more education, and we’re doing a painfully poor job of it. This episode is a bit more upbeat about the technology side as well as the human side of things. It’s fine to focus on the one, but without acknowledging the added complexities of the problems it does a disservice to the other.

In sum, this was a great series of episodes that shows a lot of what the average person is missing about how global trade happens and how intricate it can be. It’s impressive how much ground can be covered in just a few short episodes. I recommend the entire series to everyone.

https://soundcloud.com/containersfmg/episode-8-robots-piers-full-of-robots