User Interface to Indicate Posting Activity

In addition to the sparkline graphs I’ve got in the sidebar of my website, I’ve recently been looking at alternate ways to indicate my posting activity.

An example of a sparkline graph on Boffosocko.com. A blue line indicates the posting velocity and an orange line indicates the comment velocity.
“Monthly activity over 5 years” for both posting activity as well as commenting activity on my website.

Calendar Heatmaps

Yesterday I was contemplating calendar heatmaps, which are probably best known from the user interface of GitHub, where they show at a glance how relatively active someone is on the site. I’ve discovered that Jetpack for WordPress provides similar functionality on the back end (in blue instead of green), but sadly doesn’t make it available for display on the front end of websites. I’ve filed a feature request to see if it’s something they’d work on in the future, so if having something like this seems useful to you, please click through and give the post a +1.

Orderly grid of squares representing dates which are grouped by month with a gradation of colors on each square that indicate in heat map fashion how frequently I post to my website.
A screen capture of what my posting “velocity” looks like on the back end of my website. The darkest squares indicate 30+ posts in a day, while the next darkest indicate between 15 and 30 posts. My “streak” is far longer than this chart indicates. I obviously post a LOT.
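As a rough illustration, the bucketing such a heatmap implies can be sketched in a few lines of Python. The 30+ and 15–30 buckets match the gradations described above; the lower cutoffs (5+ and 1+) are my own hypothetical guesses, since neither GitHub nor Jetpack documents theirs here.

```python
from collections import Counter
from datetime import date

def heat_level(count):
    """Map a day's post count to a heat bucket from 0 (none) to 4 (darkest).
    The top two thresholds mirror the caption above; the rest are guesses."""
    if count >= 30:
        return 4
    if count >= 15:
        return 3
    if count >= 5:   # hypothetical cutoff
        return 2
    if count >= 1:
        return 1
    return 0

def daily_heat(post_dates):
    """Tally posts per day and convert each day's total to a heat level."""
    counts = Counter(post_dates)
    return {day: heat_level(n) for day, n in counts.items()}
```

Rendering is then just a matter of painting one square per day in a gradient keyed off the level.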

Circular Widthmaps

Today I saw a note that led me to the Internet Archive, which I know has recently had a redesign. I’m not sure if the functionality I saw was part of that redesign, but it’s pretty awesome. I’m not sure quite what to call this sort of circular bar chart given what it does, but “circular widthmap” seems vaguely appropriate. Here’s a link to the archive.org page for my website that shows this cool UI, screen captures of which also appear below: http://web.archive.org/web/sitemap/https://www.boffosocko.com/

Instead of using color gradations to indicate a relative number of posts, the UI measures things by width in ever-increasing concentric circles. The innermost circle indicates the root domain, and successive levels outward add additional paths from my site. Because I’m using dated archive paths, there’s a level of circle by year (2019, 2018, 2017, etc.), then another level outside that by month (April 2019, March 2019, etc.), and finally the outermost circle, which indicates individual posts. As a result, the width of a particular year or month indicates relatively how active that time frame was on my website (or at least how active Archive.org thinks it was, based on its robot crawler).

Of course the segments on the circles also measure things like categories and tags on my site along with the date-based archives. Thus I can gauge, for example, how often I use particular categories.

I’ll also note that in the 2018 portion of the circle for July 11th, I had a post that slashdotted my website when it took off on Hacker News. That individual day is represented as really wide on that circular ring because it has an additional concentric circle outside of it representing the hundreds of comment URL fragments for that post. So one must keep in mind that segments in some of the internal rings aren’t directly comparable, because they may be heavily affected by portions of content further out on the ring.
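For the curious, my reading of how these widths come about can be sketched as a simple proportional allocation: each segment’s arc is its share of the archived URLs beneath it, which is exactly why the Hacker News day above balloons. The counts in the example are made up for illustration.

```python
def arc_spans(counts, total_degrees=360.0):
    """Give each segment an arc width proportional to its share of the
    archived URLs beneath it (my reading of the Archive's UI, not its
    actual source code)."""
    total = sum(counts.values())
    return {name: total_degrees * n / total for name, n in counts.items()}
```

With hypothetical counts like `arc_spans({"2018": 3, "2019": 1})`, the 2018 segment gets three quarters of the ring, which matches the visual effect described above.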

Interface that presents concentric circles with archived links of a website. The center circle is the domain itself while outside portions of the circle include archive pages, categories, pages, posts, and other portions of a site.
My website posting activity (and a little more) from 2018 and before according to the Internet Archive.
Interface that presents concentric circles with archived links of a website. The center circle is the domain itself while outside portions of the circle include archive pages, categories, pages, posts, and other portions of a site.
My website posting activity (and a little more) from April 2019 and before according to the Internet Archive.

How awesome would it be if this were embed-able and usable on my own website?

👓 Chris Aldrich’s Year In Pocket

Read My Year in Pocket (Pocket App)
See how much I read in Pocket this year!

According to Pocket’s account I read 766,000 words or the equivalent of about 10 books. My most saved topics were current events, science, technology, health, and education.

The most popular things I apparently saved this year:

I’ll have to work at creating my own end-of-year statistics, since my own website has a better accounting of what I’ve actually read (it isn’t all public) and bookmarked. I do like that their service does some aggregate comparison of my data against all the other user data (anonymized from my perspective).

Pocket also does a relatively good job of discovering good things to read based on aggregate user data, in categories like “Best of” and “Popular”. They also send me weekly email updates of things I’ve bookmarked there as reminders to go back and read them, a useful feature which they haven’t over-gamified. Presently my own closest equivalent is to be subscribed to the RSS feed of my own public bookmarks in a feed reader (which I find generally useful), as well as regularly checking my private bookmarks on my website’s back end (something as easy as clicking on a browser bookmark), and even looking at my “on this day” functionality to review things from years past.

I’ll note, however, that I currently rely more on Nuzzel for real-time discovery on a daily basis.

Greg McVerry might appreciate that they’re gamifying reading by presenting me with a badge.

As an aside while I’m thinking of it, it might be a cool thing if the IndieWeb wiki received webmentions, so that self-documentation I do on my own website automatically appeared on the appropriate linked pages either in a webmention section or perhaps the “See Also” section. If wikis did this generally, it would be a cool means of potentially building communities and fuelling discovery on the broader web. Imagine if adding to a wiki via Webmention were as easy as syndicating content to a site like IndieNews or IndieWeb.XYZ? It could also function as a useful method of archiving web content from original pages to places like the Internet Archive in a simple way, much like how I currently auto-archive my individual pages automatically on the day they’re published.

👓 Why You Should Never, Ever Use Quora | Waxy.org

Read Why You Should Never, Ever Use Quora by Andy Baio (Waxy.org)
Yesterday, Quora announced that 100 million user accounts were compromised, including private activity like downvotes and direct messages, by a “malicious third party.” Data breaches are a frustrating part of the lifecycle of every online service — as they grow in popularity, they become a big...

Amen

👓 Big Changes Ahead for FMA | Free Music Archive

Read Big Changes Ahead for FMA by cheyenne_h (FMA Admin) (Free Music Archive)
We regret to inform you that due to a funding shortage, the FMA will be closing down later this month. The future of the archive is uncertain, but we have done everything we can to ensure that our files will not disappear from the web forever. The full audio collection will be backed up and available at https://archive.org/details/freemusicarchive (some of the collection is already there; feel free to go browse).

Internet related archives are important but fragile things. It’s sad to see when archives like this go down, particularly due to funding reasons.

👓 The Internet’s keepers? “Some call us hoarders—I like to say we’re archivists” | ArsTechnica

Read The Internet’s keepers? “Some call us hoarders—I like to say we’re archivists” (Ars Technica)
Wayback Machine Director Mark Graham outlines the scale of everyone's favorite archive.

On the topic of RSS audio feeds for The Gillmor Gang

Some suggestions for extracting audio only podcast-friendly feeds for one of my favorite shows.

I’ll start off with the fact that I’m a big fan of The Gillmor Gang and recommend it to anyone who is interested in the very bleeding edge of the overlap of technology and media. I’ve been listening almost since the beginning, and feel that digging back into their archives is a fantastic learning experience even for the well-informed. Most older episodes stand up well to the test of time.

The Problem

In the Doc Soup episode of The Gillmor Gang on 5/13/17–right at the very end–Steve Gillmor reiterated, “This isn’t a podcast. This was a podcast. It will always be a podcast, but streaming is where it’s at, and that’s what we’re doing right now.” As such, apparently TechCrunch (or Steve, for that matter) doesn’t think it’s worthwhile to have any sort of subscribe-able feed for those who prefer to listen to a time-shifted version of the show. (Ironically, in nearly every other episode they talk about the brilliance of the Apple TV, which is–guess what?–a highly dedicated time-shifting viewing/listening device.) I suppose that their use of an old, but modified, TV test pattern hiding in the og:image metadata on their webpages is all too apropos.

It’s been several years (around the time of the Leo Incident?) since The Gillmor Gang has reliably published an audio version, a fact I find painful and frustrating, as I’m sure many others do as well. At least once or twice a year, I spend an hour or so searching around to find one, generally to no avail. While watching it live and participating in the live chat may be nice, I typically can’t manage the time slot, so I’m stuck trying to find time to watch the video versions on TechCrunch. Sadly, looking at four or more old, wrinkly, white men (Steve himself has cautioned, “cover your eyes, it’ll be okay…” without admitting it could certainly use some diversity) for an hour or more isn’t my cup of tea. Having video as the primary modality for this show is rarely useful. To me, it’s the ideas within the discussion which are worthwhile, so I only need a much lower bandwidth .mp3 audio file to be able to listen. And so, sadly, the one thing this over-technologized show (thanks again, TriCaster!) actually needs from a production perspective is a simple .mp3 podcast feed (RSS, Atom, JSON Feed, or h-feed)!

Solutions

In recent batches of searching, I have come across a few useful resources for those who want simple, sweet audio out of the show, so I’m going to document them here.

First, some benevolent soul has been archiving audio copies of the show to The Internet Archive for a while. They can be found here (sorted by upload date): https://archive.org/search.php?query=subject%3A%22Gillmor+Gang%22&sort=-publicdate

In addition to this, one might also use other search methods, but this should give one most of the needed weekly content. Sadly IA doesn’t provide a useful feed out…
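As a sketch (not an official client), that same search can be made machine-readable via archive.org’s advancedsearch endpoint, which returns JSON; file URLs within an item follow the /download/ pattern. The identifier and filename in the usage note below are hypothetical.

```python
from urllib.parse import urlencode

def ia_search_url(query, rows=50):
    """Build an archive.org advancedsearch URL that returns JSON:
    the same search as the link above, but in machine-readable form."""
    params = [
        ("q", query),
        ("fl[]", "identifier"),
        ("sort[]", "publicdate desc"),
        ("rows", rows),
        ("output", "json"),
    ]
    return "https://archive.org/advancedsearch.php?" + urlencode(params)

def item_download_url(identifier, filename):
    """Direct download URL for a file inside an archive.org item."""
    return f"https://archive.org/download/{identifier}/{filename}"
```

Fetching `ia_search_url('subject:"Gillmor Gang"')` yields identifiers one could then turn into download links with `item_download_url("some-episode-item", "episode.mp3")` and drop into a feed.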

To create a feed quickly, one can create a free Huffduffer account. (This is one of my favorite tools in the world by the way.) They’ve got a useful bookmarklet tool that allows you to visit pages and save audio files and metadata about them to your account. Further, they provide multiple immediate means of subscribing to your saves as feeds! Thus you can pick and choose which Gillmor Gang episodes (or any other audio files on the web for that matter) you’d like to put into your feed. Then subscribe in your favorite podcatcher and go.

For those who’d like to skip a step, Huffduffer also provides iTunes and a variety of other podcatcher specific feeds for content aggregated in other people’s accounts or even via tags on the service. (You can subscribe to what your friends are listening to!) Thus you can search for Gillmor Gang and BOOM! There are quick and easy links right there in the sidebar for you to subscribe to your heart’s content! (Caveat: you might have to filter out a few duplicates or some unrelated content, but this is the small price you’ll pay for huge convenience.)

My last potential suggestion might be useful to some, but is (currently) so time-delayed it’s likely not as useful. For a while, I’ve been making “Listen” posts to my website of things I listen to around the web. I’ve discovered that the way I do it, which involves transcluding the original audio files so the original host sees and gets the traffic, provides a subscribe-able faux-cast of content. You can use this RSS feed to capture the episodes I’ve been listening to lately. Note that I’m way behind right now and don’t always listen to episodes in chronological order, so it’s not as reliable a method for the more avid fan. Of course now that I’ve got some reasonable solutions… I’ll likely catch up quickly and we’re off to the races again.
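For anyone curious how such a faux-cast works mechanically, the trick is simply that each feed item’s enclosure URL points at the original host’s audio file. A minimal, hand-rolled sketch (titles and URLs hypothetical; a real feed should also set the enclosure’s byte length rather than my placeholder zero):

```python
from xml.sax.saxutils import escape, quoteattr

def rss_item(title, page_url, audio_url):
    """A minimal RSS <item> whose <enclosure> points at the ORIGINAL
    host's audio file, so downloads still register as their traffic."""
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<link>{escape(page_url)}</link>"
        f'<enclosure url={quoteattr(audio_url)} type="audio/mpeg" length="0"/>'
        "</item>"
    )
```

Wrap a list of these in a standard RSS channel and any podcatcher will treat the listen posts as episodes.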

Naturally none of this chicanery would be necessary if the group of producers and editors of the show would take five minutes to create and host their own version. Apparently they have the freedom and flexibility not to have to worry about clicks and advertising (which I completely appreciate, by the way), and so don’t need to capture the other half of the audience they’re surely missing by not offering an easy-to-find audio feed. But I’m dead certain they’ve got the time, ability, and resources to easily do this, which makes it painful to see that they don’t. Perhaps one day they will, but I wouldn’t bet the house on it.

I’ve made requests and been holding my breath for years, but the best I’ve done so far is to turn blue and fall off my chair.

👓 Who Owns L.A. Weekly? | L.A. Weekly

Read Who Owns L.A. Weekly? by Keith Plocek (L.A. Weekly)
Who owns the publication you’re reading right now? It’s a question you should ask no matter what you’re reading. In Latin there’s a phrase cui bono, which roughly translates as “who is benefiting?” It’s a good idea to know who is profiting in any situation. Why? So you can make educated decisions.

If things are potentially as nefarious as they sound here, I’m archiving a copy of this article to the Internet Archive now, just in case the new owners notice and it disappears.
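For those wanting to do the same, the Wayback Machine exposes a simple “Save Page Now” endpoint and a JSON availability API; here’s a sketch of the URL patterns (the example page URL in the usage note is hypothetical):

```python
from urllib.parse import urlencode

def wayback_save_url(url):
    """'Save Page Now' endpoint: fetching this URL asks the Wayback
    Machine to capture a fresh snapshot of the page."""
    return "https://web.archive.org/save/" + url

def wayback_available_url(url):
    """JSON availability API: reports the closest archived snapshot."""
    return "https://archive.org/wayback/available?" + urlencode({"url": url})
```

A single GET of `wayback_save_url("https://example.com/some-article")` from a browser or script is enough to trigger the capture.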

Reply to Pingbacks: hiding in plain sight by Ian Guest

Replied to Pingbacks: hiding in plain sight by Ian Guest (Marginal Notes)
Wait! Aren’t you researching Twitter? I am indeed and the preceding discussion has largely centred on pingbacks, a feature of blogs, rather than microblogs. I have two points to make here: firstly that microblogs and Twitter may have features which function in a similar way to pingbacks. The retweet for example provides a similar link to a text or resource that someone else has produced. I’ll admit that it has less permanence than a pingback, patiently ensconced at the foot of a blog and ready to whisk the reader off to the linked blog, but then the structure and function of Twitter is one of flow and change when compared with a blog; it’s a different beast. The second is that my point of entry to the blogs and their interconnected web of enabling pingbacks was a tweet. Two actually. Andrea’s tweet took me to another tweet which referenced Aditi’s blog post; had I not been on Twitter and had Andrea and I not made a connection through that platform, the likelihood of me ever being aware of Aditi’s post and the learning opportunities that it and its wider assemblage brings together would be minimal.

I’m finding your short study and thoughts on pingbacks while I was thinking about Webmentions (and a particular issue that Aaron Davis was having with them) after having spent a chunk of the day remotely following the Dodging the Memory Hole 2017 conference at the Internet Archive in San Francisco.

It’s made me realize that one of the bigger values of Webmention over its predecessors, pingbacks and trackbacks, is that at least a snapshot of the content is captured on the receiving site. As you’ve noted, while the receiving site has the scant data from the pingback, there’s not much to look at in general, and even less when the sending site has disappeared from the web. In the case of Webmentions, even if the sending site has disappeared from the web, the receiving site can still potentially display more of that missing content if it wishes. Within the WordPress ecosystem, simple mentions only show an indication that the article was mentioned, but hiding within the actual database on the back end is a copy of the post itself. With a few quick changes to make the “mention” into a “reply,” the content of the original post can be quickly uncovered/recovered. (I do wonder a bit if you cross-referenced the Internet Archive or other sources in your search to attempt to recover those lost links.)

I will admit that I recall the Webmention spec allows a site to modify and/or update its replies/webmentions, but in practice I’m not sure how many sites actually implement this functionality, so from an archival standpoint it’s probably pretty solid/stable at the moment.
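For reference, the sending side of the spec is small: discover the receiver’s endpoint (from an HTTP Link header or the page’s HTML) and POST a form-encoded source and target to it. A minimal sketch of the Link-header half of discovery (simplified; a full client must also parse the page’s link/anchor elements, per the spec):

```python
import re
from urllib.parse import urlencode

def endpoint_from_link_header(header):
    """Extract a webmention endpoint from an HTTP Link header.
    (Simplified: splits on commas, so it assumes no commas in URLs.)"""
    for part in header.split(","):
        m = re.search(r'<([^>]*)>\s*;\s*rel="?([^";]*)"?', part)
        if m and "webmention" in m.group(2).split():
            return m.group(1)
    return None

def webmention_payload(source, target):
    """Form-encoded body the sender POSTs to the discovered endpoint."""
    return urlencode({"source": source, "target": target})
```

The receiver then fetches the source URL itself to verify the link, which is exactly where the content snapshot described above comes from.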

Separately, I also find myself looking at your small example and how you’ve expanded it out a level or two within your network to see how it spread. This reminds me of Ryan Barrett’s work from earlier this year on the IndieWeb network in creating the Indie Map tool, which he used to show the interconnections between over three thousand people (or their websites) using links like Webmentions. Depending on your broader study, it might make an interesting example to look at and/or perhaps some code to extend?

With particular regard to your paragraph under “Wait! Aren’t you researching Twitter?” I thought I’d point you to a hybrid approach of melding some of Twitter and older/traditional blogs together. I personally post everything to my own website first and syndicate it to Twitter and then backfeed all of the replies, comments, and reactions via Brid.gy using webmentions. While there aren’t a lot of users on the internet doing something like this at the moment, it may provide a very different microcosm for you to take a look at. I’ve even patched together a means to allow people to @mention me on Twitter that sends the data to my personal website as a means of communication.

After a bit of poking around, I was also glad to find a fellow netizen who is also consciously using their website as a commonplace book of sorts.

👓 Books from 1923 to 1941 Now Liberated! | Archive.org

Read Books from 1923 to 1941 Now Liberated! (Internet Archive Blogs)
The Internet Archive is now leveraging a little known, and perhaps never used, provision of US copyright law, Section 108h, which allows libraries to scan and make available materials published 1923 to 1941 if they are not being actively sold. Elizabeth Townsend Gard, a copyright scholar at Tulane University calls this “Library Public Domain.” She and her students helped bring the first scanned books of this era available online in a collection named for the author of the bill making this necessary: The Sonny Bono Memorial Collection. Thousands more books will be added in the near future as we automate. We hope this will encourage libraries that have been reticent to scan beyond 1923 to start mass scanning their books and other works, at least up to 1942.

Dodging the Memory Hole 2017 Conference at the Internet Archive November 15-16, 2017

RSVPed Interested in Attending https://www.rjionline.org/events/dodging-the-memory-hole-2017
Please join us at Dodging the Memory Hole 2017: Saving Online News on Nov. 15-16 at the Internet Archive headquarters in San Francisco. Speakers, panelists and attendees will explore solutions to the most urgent threat to cultural memory today — the loss of online news content. The forum will focus on progress made in and successful models of long-term preservation of born-digital news content. Journalistic content published on websites and through social media channels is ephemeral and easily lost in a tsunami of digital content. Join professional journalists, librarians, archivists, technologists and entrepreneurs in addressing the urgent need to save the first rough draft of history in digital form. The two-day forum — funded by the Donald W. Reynolds Journalism Institute and an Institute of Museum and Library Services grant awarded to the Journalism Digital News Archive, UCLA Library and the Educopia Institute — will feature thought leaders, stakeholders and digital preservation practitioners who are passionate about preserving born-digital news. Sessions will include speakers, multi-member panels, lightning round speakers and poster presenters examining existing initiatives and novel practices for protecting and preserving online journalism.

I attended this conference at UCLA in Fall 2016; it was fantastic! I highly recommend it to journalists, coders, IndieWeb enthusiasts, publishers, and others interested in the related topics covered.

This has to be the most awesome IndieWeb pull request I’ve seen this year.


WithKnown is a fantastic, free, and open-source content management system that supports some of the most bleeding-edge technology on the internet. I’ve been playing with it for over two years and love it!

And today, there’s another reason to love it even more…

This is also a great reminder that developers can have a lasting and useful impact on the world around them–even in the political arena.