Acquired Breaker – podcast listening & discovery (Breaker)
Listen to the best new podcast episodes with Breaker. Follow your friends to see what they’re listening to, and discover new shows that you’ll love. Like, share, and comment on your favorite episodes. Join the Breaker community and listen to the best stuff!
Downloaded a copy of this to my Android phone to test out the functionality. I don’t have much hope for using it as a daily driver, but I’m curious how they handle sharing URLs and whether it could be used in creating listen posts.

Reminds me that I should throw a few podcast feeds into Indigenous for Android to see how it might work as a podcatcher and whether it would make a good Listen post micropub client.

IndieWeb.org 2020/Austin/fromflowtostock Session

Simultaneously saving journalism and social media

As I read the notes from the New Affordability session at IndieWebCamp Austin, I can’t help but think back on my old IndieWeb business hosting idea, which I’ve been meaning to flesh out more fully.

What if local newspapers/magazines or other traditional local publishers ran/operated/maintained IndieWeb platforms or hubs (similar to micro.blog, multi-site WordPress installs, or Mastodon instances) to not only publish, aggregate, curate, and disseminate their local area news, but also provide that social media service for their customers?

Reasonable mass hosting can be done for about $2/month, which could be bundled into regular newspaper subscription prices. This would solve some of the problems people face with social media presences on services like Facebook and Twitter while simultaneously solving the problem of newspapers and journalistic enterprises owning and managing their own distribution. It would also create a tighter coupling between journalistic enterprises and the communities they serve.

Decentralizing the process could also shrink the enormous attack surface that global systems like Twitter and Facebook present as disinformation targets for hostile governments or hate groups. Tighter community involvement could be a side benefit for local discovery, aggregation, and interaction.

Many journalistic groups are already building and/or maintaining their own websites, so why not go a half-step further? Additionally, many large newspaper conglomerates have recently been building their own custom CMS platforms, not only for their own work but also to sell to smaller news organizations that may not have the time or technical expertise to manage them.

A similar idea is that of local governments doing this sort of building/hosting, and Greg McVerry and I have discussed it being done by local libraries. While those are laudable ideas, I think the alignment of interests between customers and newspapers, as well as the potential competition put into place, could be a bigger benefit to all sides.

Featured photo by AbsolutVision on Unsplash

Read Can This Even Be Called Music? by Kicks Condor (Kicks Condor)
When I first discovered this link, it seemed that the music became more and more unlistenable as I scrolled down the page. Now that I’ve had time to listen to CTEBCM further, there is actually quite a bit of tame music here that is just strangely genred, such as ‘the loser’, a solo opera based on the wonderful Thomas Bernhard novel of the same name, which feels reminiscent of the meandering ‘Shia LaBeouf’ storysong; or the sometimes-metal, sometimes-harpsichord of Spine Reader’s ‘Recorded Instruments’.
Shia!
Liked Exploring Feed Discovery and Markup by David Shanske (david.shanske.com)
The issue of finding feeds to subscribe is a challenge that I have explored in my attempts to implement code in support of the Yarns Microsub Server. I want to publish feeds in a way that others can find them, not just users, but automated systems that present them to users. So, let’s start with t...
Great start on outlining the problem. I’ll need to come back to it again and look at some potential examples to form a better opinion. I’m curious what examples may be unearthed by some of your questions.

Extra Feeds Plugin for WordPress

David Shanske has built a simple new IndieWeb-friendly plugin for WordPress.

For individual posts, the Extra Feeds plugin adds code to the <head> of one’s pages so that feed readers with built-in discovery mechanisms can find the additional feeds WordPress provides for all the tags, categories, and other custom taxonomies that appear on any given page.

Without the plugin, WordPress core will generally only advertise the main feed for your site and your comments feed. This is fine for sites that only post a few times a day or even per week, but if you’re owning more of the content you post online on your own website as part of the IndieWeb or Domain of One’s Own movements, you’ll likely want more control for the benefit of your readers.

In reality, WordPress provides feeds for every tag, category, or custom taxonomy that appears on your site; it just doesn’t advertise them to feed readers or other machines unless you add them manually, via custom code, or with a plugin. Having this as an option can be helpful when you’re publishing dozens of posts a day and your potential readers may only want a subsection of your posting output.

In my case I have a handful of taxonomies that post hundreds to thousands of items per year, so it’s more likely someone may want a subsection of my content rather than my firehose. In fact, I just ran across a statistician yesterday who was following just my math and information theory/biology related posts. With over 7,000 individual taxonomy entries on my site you’ve got a lot of choice, so happy hunting and reading!

This plugin also includes feeds for Post Formats, Post Kinds (if you have that plugin installed), and author feeds for sites with one or more authors.

This is useful because, when you’re on a particular page and want to subscribe to something on it, it becomes much easier to find those feeds. They have always been there, but many feed reader workflows couldn’t easily uncover them because they weren’t explicitly declared.

Some examples from a recent listen post on my site now let you more easily find and subscribe to:

  • my faux-cast:
    <link rel="alternate" type="application/rss+xml" title="Chris Aldrich &raquo; listen Kind Feed" href="https://boffosocko.com/kind/listen/feed/rss/" />
  • the feed of items tagged with Econ Extra Credit, which I’m using to track my progress in Marketplace’s virtual book club:
    <link rel="alternate" type="application/rss+xml" title="Chris Aldrich &raquo; Econ Extra Credit Tag Feed" href="https://boffosocko.com/tag/econ-extra-credit/feed/rss/" />
  • the feed for all posts by an author:
    <link rel="alternate" type="application/rss+xml" title="Chris Aldrich &raquo; Posts by Chris Aldrich Feed" href="https://boffosocko.com/author/chrisaldrich/feed/rss/" />
Read Feed readers/content aggregators by Dan MacKinlay (danmackinlay.name)
Upon the efficient consumption and summarizing of news from around the world.

Facebook is informative in the same way that thumb sucking is nourishing.

Annotated on February 09, 2020 at 10:28AM

Upon the efficient consumption and summarizing of news from around the world.
Remember? from when we thought the internet would provide us timely, pertinent information from around the world?
How do we find internet information in a timely fashion?
I have been told to do this through Twitter or Facebook, but, seriously… no. Those are systems designed to waste time with stupid distractions in order to benefit someone else. Facebook is informative in the same way that thumb sucking is nourishing. Telling me to use someone’s social website to gain information is like telling me to play poker machines to fix my financial troubles. Stop that.

Annotated on February 09, 2020 at 10:40AM

Read The discovery metadata field by Matt Maldre (Matt Maldre)
The internet would be a really interesting place if every article that was shared automatically had a “via link.” Ok, so the internet is already interesting. But what makes the internet such a great place is its connectivity. Everything is linked together. We can easily share a link to an article. So many links all …
I’ve been fascinated with this idea of vias, hat tips, and linking credit (a la the defunct Curator’s Code) just like Jeremy Cherfas. I have a custom field in my site for collecting these details sometimes, but I should get around to automating it and showing it on my pages rather than doing it manually.

Links like these seem like throwaways, but they can have a huge amount of value in aggregate. As an example, if I provided the source of how I found this article, it’s likely that my friend Matt would be able to see a potential treasure trove of information about the exact same topic, which he’s sure to have a lot of interest in as well.

One of the things I love about webmentions is that these sorts of credit links could also be used to create bi-directional links between sites. I’m half-tempted to start using custom experimental microformats classes on these links so that, if the idea takes off, people could display them in their comments sections as such instead of as vanilla “mentions”. This could be useful for sites that serve as inspiration, in much the same way that journalistic outlets might display reads (versus bookmarks, likes, or reposts) or podcasts could display listens. Just imagine the power that displaying webmentions on wikis could have for editors looking to update pages later, or for readers wanting to delve into further resources that mention and link to those pages, especially when the content on those linked pages extends the ideas.

Tim Berners-Lee’s original proposal for hypertext was rejected because it didn’t bake bi-directional links into the web (cf. Webstock ‘18: Jeremy Keith – Taking Back The Web at 13:39 into the video). Webmention seems to be a simple way of ensconcing them after the fact, but in a way that makes them more resilient as well as update-able and even delete-able by either side.

Of course now I come to wonder just how it was that Jeremy Cherfas found such a deep link on Matt’s site from over a year ago. 😉

Jeremy Cherfas’ update on the IndieWeb wiki ᔥ the IndieWeb-meta chat

Replied to a tweet by Mathew Ingram (Twitter)
Discovery can definitely be a bear. Interestingly, I came to your tweet through a handful of related blog posts via a feed reader from a random OPML file, so apologies for the late reply.

I keep an old-school blogroll, but it got so big I made it an entire page. It’s split out by a few broad categories, and there are linked OPML files by category at the bottom to let you follow it all or pick your poison. Hopefully you’ll find some fun and interesting gems hiding in there.

You might also uncover some interesting, active feeds by clicking around within Dave Winer’s http://feedbase.io/. Best yet, it has OPML files everywhere, so you can quickly follow a lot at once.
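
If it helps, what a reader does with one of those OPML files is also trivial to sketch: walk the <outline> elements and collect each xmlUrl attribute. A quick Python sketch using only the standard library; the file name is just a placeholder.

# Minimal sketch of how a reader imports an OPML file: walk the <outline>
# elements and collect each xmlUrl attribute. "blogroll.opml" is hypothetical.
import xml.etree.ElementTree as ET

def feeds_from_opml(path):
    """Return a list of (title, feed_url) pairs found in an OPML file."""
    tree = ET.parse(path)
    feeds = []
    for outline in tree.iter("outline"):
        url = outline.get("xmlUrl")
        if url:
            feeds.append((outline.get("title") or outline.get("text", ""), url))
    return feeds

if __name__ == "__main__":
    for title, url in feeds_from_opml("blogroll.opml"):
        print(f"{title}\t{url}")
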

Matthias Ott’s post Into the Personal-Website-Verse was at the top of Hacker News earlier this week. Both his post and the HN post have lists of people with websites that could be interesting and useful to follow for voices on the web.

You also might take a look at some of the details and resources on the discovery, blogroll, and even webring pages within the IndieWeb wiki. Not to be missed is Kicks Condor’s hrefhunt. Andy Bell also had a project to highlight personalsit.es.

In a somewhat related question, but from the other perspective (especially for journalism), I’m curious if you have any thoughts on: How to follow the complete output of journalists and other writers?


Read Feeds for journalists (leibniz.me)
This year started with a small project I really like: Feeds for Journalists, by Dave Winer. The idea is that RSS is still a valid technology to get an effective and unbiased flow of news. As he puts it, after reading a tweet by Mathew Ingram: If you’re a journalist a...
Found this while sifting through some OPML files.
Read A tool for keeping up with local news and events (Richmond Matters)
The short version of this post is that if you’re someone like me who enjoys keeping up with Richmond and Wayne County local news and events (and maybe you’re a little tired of the way social media filters what you are and aren’t seeing), you can: visit 47374.info to see the latest info coming in …

One thing that using this tool has highlighted for me is that there are a lot of things happening in our community every day, between news, announcements, events and other stuff. If you only rely on what your social media service of choice has decided is worth knowing because it’s generating clicks or discussion, you’re likely to miss something important. Also, do you really want to get your news crammed in between cat videos and political rants from distant acquaintances?

Annotated on January 17, 2020 at 01:20PM

How to follow the complete output of journalists and other writers?

In a digital era with a seemingly ever-decreasing number of larger news outlets paying journalists and other writers for their work, the number of working writers who find themselves writing for more than one outlet is rapidly increasing.

This is sure to leave journalists wondering how to better serve their own personal brand, either when they leave a major publication with which they’ve long been associated (examples: Walt Mossberg leaving The Wall Street Journal or Leon Wieseltier leaving The New Republic) or, alternatively, when they’re just starting out, writing for fifty publications, and attempting to build a bigger personal following for work that appears in many locations (examples include nearly everyone out there).

Increasingly I find myself doing insane things to try to follow the content of writers I love. The required gymnastics are increasingly complex as I try to track writers across hundreds of different outlets and dozens of social media sites and other platforms (filtering out unwanted results is particularly irksome). One might think that in our current digital media society it would be easy to find all the writing output of a professional writer like Ta-Nehisi Coates, for example, in one centralized place.

I’m also far from the only one. In fact, I recently came across this note by Kevin:

I wish there was a way to subscribe to writers the same way you can use RSS. Obviously twitter gets you the closest, but usually a whole lot more than just the articles they’ve written. It would be awesome if every time Danny Chau or Wesley Morris published a piece I’d know.

The subsequent conversation in his comments or on Micro.blog (a fairly digitally savvy crowd) was less than heartening for further ideas.

As Kevin intimates, most writers and journalists are on Twitter because that’s where a lot of the attention is. But sadly Twitter can be a caustic and toxic place for many. It also means sifting through a lot of intermediary tweets to get to the few each week that are the actual articles one wants to read. This also presumes that one’s favorite writer is on Twitter, is still using Twitter, or hasn’t left because they feel it’s a time suck or because of abuse, threats, or other issues (examples: Ta-Nehisi Coates, Lindy West, Sherman Alexie).

What does the universe of potential solutions for this problem currently look like?

Potential Solutions

Aggregators

One might think that an aggregation platform like Muck Rack, which is trying to get journalists to use its service and touts itself as “The easiest, unlimited way to build your portfolio, grow your following and quantify your impact—for free”, might provide journalists the ability to easily import their content via RSS feeds and then provide those same feeds back out so that their readers and fans could subscribe to them. How exactly are they delivering on that promise to writers to “grow your following”?!

An illustrative example I’ve found on Muck Rack is Ryan O’Hanlon, a Los Angeles-based writer who writes for a variety of outlets including The Guardian, The New York Times, ESPN, BuzzFeed, ESPN Deportes, Salon, ESPN Brasil, FiveThirtyEight, The Ringer, and others. As of today they’ve got 410 of his articles archived and linked there. Sadly, there’s no way for a fan of his work to follow him there. Even if the site provided an RSS feed of titles and synopses that forced one to read his work on the original outlet, that would be a big win for readers, for Ryan, and for the outlets he’s writing for, not to mention a big win for Muck Rack and its promise.

I’m sure there are a dozen or so other aggregation sites like Muck Rack hiding out there doing something similar, but I’ve yet to find the real tool I’m looking for. And if that tool exists, it’s poorly distributed and unlikely to help me with 80% of the writers I’m interested in following, much less 5%.
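
The feed I wish these aggregators offered isn’t technically hard to produce, either: it’s just a list of titles, links back to the original outlets, and synopses rendered as RSS. Here’s a hedged sketch using the Python standard library; the function name and the sample entries are mine and purely illustrative.

# Sketch of the kind of feed an aggregator could emit for a writer's portfolio:
# title, link to the original outlet, and a short synopsis per entry.
# The sample entries below are placeholders, not real articles.
import xml.etree.ElementTree as ET

def build_portfolio_rss(author, site_url, entries):
    """entries: iterable of dicts with 'title', 'link', and 'summary' keys."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = f"Articles by {author}"
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = f"Everything {author} has published, wherever it ran."
    for entry in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = entry["title"]
        ET.SubElement(item, "link").text = entry["link"]  # points readers at the original outlet
        ET.SubElement(item, "description").text = entry["summary"]
    return ET.tostring(rss, encoding="unicode")

if __name__ == "__main__":
    sample = [{"title": "Placeholder article", "link": "https://example.com/story", "summary": "One-line synopsis."}]
    print(build_portfolio_rss("A. Writer", "https://example.com/", sample))
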

Author Controlled Websites

Possibly the best choice for everyone involved would be for writers to have their own websites where they archive their own written work and provide a centralized portfolio for their fans and readers to follow, regardless of where they go or which outlet they’re writing for. They could keep their full pieces privately on the back end, but give titles, names of outlets, photos, and synopses on their sites as traditional blog posts with links back to the original. This pushes the eyeballs towards the outlets that are paying their bills while still allowing their fans to easily follow everything they’re writing. Best of all, the writer could own and control it all from soup to nuts.

If I were a journalist doing this on the cheap and didn’t want it to become a timesuck, I’d probably spin up a simple WordPress website and use the excellent and well-documented PressForward project/plugin to completely archive and aggregate my published work, but use its awesome forwarding functionality so that those visiting the URLs of the individual pieces would be automatically redirected to the original outlet. This is a great benefit for writers, many of whom know the pain of having written for outlets that have gone out of business, been bought out, or even completely disappeared from the web.

Of course, from a website it’s relatively easy to automatically cross-post your work to any number of other social platforms to notify the masses if necessary, but at least there is one canonical and centralized place to find a writer’s proverbial “meat and potatoes”. If you’re not doing something like this at a minimum, you’re just making it hard for your fans and failing at the very basics of building your own brand, part of which is to get even more readers. (Hint: the more readers and fans you’ve got, the more eyeballs you bring to the outlets you’re writing for, and in a market economy built on clicks, more eyeballs means more traffic, which means more money in the writer’s pocket. Since a portion of the web traffic would be going through an author’s website, they’ll have at least a proportional idea of how many eyeballs they’re pushing.)

I can’t help but point out that even some who have set up their own websites aren’t quite doing any of this right or even well. We can look back at Ryan O’Hanlon above, with a website at https://www.ryanwohanlon.com/. Sadly, he’s obviously let the domain registration lapse, and it has been taken over by a company selling shoes. Compare this with the slight step up that Mssr. Coates has made by not only owning his own domain but having an informative website featuring his books; alas, there’s not even a link to his work for The Atlantic or any other writing anywhere else. Devastatingly, his RSS feed isn’t linked, and if you manage to find it on his website, you’ll be less than enthralled by three posts of lorem ipsum from 2017. Ugh! What has the world devolved to? (I can only suspect that his website is run by his publisher, who cares about the book revenue and can’t be bothered to update his homepage with events that are now long past.)

Examples of some journalists/writers who are doing interesting work, experimentation, or making an effort in this area include: Richard MacManus, Marina Gerner, Dan Gillmor, Jay Rosen, Bill Bennett, Jeff Jarvis, Aram Zucker-Scharff, and Tim Harford.

One of my favorite examples is John Naughton, who writes a regular column for the Guardian. He has his own site where he posts links, quotes, what he’s reading, and his commentary, along with excerpts of his long-form writing elsewhere and links to the full pieces on those sites. I have no problem following some or all of his output there, since his (WordPress-based) site has individual feeds for either small portions of it or all of it. (I’ve also written a short case study on Ms. Gerner’s site in the past.)

Newsletters

Before anyone says, “What about their newsletters?” I’ll admit that both O’Hanlon and Coates have newsletters, but what’s to guarantee that they’re doing a better job of pushing all of their content through those outlets? Most of my experience with newsletters indicates that’s definitely not the case with most writers, and again, not all writers are going to have newsletters, which seem to be the flavor of the month in terms of media distribution. What are we to do when newsletters are passé in six months? (If you don’t believe me, just recall the parable of all the magazines and writers that moved from their own websites or Tumblr to Medium.com.)

Tangential projects

I’m aware of some one-off tools that come close to providing the sort of notifications of writers’ work I’m after and that might be leveraged or modified into a bigger tool or standalone platform. Still, most of these are simple uni-taskers and only fix small portions of the overall problem.

Extra Extra

Savemy.News

Ben Walsh of the Los Angeles Times Data Desk has created a simple web interface at www.SaveMy.News that journalists can use to quickly archive their stories to the Internet Archive and WebCite. One can log into the service via Twitter and later download a .csv file with a running list of all their works, including links to the archived copies. Adding functionality to ingest feeds and make them discoverable could make a tool like this even more of a boon.
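
The archiving half of this is already easy to prototype against the Internet Archive’s public “Save Page Now” endpoint. A hedged Python sketch follows; the Twitter login, the .csv export, and the WebCite half are all omitted, and the endpoint’s response details may change over time.

# Sketch of the core of a SaveMy.News-style archiver: ask the Internet
# Archive's public "Save Page Now" endpoint to capture a story URL.
# Error handling is omitted; the story URL and User-Agent are placeholders.
from urllib.request import Request, urlopen

def archive_to_wayback(url):
    """Request a Wayback Machine capture of `url`; return the archived URL if reported."""
    req = Request("https://web.archive.org/save/" + url,
                  headers={"User-Agent": "save-my-clips/0.1"})
    with urlopen(req) as resp:
        # Save Page Now has generally reported the snapshot path in a Content-Location header.
        location = resp.headers.get("Content-Location")
        return "https://web.archive.org" + location if location else None

if __name__ == "__main__":
    print(archive_to_wayback("https://example.com/my-story"))
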

Granary

Ryan Barrett has a fantastic open source tool called Granary that “Fetches and converts data between social networks, HTML and JSON with microformats2, ActivityStreams 1 and 2, Atom, RSS, JSON Feed, and more.” This could be a solid piece of a bigger process that pulls from multiple sources, converts them into a common format, and outputs them in a single subscribable location.

Splash page image and social logos from Granary.io
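
I won’t guess at Granary’s actual API here, but the shape of the pipeline it could anchor is easy to sketch: pull a writer’s work from several outlet feeds, normalize the entries, and merge them into one stream. This hedged sketch leans on the third-party feedparser package instead, and the outlet feed URLs are placeholders.

# Not Granary's actual API: just a sketch of the pipeline it could anchor.
# Pull a writer's work from several outlet feeds, normalize it, and sort it
# into one stream. Requires the feedparser package; feed URLs are placeholders.
import feedparser

OUTLET_FEEDS = [
    "https://example.com/outlet-one/author-feed.xml",
    "https://example.org/outlet-two/author-feed.xml",
]

def combined_stream(feed_urls):
    """Fetch every feed and return entries as (published, title, link), newest first."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for e in parsed.entries:
            published = e.get("published_parsed") or e.get("updated_parsed")
            entries.append((published, e.get("title", ""), e.get("link", "")))
    return sorted(entries, key=lambda item: item[0] or (), reverse=True)

if __name__ == "__main__":
    for published, title, link in combined_stream(OUTLET_FEEDS):
        print(title, link)
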

SubToMe

A big problem that has pushed people away from RSS and other feed formats is the lack of an easy method of subscribing to content. Want to follow someone on Twitter? Just click a button and go. Wishing it were that simple for a variety of feed types, Julien Genestoux created SubToMe, a universal follow button that allows a one-click subscription option (with lots of flexibility and even bookmarklets) for following content feeds on the open web.

Splash image on SubToMe's home page

Others?

Have you seen any other writers/technologists who have solved this problem? Are there aggregation platforms that solve the problem in reverse? Small pieces that could be loosely joined into a better solution? What else am I missing?

How can we encourage more writers to take this work into their own hands to provide a cleaner solution for their audiences? Isn’t it in their own best interest to help their readers find their work?

I’ve curated portions of a journalism page on the IndieWeb wiki to include some useful examples, pointers, and resources that may help in solving portions of this problem. Other ideas and solutions are most welcome!

Replied to a tweet by Johannes Ernst (Twitter)
There is some pre-existing work for tips, recommendations, reviews, etc. But it would be nice to have an IndieNews sort of hub to aggregate them all.

Maybe I could start by making the first recommendation to use IndieWeb.xyz/en/recommendations

Cleaning up feeds, easier social following, and feed readers

I’ve been doing a bit of cleanup in my feed reader(s): cleaning out dead feeds, fixing broken ones, etc. I thought I’d take a quick peek at some of the feeds I’m pushing out as well. I remember doing some serious updates on the feeds my site advertises three years ago this week, but it’s been a while since I’ve revisited them. While every post kind/type, category, and tag on my site has a feed (often found by simply adding /feed/ to the end of those URLs), I’ve also made a few custom feeds for aggregated content.

However, knowing that some feeds are broadly available from my site isn’t always obvious, nor is it the same as being able to use them easily; one might think of it as a technical accessibility problem. I thought I’d make a few tweaks to smooth out that user interface and hopefully provide a better user experience, especially since I’m publishing everything from my website first rather than in 30 different places online (which is a whole other UI problem for those wishing to follow me and my content).

Since most pages on my site have a “Follow Me” button (courtesy of SubToMe), I just needed a list of generally useful feeds to provide to it. While SubToMe has some instructions for suggesting lists of feeds, I’ve never gotten that to work the way I expected (or feed readers didn’t respect it, I’m not sure which). But since most feed readers have feed discovery built in as a feature, I thought I’d leverage that instead. Thus I threw into the <head> of my website a dozen or so links for the feeds people are most likely to be interested in. Now you can click on the follow button, choose your favorite feed reader, and your reader should present you with a list of feeds to which you might want to subscribe. These now broadly include the full feed, a comments feed, feeds for all the individual kinds (bookmarks, likes, favorites, replies, listens, etc.), and, potentially more useful, a “microblog feed” of all my status-related updates and a “linkblog feed” of all my link-related updates (generally favorites, likes, reads, and bookmarks).

Some of these sub-feeds may be useful in feed readers which don’t yet let you choose within the reader what you’d like to see. I suspect that in the future social readers will let you subscribe to my primary firehose or comments feed, which are putting out about 85 and 125 posts a week respectively right now, and then, within their interface, choose individual types by means of filters to more quickly see what I’ve been bookmarking, reading, listening to, or watching. Then if you want to curl up with some longer reads, filter by articles; or if you just want some quick hits, filter by notes. And of course you’ll naturally be able to do this sort of filtering across your whole network too. I also suspect some readers will build in velocity filters and friend-proximity filters, so that material from people who don’t post as often gets highlighted, or so you can see people’s content based on your personal rankings or categories (math friends, knitting circle, family, reading group, IndieWeb community, book club, etc.). I’ve recently been enjoying Kicks Condor’s Fraidycat, which touches on some of this work, though it’s less a full-featured feed reader than a filter/reader dashboard sort of product.
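
Until readers offer that natively, the same filtering can be done client-side against the firehose. Here’s a hedged Python sketch using the feedparser package; it assumes the kind and category terms show up as category elements on each entry in the feed, which won’t necessarily be true of every WordPress setup.

# A sketch of the client-side filtering a social reader could offer: subscribe
# to the single firehose feed and slice it by post kind using the category
# terms on each entry. Requires feedparser; the kind names are assumed to
# mirror the Post Kinds taxonomy ("listen", "read", "bookmark", ...).
import feedparser

FIREHOSE = "https://boffosocko.com/feed/"

def entries_of_kind(feed_url, kind):
    """Return (title, link) for entries whose category terms include `kind`."""
    parsed = feedparser.parse(feed_url)
    matches = []
    for e in parsed.entries:
        terms = {t.get("term", "").lower() for t in e.get("tags", [])}
        if kind.lower() in terms:
            matches.append((e.get("title", ""), e.get("link", "")))
    return matches

if __name__ == "__main__":
    for title, link in entries_of_kind(FIREHOSE, "Listen"):
        print(title, link)
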

Perhaps sometime in the future I’ll write a bit of code so that each individual page on my site will advertise feeds in its head for all the particular categories, tags, and post kinds that appear on that page. That might make a clever and simple little plugin, though honestly it’s the sort of code I would expect CMSes like WordPress to provide out of the box. Of course, perhaps broader adoption of microformats and clever readers will obviate the need for all these bits?