👓 A Qualified Fail | Doc Searls

A Qualified Fail by Doc Searls (Doc Searls Weblog)
Power of the People is a great grabber of a headline, at least for me. But it’s a pitch for a report that requires filling out the form here on the right: You see a lot of these: invitations to put one’s digital ass on a mailing list, just to get a report that should have been public in the first place, but isn’t, so personal data can be harvested and sold or given away to God knows who. And you do more than just “agree to join” a mailing list. You are now what marketers call a “qualified lead” for countless other parties you’re sure to be hearing from.

Exactly five years ago to the day I was excited about the possibilities of Digg Reader:

Now they’ve announced they’re shutting down. It seems to me that, from a UI perspective, they put in only a bare minimum of effort to build out their reader and ceased iterating on it the day it opened.

This is the second reader to shut down recently, but I’m more excited about the idea of Microsub and what it may mean to the future of feed readers.


Organizing my research-related reading

There’s so much great material out there to read and not nearly enough time. The question becomes: “How to best organize it all, so you can read even more?”

I just came across a tweet from Michael Nielsen about the topic, which is far deeper than even a few tweets could do justice to, so I thought I’d sketch out a few basic ideas about how I’ve been approaching it over the last decade or so. Ideally I’d like to circle back around to this and better document more of the individual aspects or maybe even make a short video, but for now this will hopefully suffice to add to the conversation Michael has started.

Keep in mind that this is an evolving system which I still haven’t completely perfected (and may never), but to a great extent it works relatively well and I still easily have the ability to modify and improve it.

Overall Structure

The first piece of the overarching puzzle is to have a general structure for finding, collecting, triaging, and then processing all of the data. I’ve essentially built a simple funnel system for collecting all the basic data in the quickest manner possible. With the basics down, I can later skim through various portions to pick out the things I think are the most valuable and move them along to the next step. Ultimately I end up reading the best pieces, on which I make copious notes and highlights. I’m still slowly refining the best way to keep all this additional data as well.

Since I’ve seen so many apps and websites come and go over the years and lost lots of data to them, I far prefer to use my own personal website for doing a lot of the basic collection, particularly for online material. Toward this end, I use a variety of web services, RSS feeds, and bookmarklets to quickly accumulate the important pieces into my personal website which I use like a modern day commonplace book.


In general, I’ve been using the Inoreader feed reader to track a large variety of RSS feeds, from various clearinghouse sources (including things like ProQuest custom searches) down to individual researchers’ blogs, as a means of quickly pulling in large amounts of research material. It’s one of the more flexible readers out there, with a huge number of useful features including the ability to subscribe to OPML files, which many readers don’t support.

As a simple example, arXiv.org has an RSS feed for the topic of “information theory” at http://arxiv.org/rss/math.IT which I subscribe to. I can quickly browse through the feed and, based on titles and/or abstracts, “star” the items I find most interesting within the reader. I have a custom recipe set up for the IFTTT.com service that pulls in all these starred articles and creates new posts for them on my WordPress blog. To these posts I can add a variety of metadata, including top-level categories and lower-level tags, along with any other metadata I’m interested in.
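For the technically curious, here’s a minimal sketch of this funnel in Python, assuming the third-party feedparser and requests packages and a WordPress site with the REST API and an application password enabled. The site URL, the credentials, and the keyword triage (standing in for my manual “starring” and IFTTT recipe) are all placeholders, not my actual setup.

```python
# A sketch of the RSS-to-website funnel described above, not the actual
# IFTTT recipe: fetch the arXiv feed, do a crude triage, and create draft
# bookmark posts on a WordPress site via its REST API.
import feedparser  # third-party: pip install feedparser
import requests    # third-party: pip install requests

FEED_URL = "http://arxiv.org/rss/math.IT"
WP_API = "https://example.com/wp-json/wp/v2/posts"  # placeholder site

def interesting(entry) -> bool:
    """Stand-in for the manual 'starring' step: naive keyword triage."""
    keywords = ("entropy", "channel capacity", "coding")
    text = (entry.title + " " + entry.get("summary", "")).lower()
    return any(k in text for k in keywords)

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    if not interesting(entry):
        continue
    # Create a draft; categories, tags, and notes get added later by hand.
    requests.post(
        WP_API,
        auth=("user", "app-password"),  # placeholder credentials
        json={
            "title": entry.title,
            "content": f'Bookmarked: <a href="{entry.link}">{entry.link}</a>',
            "status": "draft",
        },
        timeout=30,
    )
```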

I have similar incoming funnel entry points via many other web services as well. On platforms like Twitter, workflows built on services like IFTTT.com or Zapier push URLs easily to my website. I can quickly “like” a tweet and a background process will suck that tweet and any URLs within it into my system for future processing. This type of workflow extends to a variety of sites where I might consume potential material I want to read and process. (Think academic social services like Mendeley, Academia.edu, or Diigo, or even less academic ones like Twitter, LinkedIn, etc.) Many of these services have storage of their own and offer simple browser bookmarklets that allow me to add material to them. So with a quick click, it’s saved to the service and then automatically ported into my website almost without friction.
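If one wanted to wire up such an entry point by hand instead of through IFTTT or Zapier, the receiving side can be tiny. This is a hypothetical sketch using only Python’s standard library; the payload shape and the flat queue file are assumptions (in my case the URLs land in WordPress instead).

```python
# A hypothetical webhook receiver for the "like a tweet, save the URL" flow.
# A service like IFTTT or Zapier would POST a small JSON payload here.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

QUEUE_FILE = "reading-queue.jsonl"  # flat file standing in for the website

class HookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # Expect something like {"url": "...", "text": "..."} from the service.
        with open(QUEUE_FILE, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(payload) + "\n")
        self.send_response(202)  # accepted for later triage
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), HookHandler).serve_forever()
```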

My WordPress-based site uses the Post Kinds Plugin, which takes incoming website URLs and does a very solid job of parsing those pages to extract much of the primary metadata I’d like to have, without requiring a lot of work. For well-structured web pages, it’ll pull in the page title, authors, date published, date updated, a synopsis of the page, categories and tags, and other bits of data automatically. All these fields are also editable and searchable. Further, the plugin allows me to configure simple browser bookmarklets so that, with a simple click on a web page, I can pull its URL and associated metadata into my website almost instantaneously. I can then add a note or two about what made me interested in the piece and save it for later.
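This isn’t the Post Kinds code itself, but a small sketch of the kind of parsing it automates: pulling a page’s <title> and Open Graph metadata with nothing but the Python standard library. Real-world parsers handle far more fallbacks (microformats, JSON-LD, etc.).

```python
# A sketch of simple page-metadata extraction, in the spirit of what the
# Post Kinds Plugin does automatically for bookmarked URLs.
from html.parser import HTMLParser
from urllib.request import urlopen

class MetaGrabber(HTMLParser):
    def __init__(self):
        super().__init__()
        self.meta = {}
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        # Open Graph / article tags carry title, published date, summary, etc.
        if tag == "meta" and a.get("property", "").startswith(("og:", "article:")):
            self.meta[a["property"]] = a.get("content", "")

    def handle_data(self, data):
        if self._in_title:
            self.meta.setdefault("title", data.strip())

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

html = urlopen("https://example.com/").read().decode("utf-8", "replace")  # placeholder URL
grabber = MetaGrabber()
grabber.feed(html)
print(grabber.meta)  # e.g. title, og:title, article:published_time, og:description
```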

Note here that I’m usually more interested in saving material for later as quickly as I possibly can. In this part of the process, I’m rarely interested in reading anything immediately. I’m most interested in finding it, collecting it for later, and moving on to the next thing. This is also highly useful for things I find during a busy day but can’t make time for at the moment.

As an example, here’s a book I bookmarked to read simply by clicking “like” on a tweet I came across late last year. You’ll notice at the bottom of the post that I’ve optionally syndicated copies of the post to other platforms to “spread the wealth” as it were. Perhaps others following me via other means may see it and find it useful as well?


At regular intervals during the week I’ll sit down for an hour or two to triage all the papers and material I’ve been sucking into my website. This typically involves reading through lots of abstracts in a bit more detail to better figure out what I want to read now and what I’d like to read at a later date. I can delete the irrelevant material if I choose, or add follow-up dates to custom fields for later reminders.

Slowly but surely I’m funneling down a tremendous amount of potential material into a smaller, more manageable amount that I’m truly interested in reading on a more in-depth basis.

Document storage

Calibre with GoodReads sync

Even for things I’ve winnowed down, there is still a relatively large amount of material, much of which I’ll want to save and personally archive. For a lot of this function I rely on the free multi-platform desktop application Calibre. It’s essentially an iTunes-like interface, but built specifically for e-books and other documents.

Within it I maintain a small handful of libraries: one for personal e-books, one for research-related textbooks/e-books, and another for journal articles. It has a very solid interface and is extremely flexible in terms of configuration and customization. You can create a large number of custom libraries with your own searchable and sortable fields holding a huge variety of metadata. It often does a reasonable job of importing e-books, .pdf files, and other digital media and parsing out their metadata, which saves one from doing some of that work manually. With well-maintained metadata, one can very quickly search and sort a huge number of documents as well as quickly prioritize them for action. Additionally, the system does a pretty solid job of converting files from one format to another, so things like converting an .epub file into .mobi format for Kindle are automatic.
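The conversion piece is scriptable as well: Calibre ships a command-line tool called ebook-convert, so batch conversions for a Kindle can be automated. A minimal sketch, assuming Calibre’s command-line tools are on your PATH and your files sit in a local folder:

```python
# Batch-convert .epub files to .mobi for Kindle using Calibre's CLI.
import pathlib
import subprocess

for epub in pathlib.Path("library").glob("*.epub"):  # placeholder folder
    mobi = epub.with_suffix(".mobi")
    if not mobi.exists():
        subprocess.run(["ebook-convert", str(epub), str(mobi)], check=True)
```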

Calibre stores the documents themselves either in local computer storage or, even better, in the cloud using any of a variety of services including Dropbox, OneDrive, etc., so that one can keep one’s documents in the cloud and view them from a variety of locations (home, work, travel, tablet, etc.).

I’ve been a very heavy user of GoodReads.com for years to bookmark and organize my physical and e-book libraries and anti-libraries. Calibre has an exceptional plugin for GoodReads that syncs data across the two. This plugin (and a few others) is exceptionally good at pulling in missing metadata to minimize the amount that must be entered by hand, which can be tedious.

Within Calibre I can manage my physical books, e-books, journal articles, and a huge variety of other document-related forms and formats. I can also use it to further triage the things I intend to read and order them to the nth degree. My current Calibre libraries have over 10,000 documents in them, including over 2,500 textbooks, as well as records of most of my 1,000+ physical books. Calibre can also be used to add records for documents one would ultimately like to acquire but doesn’t currently have access to.

BibTeX and reference management

In addition to everything else, Calibre has some well-customized pieces for dovetailing all its metadata with reference management systems. It allows one to export data in a variety of formats for document publishing and reference management, including BibTeX amongst many others.
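As a rough sketch of that dovetailing: recent versions of Calibre’s calibredb can list library metadata as JSON, which maps readily onto BibTeX entries. The field mapping and citation keys below are simplistic assumptions; Calibre’s own catalog and export tools do this far more robustly.

```python
# Sketch: turn Calibre library metadata into minimal BibTeX entries,
# assuming a calibredb recent enough to support `list --for-machine` (JSON).
import json
import subprocess

out = subprocess.run(
    ["calibredb", "list", "--fields", "title,authors,pubdate", "--for-machine"],
    capture_output=True, text=True, check=True,
).stdout

for book in json.loads(out):
    # Naive citation key: alphanumeric characters of the title, truncated.
    key = "".join(c for c in book["title"].title() if c.isalnum())[:20]
    print(f"@book{{{key},")
    print(f"  title  = {{{book['title']}}},")
    print(f"  author = {{{book.get('authors', '')}}},")
    print(f"  year   = {{{str(book.get('pubdate', ''))[:4]}}},")
    print("}")
```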

Reading, Annotations, Highlights

Once I’ve winnowed down the material I’m interested in, it’s time to start actually reading. I’ll often use Calibre to send my documents directly to my Kindle or other e-reading device, but one can also read them on the desktop with a variety of readers, or even from within Calibre itself. With a click or two, I can automatically email documents to my Kindle, and Calibre will auto-format them appropriately before doing so.

Typically I’ll send them to my Kindle, which gives me a variety of easy methods for adding highlights and marginalia. Sometimes I’ll read .pdf files on the desktop and use Adobe Acrobat to add highlights and marginalia as well. When I’m done with a .pdf file, I’ll just resave it (with all the additions) back into my Calibre library.

Exporting highlights/marginalia to my website

For Kindle-related documents, once I’m finished, I’ll use direct text file export or tools like clippings.io to export my highlights and marginalia for a particular text into simple HTML and import it into my website system along with all my other data. I’ve briefly written about some of this before, though I ought to document it better. All of this then becomes very easily searchable and sortable for future potential use as well.
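For anyone wanting to roll their own export, the Kindle’s My Clippings.txt file has a simple, regular structure: entries separated by a line of ten equals signs, each with a title line, a metadata line, and the highlight text. A minimal sketch of the conversion to HTML (clippings.io and similar tools do this with far more polish):

```python
# Sketch: parse a Kindle "My Clippings.txt" file into simple HTML blockquotes
# grouped by book, suitable for pasting into a website post.
from collections import defaultdict
from html import escape

with open("My Clippings.txt", encoding="utf-8-sig") as fh:
    entries = fh.read().split("==========")

highlights = defaultdict(list)
for entry in entries:
    lines = [l.strip() for l in entry.strip().splitlines() if l.strip()]
    if len(lines) >= 3:  # title line, "- Your Highlight ..." line, then text
        title, text = lines[0], " ".join(lines[2:])
        highlights[title].append(text)

for title, quotes in highlights.items():
    print(f"<h2>{escape(title)}</h2>")
    for q in quotes:
        print(f"<blockquote>{escape(q)}</blockquote>")
```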

Here’s an example of some public notes, highlights, and other marginalia I’ve posted in the past.


Over time, I’ve built up a huge amount of research-related data in my personal online commonplace book that is highly searchable and sortable! I also have the option to make these posts and pages public, private, or even password protected. I can create accounts on my site for collaborators to use and view private material that isn’t publicly available. I can also share posts via social media and use standards like webmention and tools like brid.gy so that comments and interactions with these pieces on platforms like Facebook, Twitter, Google+, and others are imported back to the relevant portions of my site as comments. (I’m doing it with this post, so feel free to try it out yourself by commenting on one of the syndicated copies.)

Now when I’m ready to begin writing something about what I’ve read, I’ve got all the relevant pieces, notes, and metadata in one centralized location on my website. Synthesis becomes much easier. I can even have open drafts of things as I’m reading and begin laying things out there directly if I choose. Because it’s all stored online, it’s eminently available from almost anywhere I can connect to the web. As an example, I used a few portions of this workflow to write this very post.

Continued work

Naturally, not all of this is static, and it continues to improve and evolve over time. In particular, I’m doing continued work on my personal website so that I’m able to own as much of the workflow and data there as possible. Ideally I’d love to have all of the Calibre-related pieces on my website as well.

Earlier this week I even had conversations about creating new post types on my website for things that I want to read, to better display and document them explicitly. When I can, I try to document some of these pieces either here on my own website or in various places on the IndieWeb wiki. In fact, the IndieWeb for Education page might be a good place to start browsing for those interested.

One of the added benefits of having a lot of this data on my own website is that it not only serves as my research/data platform, but it also has the traditional ability to serve as a publishing and distribution platform!

Currently, I’m doing most of my research-related work in private or draft form on the back end of my website, so it’s not always publicly available, though I often think I should make more of it public for the value of its aggregation as well as the benefit it might provide in improving scientific communication. Just think: if you were interested in some of the obscure topics I am, you could have a pre-curated RSS feed of all the things I’ve filtered through piped into your own system. Now multiply this across hundreds of thousands of other scientists. Michael Nielsen posts some useful things to his Twitter feed and his website, but what I wouldn’t give to see far more of who and what he’s following, bookmarking, and actually reading. While many might find these minutiae tedious, I guarantee that people in his associated fields would find some serious value in them.

I’ve tried hundreds of other apps and tools over the years, but more often than not, they only cover a small fraction of the necessary moving pieces within the much larger apparatus that a working researcher and writer requires. This means that one is often using dozens of specialized tools, with a huge duplication of data and effort across them. It also presumes these tools will be around for more than a few years and will allow easy import/export of one’s hard-fought data and the time invested in using them.

If you’re aware of something interesting in this space that might be useful, I’m happy to take a look at it. Even if I might not use the service itself, perhaps it’s got a piece of functionality that I can recreate in my own site and workflow somehow?

If you’d like help in building and fleshing out a system similar to the one I’ve outlined above, I’m happy to help do that too.



👓 Medium Acquires Superfeedr by Julien Genestoux

Medium Acquires Superfeedr by Julien Genestoux (ouvre-boite.com)
Today’s web is very different from what it was 8 years ago. We’ve said it several times: publishing and consuming content are new frontiers for most of the web giants like Facebook, Google or Apple. We consume the web from mobile devices, we discover content on silo-ed social networks and, more importantly, the base metaphor for the web is shifting from “space” to “time”. Superfeedr, the open web’s leading feed API and PubSubHubbub hub has been an independent player for 8 years. Superfeedr exists in order to enable people to exchange information on the web more freely and easily. Today, we’re excited to announce Superfeedr has been acquired by Medium. In many ways, it’s a very natural fit: Medium wants to create the best place to publish, distribute and consume content on the web. Together, we are hoping to keep Medium the company a leader in good industry practices, and Medium the network a place where this conversation can gain even more traction.

🎧 Micro.blog on Social Media with Manton Reece | Geekspeak

Micro.blog on Social Media with Manton Reece by Lyle Troxell and Brian Young from GeekSpeak
We have been talking about the problems with Twitter, Facebook, and social media throughout the last year. Our guest has too, and he’s trying to do something about it. Manton Reece talks about Micro.blog, the technology it is built on, and how he is being thoughtful about building something new.


📅 Domain of One’s Own Workshop for Admins

Might be attending Domain of One's Own Workshop for Admins
After hearing from a number of schools running Domain of One’s Own, we thought it might be useful to host an in-person workshop that focuses specifically on implementing this project on your campus. Workshop of One’s Own is a two-day event geared towards the instructional technologist who assists with managing DoOO on an administrator level, but it also focuses on project conceptualization, instructional uses, and empowering their community from a teaching/learning standpoint. You’ll not only be receiving the in-person, focused attention of the entire Reclaim Hosting team, but you’ll also get a chance to brainstorm with folks from other schools who are running their own Domain of One’s Own projects. We’ll work through common troubleshooting tips, SPLOTs with Alan Levine, cPanel application case studies, and more.

I’m almost painfully tempted to attend this workshop on March 15-16 with the idea of setting up a side business to specialize in hosting WordPress and Known sites for IndieWeb use. While it could be a generic non-institutional instance for academics, researchers, post-docs, and graduate and undergraduate students who don’t have a “home” DoOO service, it could also be a potential landing pad for those leaving other DoOO projects upon graduation or moving. Naturally I wouldn’t turn down individuals who wanted specific IndieWeb-capable personal websites either.

Either way it’s an itch (at an almost poison ivy level) that I’ve been having for a long time, but haven’t written down until now. It would certainly be an interesting platform for continuing to evangelize the overlap of IndieWeb and Educational applications on the internet.

I think there are almost enough IndieWeb-friendly WordPress themes to make it worthwhile to have a multi-site WordPress install with a handful of microformats-performant themes, in conjunction with tools like Webmention and Micropub, that allows easy interaction with most of the major social silos.

I think the community might almost be ready for such a platform that would allow an integrated turnkey IndieWeb experience. (Though I’d still want to offer some type of integrated feed reader experience bundled in with it.) Perhaps I could model it a little bit after edublogs and micro.blog?

Who wants to help goad me into it?



🎧 Gillmor Gang: Doc Soup | TechCrunch

Gillmor Gang: Doc Soup by Doc Searls, Keith Teare, Frank Radice, and Steve Gillmor from TechCrunch
Recorded live Saturday, May 13, 2017. The Gang takes nothing off the table as Doc describes a near future of personal APIs and CustomerTech.


submention /ˈsʌbˈmɛn(t)ʃ(ə)n/

sub·men·tion (noun, informal): 1. A post about someone or something on a personal website where one neglects (accidentally or on purpose) to either send a webmention and/or syndicate a copy out to an appropriate social silo. 2. Such a post which explicitly has the experimental microformat rel="nomention", which prevents webmention code from triggering for the attached URL. 3. Any technologically evolved form of apophasis (Greek ἀπόφασις, from ἀπόφημι apophemi, “to say no”) which sends no notifications using standard Internet or other digital protocols.

Early 21st century: a blend or portmanteau of subliminal and webmention.



Fragmentions for Better Highlighting and Direct References on the Web


Ages ago I added support on my website for fragmentions.

Wait… What is that?

Fragmention is a portmanteau word made up of fragment and mention (or even Webmention). In more technical terms, it’s a simple way of creating a URL that not only targets a particular page on the internet, but also targets a specific sub-section of that page, whether it’s a photo, a paragraph, a few words, or even specific HTML elements like <div> or <span>. In short, it’s like a permalink to content within a web page instead of just to the page itself.

A Fragmention Example

Picture of a hipster-esque looking Lego toy superimposed with the words: I'm not looking for a "hipster-web", but a new and demonstrably better web.
29/1.2014 – Larry the Barista by julochka is licensed under CC BY-NC
Feature image for the post “Co-claiming and Gathering Together – Developing Read Write Collect” by Aaron Davis. Photo also available on Flickr.

Back in December, Aaron Davis made a quote card for one of his posts that included a quote from one of mine. While I don’t think he pinged (or webmentioned) it within his own post, I ran across it in his Twitter feed, and he cross-posted it to his Flickr account, where he credited the sources of the underlying photo and quote along with their relevant URLs.

Fragmentions could not only have let him link to the source page of the quote, they would have let him directly target the section or paragraph where the quote originated or, even more directly, the actual line of the quote.

Here’s the fragmention URL that would have allowed him to do that: http://boffosocko.com/2017/10/27/reply-to-laying-the-standards-for-a-blogging-renaissance-by-aaron-davis/#I%E2%80%99m%20not%20looking

Go ahead and click on it (or the photo) to see the fragmention in action.

What’s happening?

Let’s compare the two URLs:
1. http://boffosocko.com/2017/10/27/reply-to-laying-the-standards-for-a-blogging-renaissance-by-aaron-davis/
2. http://boffosocko.com/2017/10/27/reply-to-laying-the-standards-for-a-blogging-renaissance-by-aaron-davis/#I%E2%80%99m%20not%20looking

They both obviously point to the same page, and their beginnings are identical. The second one, however, has a # followed by the words “I’m not looking”, with percent-encoded codes for the blank spaces and the apostrophe. Clicking on the fragmention URL will take you to the root page, which then triggers a snippet of JavaScript on my site that highlights the closest container with the text following the hash in a bright yellow color. The browser also automatically scrolls down to the location of the highlight.

Note: rather than the numbers and percent symbols, one can also frequently use a “+” to stand in for the white spaces, like so: http://boffosocko.com/2017/10/27/reply-to-laying-the-standards-for-a-blogging-renaissance-by-aaron-davis/#not+looking+for+just This makes the URL a bit more human-readable. You’ll also notice I took out the encoding for the apostrophe by omitting the word “I’m” and adding another word or two, but I still get the same highlight result.
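If you’d rather construct these URLs programmatically than by hand, both encodings fall straight out of the standard library’s quoting helpers. A minimal sketch in Python:

```python
# Build the two fragmention URL styles shown above: percent-encoded spaces
# versus the more human-readable "+" style.
from urllib.parse import quote, quote_plus

page = ("http://boffosocko.com/2017/10/27/reply-to-laying-the-standards"
        "-for-a-blogging-renaissance-by-aaron-davis/")
snippet = "not looking for just"

percent_style = page + "#" + quote(snippet)    # spaces become %20
plus_style = page + "#" + quote_plus(snippet)  # spaces become +

print(percent_style)
print(plus_style)
```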

This can be a very useful thing, particularly on pages with huge amounts of text. I use it quite often in my own posts to direct people to particular sub-parts of my website to better highlight the pieces I think they’ll find useful.

It can be even more useful for academics and researchers who want to highlight or even bookmark specific passages of text online. Those with experience on the Medium.com platform will also notice how useful highlighting can be, but having a specific permalink structure for it goes a step further.

I will note, however, that it’s been rare, if ever, that anyone besides me has used this functionality on my site. Why? We’ll look at that in just a moment.

Extending fragmentions for easier usability

Recently, as a result of multiple conversations with Aaron Davis (on and between our websites via webmention, with syndication to Twitter), I’ve been thinking more about notes, highlights, and annotations on the web. He wrote a post which discusses “Page Bookmarks”, an interesting way of manually adding anchors to web pages to allow for targeting specific portions of them. This can make it easy for the user to click on links within a page to jump to specific sections. Sadly, these are very painful to create and use, both for a site owner and even more so for the outside public, which has absolutely no control over them whatsoever.

His post reminded me immediately of fragmentions. It also reminded me that there was a second bit of user interface related to fragmentions that I’d always meant to also add to my site, but somehow never got around to connecting: a “fragmentioner” to make it more obvious that you could use fragmentions on my site.

In short, how could a user know that my website even supports fragmentions? How could I make it easier for them to create a fragmention from my site to share out with others? Fortunately for me, our IndieWeb friend Kartik Prabhu had already wired up the details for his own personal website and released the code and some pointers for others who were interested in setting it up themselves. It’s freely available on GitHub and includes some reasonable details for installation.

So with a small bit of tweaking and one or two refinements, I got the code up and running and voilà! I now have a natural UI for highlighting things.


When a user naturally selects a portion of my page with their mouse (the way they might if they were going to cut and paste the text), a simple interface pops up with instructions to click it for a link. Kartik’s JavaScript automatically converts the highlight into the proper format and changes the page’s URL to the appropriate fragmention URL for that snippet of the page. A cut and paste then allows the reader to put that highlighted piece’s URL anywhere she likes.

text highlighted in a browser with a small chain icon and text which says "Click for link to text"
Highlighting text pulls up some simple user interface for creating a fragmention to the highlighted text.

The future

What else would be nice?

I can’t help but think that it would be fantastic if the WordPress Fragmention plugin added the UI piece for highlight and sharing text via an automatically generated link.

Perhaps in the future a highlight-and-click interaction could provide not only the link, but also a copy of both the highlighted text and the link itself. I’ve seen this behavior on some very socially savvy news websites. This would certainly make the common practice of cutting and pasting content much easier while also cleverly including a reference link.

The tough part of this functionality is that it’s only available on websites that specifically enable it. While not too difficult to add, it would be far nicer to have native browser support for both fragmention creation and use. That would mean I wouldn’t need JavaScript on my website to do the scrolling and highlighting, nor to generate the custom URL when a reader selects text. How nice would it be if this were an open web standard, supported by major browsers without the need for any work at the website level?

Medium-like highlighting and comments suddenly become a little easier for websites to support. With some additional code, it’s only a hop, skip, and a jump to dovetail this fragmention functionality with the W3C Webmention spec to allow inline marginalia on posts. One can create a fragmention targeting text on a website and write a reply to it. With some UI built out, sending a webmention to the site would let it pick up the comment and display it as a marginal note at that particular spot, instead of as a traditional comment below the post where it might otherwise lose the context of being associated with the related point in the main text. In fact, our friend Kartik Prabhu has done just this on his website. Here’s an example of it in his post announcing the feature.

Example of inline marginalia on Kartik Prabhu’s website “Parallel Transport”.

You’ll notice that small quotation bubbles appear at various points in the text indicating marginalia. When you click on one, the bubble turns green and the page expands to show the comment at that location. One could easily imagine CSS that allows the marginalia to display in the actual margin of the page on wider screens.
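Mechanically, the sending half of that marginalia flow is quite small. Here’s a sketch of notifying a site that you’ve replied to a fragmention target; the source and target URLs are placeholders, and endpoint discovery is simplified to the HTTP Link header (the full W3C spec also checks <link> and <a> elements in the page body).

```python
# Sketch: send a webmention whose target is a fragmention, so a receiver
# that supports marginalia can anchor the reply at that exact spot.
import re
import requests  # third-party: pip install requests

source = "https://example.com/my-reply"  # placeholder: your reply post
target = ("https://example.com/original-post"
          "#some%20highlighted%20text")  # placeholder fragmention target

# Simplified endpoint discovery: look only at the HTTP Link header.
resp = requests.get(target, timeout=30)
match = re.search(r'<([^>]+)>;\s*rel="?webmention"?', resp.headers.get("Link", ""))
if match:
    # Per the W3C Webmention spec, the notification is a form-encoded POST.
    requests.post(match.group(1), data={"source": source, "target": target}, timeout=30)
```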

How could you imagine using fragmentions? What would you do with them? Feel free to add your thoughts below, or post them on your own site and send me a webmention.


Reply to What Was Known by Jim Groom

What Was Known by Jim Groom (bavatuesdays)
...the issue for me was Known was contextless for social media. I often post across various sites in response to things and share my photos as part of a conversation, so doing it through Known seemed a bit like working in a vacuum. I use Twitter less and less for discussion, so I wonder if I would feel different about this now, but what I wanted from Known was a way to also view and respond to Tweets, Facebook statuses, photos on Flickr, Instagram, etc. A kind of reader for my content that would collapse those various conversations for me, and I could respond through my Known as if I was within those apps. I increasingly thought Known would make an awesome read/write feed reader if it had such a feature. The main reason Known fell by the wayside for me was I was not using it to publish in all these spaces, rather doing it post-facto if at all. Does that make sense?

Interestingly, Known had a lot of these features hidden in code under the hood. Sadly, they weren’t all built out. In fact, it did have much of a reader (something which Ben indicated they were going to take out of the v1.0 release to slim down the code, since it wasn’t being used). It also had a follow/following block of code (and even a bookmarklet at /account/settings/following) so you could follow specific sites and easily add them to your reader. Also unbeknownst to most was a built-in notifications UI, which could be found at /account/notifications.

It’s a shame that they put many of these half-built features on hold in their pivot to focus on the education market and creating a viable, cash-flow-based company, as this is the half that most CMSs lack. (If you think about what makes Twitter and Facebook both popular and really simple, I think it’s that they’re 95% excellent feed readers with 5% built-in posting interfaces.)

I’ve managed to replace some of that missing functionality with Woodwind, a reader at http://woodwind.xyz, which one can connect with Known to do the reading and then integrate the posting, commenting, and replies to complete the loop. I do have a few very serious developer friends who are endeavoring to make this specific feed-reader portion of the equation much easier to implement (and even self-host), lowering the hurdle of this problem considerably, but I suspect it’ll be another 3-6 months before a usable product comes out of the process. For those looking to get more social content into their feed readers, I often recommend Ryan Barrett’s appspot tools, including https://twitter-atom.appspot.com/, which has instructions for extracting content from Twitter via Atom/RSS. It includes links at the bottom of the page for doing similar things with Facebook, Instagram, and Google+ as well.

Interestingly, there are now enough moving pieces (plugins) in the WordPress community to recreate all of the functionality Known has; one just needs to install them all separately, and there are even a few different options for various portions depending on one’s needs. This includes adding reply contexts for social media, the ability to syndicate posts to multiple social sites for interaction, and backfeed of the comments, etc. from those social sites into the comments section of your posts the way Known did. Sadly, the feed reader problem still exists, but it may soon be greatly improved.


Everyday Carry December 2017

I joined yet another silo. It’s really only for some research on posts and pages related to common topics like “What I’m using”, “What I’m Carrying”, “Everyday Carry”, etc.

I’ve seen interview sites related to some of these (and even YouTube channels) as well as individual posts, but Everyday Carry is one of the first silos I’ve seen dedicated to the topic. It’s very male-focused, and people seem to carry lots of knives and tactical pens (who knew this was a category?). Their business model seems to be sales-oriented, including ads and Amazon affiliate links, but it’s an interesting concept with pretty solid execution. It seems to be an uber-niche version of the original incarnation of gdgt.com, which eventually morphed into something else.

I will say that the visual presentation is rather stunning and intriguing, though in practice some of the mouse-overs don’t always work as well as one would expect.

There is a somewhat prurient nature to seeing what people are carrying, though this incarnation makes it overly obvious that the collections are all too curated. It’s definitely not the bum-rush sort, with potentially embarrassing video, that I’ve seen before on YouTube.

I’m including below an embedded version of my post, which uses some of their native UI and seems pretty slick for such a site.


🔖 Stamp for music playlist portability

Stamp | FREE YOUR MUSIC (Stamp)
Stamp moves tracks and playlists across various services - Apple Music, Spotify, Google Music and others!

How awesome this promises to be! I’ve been wanting this type of functionality for a long time. I’m curious how long it will stay up.

Too many music services don’t make it easy to transport your playlists, as that’s one of the methods they use to lock you into their service (and their recurring subscription fees). It looks like Stamp supports .csv formats, but it would be nice if there were a better standardized data format to let users own all of their own data. How great would it be if I could maintain my own playlist on my own website and then authorize services to access it to play what I wanted? Then I could have one central repository and take it to any subscription service out there.

It looks like it supports Spotify, Apple Music, Google Music, Pandora (Pro only?), Amazon Music, Groove, YouTube, rdio, Deezer, and Tidal.



Norm Peterson on Cheers invented the symbol for Bitcoin

Interestingly, it didn’t stand for a digital currency, but a more familiar liquid one.

In the cold open of Cheers Season 9, Episode 23, “Carla Loves Clavin”, which aired on March 21, 1991, Norm Peterson (portrayed by George Wendt) invents the original definition of what would ultimately be adopted as the iconic symbol for Bitcoin. Interestingly, at the time it didn’t stand for a digital currency, but a more familiar liquid one.

Norm: Okay Rebecca. Um. Here’s the deal, I’ll paint the whole office including woodwork, and uh, it’ll run you 400.
Rebecca: 400 bucks sounds reasonable.
Norm: Oh no, that’s 400 beers, the B with the slanty line through it, it’s kinda my own special currency.

Norm invents the definition of the letter B with a slash through it. Hint: It doesn’t mean Bitcoin.

(Featured image credit: Jason Benjamin)


🔖 Back to the Future: The Decentralized Web, a report by Digital Currency Initiative & Center for Civic Media

Back to the Future: The Decentralized Web, A report by the Digital Currency Initiative and the Center for Civic Media (Digital Currency Initiative / MIT Media Lab)
The Web is a key space for civic debate and the current battleground for protecting freedom of expression. However, since its development, the Web has steadily evolved into an ecosystem of large, corporate-controlled mega-platforms which intermediate speech online. In many ways this has been a positive development; these platforms improved usability and enabled billions of people to publish and discover content without having to become experts on the Web’s intricate protocols. But in other ways this development is alarming. Just a few large platforms drive most traffic to online news sources in the U.S., and thus have enormous influence over what sources of information the public consumes on a daily basis. The existence of these consolidated points of control is troubling for many reasons. A small number of stakeholders end up having outsized influence over the content the public can create and consume. This leads to problems ranging from censorship at the behest of national governments to more subtle, perhaps even unintentional, bias in the curation of content users see based on opaque, unaudited curation algorithms. The platforms that host our networked public sphere and inform us about the world are unelected, unaccountable, and often impossible to audit or oversee.

At the same time, there is growing excitement around the area of decentralized systems, which have grown in prominence over the past decade thanks to the popularity of the cryptocurrency Bitcoin. Bitcoin is a payment system that has no central points of control, and uses a novel peer-to-peer network protocol to agree on a distributed ledger of transactions, the blockchain. Bitcoin paints a picture of a world where untrusted networks of computers can coordinate to provide important infrastructure, like verifiable identity and distributed storage. Advocates of these decentralized systems propose related technology as the way forward to “re-decentralize” the Web, by shifting publishing and discovery out of the hands of a few corporations, and back into the hands of users. These types of code-based, structural interventions are appealing because in theory, they are less corruptible and resistant to corporate or political regulation. Surprisingly, low-level, decentralized systems don’t necessarily translate into decreased market consolidation around user-facing mega-platforms.

In this report, we explore two important ways structurally decentralized systems could help address the risks of mega-platform consolidation: First, these systems can help users directly publish and discover content directly, without intermediaries, and thus without censorship. All of the systems we evaluate advertise censorship-resistance as a major benefit. Second, these systems could indirectly enable greater competition and user choice, by lowering the barrier to entry for new platforms. As it stands, it is difficult for users to switch between platforms (they must recreate all their data when moving to a new service) and most mega-platforms do not interoperate, so switching means leaving behind your social network. Some systems we evaluate directly address the issues of data portability and interoperability in an effort to support greater competition.

Download .pdf

h/t Ethan Zuckerman
Related to http://boffosocko.com/2017/08/19/mastodon-is-big-in-japan-the-reason-why-is-uncomfortable-by-ethan-zuckerman/
