Replied to a tweet by Jan Knorr (Twitter)
Reclipped is on my radar, but I haven’t experimented with it yet. For YouTube annotation, I quite like https://docdrop.org/ which dovetails w/ @Hypothes_is. For other online video I will often use their page annotations w/ timestamps/media fragments.
Read Social Attention: a modest prototype in shared presence by Matt Webb (Interconnected, a blog by Matt Webb)
My take is that the web could feel warmer and more lively than it is. Visiting a webpage could feel a little more like visiting a park and watching the world go by. Visiting my homepage could feel just a tiny bit like stopping by my home. And so to celebrate my blogging streak reaching one year, this week, I’m adding a proof of concept to my blog, something I’m provisionally calling Social Attention.
You had me at “select text”…

If somebody else selects some text, it’ll be highlighted for you. 

Suddenly social annotation has taken an interesting twist. @Hypothes_is better watch out! 😉
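For the technically curious, here’s a minimal sketch of how shared selection might work under the hood. The WebSocket endpoint and message shape are my own assumptions for illustration, not Matt’s actual implementation:

```typescript
// Broadcast this visitor's text selections and listen for everyone else's.
// wss://example.com/attention is a placeholder endpoint, not a real service.
const socket = new WebSocket("wss://example.com/attention");

document.addEventListener("selectionchange", () => {
  const selection = document.getSelection();
  if (!selection || selection.isCollapsed) return;
  // A real system would send a robust anchor (e.g. a text quote selector)
  // rather than the raw selected string.
  socket.send(
    JSON.stringify({ page: location.pathname, text: selection.toString() })
  );
});

socket.addEventListener("message", (event: MessageEvent) => {
  const { text } = JSON.parse(event.data) as { text: string };
  // Here a real client would locate and highlight the text on the page.
  console.log(`Someone else is looking at: "${text}"`);
});
```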
Annotated on March 28, 2021 at 10:03AM

How often have you been on the phone with a friend, trying to describe how to get somewhere online? Okay go to Amazon. Okay type in “whatever”. Okay, it’s the third one down for me…
This is ridiculous!
What if, instead, you both went to the website and then you could just say: follow me. 

There are definitely some great use cases for this.
Annotated on March 28, 2021 at 10:05AM

A status emoji will appear in the top right corner of your browser. If it’s smiling, there are other people on the site right now too. 

This is pretty cool looking. I’ll have to add it as an example to my list: Social Reading User Interface for Discovery.

We definitely need more things like this on the web.

It makes me wish the Reading.am indicator were there without needing to click on it.

I wonder how this sort of activity might be built into social readers as well?
Annotated on March 28, 2021 at 10:13AM

If I’m in a meeting, I should be able to share a link in the chat to a particular post on my blog, then select the paragraph I’m talking about and have it highlighted for everyone. Well, now I can. 

And you could go a few feet farther if you added fragmention support (https://indieweb.org/fragmention) to the site; then the browser would also autoscroll to that part. Then you could add a confetti cannon to the system and have the page rain down confetti when more than three people have highlighted the same section!
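As a rough illustration (my own sketch, not a canonical implementation), fragmention support boils down to: read the text after the double hash, find it on the page, highlight it, and scroll to it.

```typescript
// If the URL looks like https://example.com/post##some+selected+text,
// find that text in the page, wrap it in <mark>, and scroll it into view.
function applyFragmention(): void {
  const match = location.href.match(/##(.+)$/);
  if (!match) return;
  const target = decodeURIComponent(match[1].replace(/\+/g, " "));

  // Walk the page's text nodes looking for the quoted text.
  const walker = document.createTreeWalker(document.body, NodeFilter.SHOW_TEXT);
  for (let node = walker.nextNode(); node; node = walker.nextNode()) {
    const index = node.textContent?.indexOf(target) ?? -1;
    if (index >= 0) {
      const range = document.createRange();
      range.setStart(node, index);
      range.setEnd(node, index + target.length);
      const mark = document.createElement("mark");
      range.surroundContents(mark);
      mark.scrollIntoView({ behavior: "smooth", block: "center" });
      return;
    }
  }
}

window.addEventListener("DOMContentLoaded", applyFragmention);
```

(This naive version only matches text that sits within a single text node; real implementations have to work harder than that.)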
Annotated on March 28, 2021 at 10:18AM

I want the patina of fingerprints, the quiet and comfortable background hum of a library. 

A great thing to want on a website! A tiny hint of phatic interaction amongst internet denizens.
Annotated on March 28, 2021 at 10:20AM

What I’d like more of is a social web that sits between these two extremes, something with a small town feel. So you can see people are around, and you can give directions and a friendly nod, but there’s no need to stop and chat, and it’s not in your face. It’s what I’ve talked about before as social peripheral vision (that post is about why it should be built into the OS). 

I love the idea of social peripheral vision online.
Annotated on March 28, 2021 at 10:22AM

streak: New posts for 52 consecutive weeks. 

It’s kind of cool that he’s got a streak counter for his posts.
Annotated on March 28, 2021 at 10:24AM

Read Transclusion and Transcopyright Dreams (maggieappleton.com)

In 1965 Ted Nelson imagined a system of interactive, extendable text where words would be freed from the constraints of paper documents. This hypertext would make documents linkable.

Twenty years later, Tim Berners-Lee took inspiration from Nelson's vision, as well as other narratives like Vannevar Bush's Memex, to create the World Wide Web. Hypertext came to life.

I love the layout and the fantastic live UI examples on this page.

There are a few missing pieces regarding the primacy of some of these ideas. The broader concept of the commonplace book predated Nelson and Bush by centuries and surely informed much (if not all) of their thinking. People already had these ideas in their heads or written down, and the links between them existed either in their minds or, to some extent, in indices of the sort found throughout the literature; John Locke’s indexing method, for example, was particularly popular and widely circulated.

The other piece I find missing is a more historical and anthropological one, which Western culture has wholly discounted until recently. There’s a pattern around the world of indigenous peoples in primarily oral cultures using mnemonic techniques going back at least 40,000 years. Many of these techniques were built into daily life in ways heretofore unimagined in modern Western culture, and they amount to a more deeply layered version of the transclusion imagined here. In some sense, these cultures transcluded almost all of their most important knowledge into their daily lives. The primary difference is that the information was stored visually and associatively in people’s minds rather than on paper (through literacy) or via computers. The best work I’ve seen on the subject is Lynne Kelly’s Knowledge and Power in Prehistoric Societies: Orality, Memory and the Transmission of Culture, which has its own profound thesis and is underpinned by a great deal of archaeological and anthropological primary research. Given its density, I recommend her short lecture Modern Memory, Ancient Methods, which does a reasonable job of scratching the surface of these ideas.

Another fantastic historical precursor of these ideas can be found in ancient Jewish writings like the Mishnah, which is often presented as an original, more ancient text surrounded by annotated interpretations, which are in turn surrounded by further re-interpretations on the same page. Remi Kalir and Antero Garcia have a good discussion of this in their book Annotation (MIT Press, 2019).

[Image: a super-annotated page of Torah, with the Mishnah in the center surrounded by various layers of commentary in succeeding blocks around it. From chapter 3 of Annotation (MIT Press, 2019) by R. Kalir and A. Garcia.]

It would create a more layered and nuanced form of hypertext – something we’re exploring in the Digital Gardening movement. We could build accumulative, conversational exchanges with people on the level of the word, sentence, and paragraph, not the entire document. Authors could fix typos, write revisions, and push version updates that propagate across the web the same way we do with software. 

The Webmention spec allows for re-sending notifications, which triggers subsequent re-parsing and updating of content. This could serve as a signal to any pages linking to the content that it has been updated, allowing any transcluding pages to update their copies if they wished.
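As a sketch of the mechanics (simplified to Link-header endpoint discovery; a conforming client also checks the HTML for rel="webmention"), re-sending a Webmention is just POSTing the same source and target again, which tells the receiver to re-fetch and re-parse the source:

```typescript
// Send (or re-send) a Webmention per https://www.w3.org/TR/webmention/.
async function sendWebmention(source: string, target: string): Promise<void> {
  // Discover the target's Webmention endpoint from its Link header.
  const response = await fetch(target, { method: "HEAD" });
  const link = response.headers.get("link") ?? "";
  const match = link.match(/<([^>]+)>;\s*rel="?[^"]*\bwebmention\b/);
  if (!match) return;
  const endpoint = new URL(match[1], target).toString();

  // Notify the target that `source` links to it. Re-sending after an edit
  // prompts the receiver to re-fetch the source and update its copy.
  await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({ source, target }),
  });
}

// Hypothetical URLs for illustration.
sendWebmention(
  "https://example.com/my-updated-post",
  "https://example.org/original-article"
).catch(console.error);
```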

Annotated on February 09, 2021 at 02:38PM

In this idealised utopia we obviously want to place value on sharing and curation as well as original creation, which means giving a small fraction of the payment to the re-publisher as well.

We should note monetisation of all this content is optional. Some websites would allow their content to be transcluded for free, while others might charge hefty fees for a few sentences. If all goes well, we’d expect the majority of content on the web to be either free or priced at reasonable micro-amounts. 

While this is nice in theory, there’s a long road strewn with attempts at micropayments on the web. I see new ones every six months or so. (Here’s a recent one: https://www.youtube.com/playlist?list=PLqrvNoDE35lFDUv2enkaEKuo6ATBj9GmL)

This also dramatically misses how copyright and intellectual property law work in many countries with regard to fair use doctrine. For short quotes and excerpts, almost anyone anywhere can already do this for free. It’s definitely nice and proper to credit the original, but as a society we already have norms for how to do this.

Annotated on February 09, 2021 at 02:46PM

Transclusion would make this whole scenario quite different. Let’s imagine this again… 

Many in the IndieWeb have already prototyped this using some open web standards. It’s embodied in the idea of media fragments and fragmentions, a portmanteau of the words fragment and mention.

A great example can be found at https://www.kartikprabhu.com/articles/marginalia
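For anyone unfamiliar, a temporal media fragment is just #t=start,end appended to a URL (e.g. page.html#t=90,120), per the W3C Media Fragments URI spec. Browsers already honor #t= natively when it appears on the media file’s own URL; the sketch below (my own illustration, not any particular site’s code) applies a fragment from the page’s URL to an embedded HTML5 player instead:

```typescript
// Seek an HTML5 player to the #t=start[,end] fragment in the page URL.
function applyTemporalFragment(media: HTMLMediaElement): void {
  const match = location.hash.match(/t=([\d.]+)(?:,([\d.]+))?/);
  if (!match) return;

  const start = parseFloat(match[1]);
  const end = match[2] ? parseFloat(match[2]) : undefined;

  media.currentTime = start;
  if (end !== undefined) {
    // Stop playback when the end of the fragment is reached.
    media.addEventListener("timeupdate", () => {
      if (media.currentTime >= end) media.pause();
    });
  }
  void media.play();
}

const player = document.querySelector<HTMLVideoElement>("video");
if (player) applyTemporalFragment(player);
```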

This reminds me that I need to kick my own server to fix the functionality on my main site and potentially add it to a few others.

Annotated on February 09, 2021 at 02:59PM

We can easily imagine transclusions going the way of the public comments section. 

There are definitely ways around this, particularly if it is done by the site owner instead of enabled by a third party system like News Genius or Hypothes.is.

Examples of this in the wild can be found at https://indieweb.org/annotation#Annotation_Sites_Enable_Abuse.

Annotated on February 09, 2021 at 03:04PM

🔖 Timelinely

Bookmarked Timelinely (Timelinely)

Create interactive video stories on Timelinely. Timelinely empowers people to go beyond just video.

Highlight interesting parts of a video on a timeline with interactive comments, pictures, links, maps, other videos, and more.

This tool reminds me of a somewhat more commercialized version of Jon Udell’s Clipping tools for HTML5 audio, HTML5 video, and YouTube. I wonder if this is the sort of UI that Hypothes.is might borrow? I can definitely see it being useful functionality in the classroom.  

👓 Open web annotation of audio and video | Jon Udell

Read Open web annotation of audio and video by Jon Udell (Jon Udell)
Text, as the Hypothesis annotation client understands it, is HTML, or PDF transformed to HTML. In either case, it’s what you read in a browser, and what you select when you make an annotation. What’s the equivalent for audio and video?

It’s complicated because although browsers enable us to select passages of text, the standard media players built into browsers don’t enable us to select segments of audio and video. It’s trivial to isolate a quote in a written document. Click to set your cursor to the beginning, then sweep to the end. Now annotation can happen. The browser fires a selection event; the annotation client springs into action; the user attaches stuff to the selection; the annotation server saves that stuff; the annotation client later recalls it and anchors it to the selection.

But selection in audio and video isn’t like selection in text. Nor is it like selection in images, which we easily and naturally crop. Selection of audio and video happens in the temporal domain. If you’ve ever edited audio or video you’ll appreciate what that means. Setting a cursor and sweeping a selection isn’t enough. You can’t know that you got the right intro and outro by looking at the selection. You have to play the selection to make sure it captures what you intended. And since it probably isn’t exactly right, you’ll need to make adjustments that you’ll then want to check, ideally without replaying the whole clip.
Jon Udell has been playing around with media fragments to create some new functionality in Hypothes.is. The nice part is that he’s created an awesome little web service for quickly and easily editing media fragments online for audio and video (including YouTube videos), which he’s also open sourced on GitHub.

I suspect that media fragments experimenters like Aaron Parecki, Marty McGuire, Kevin Marks, and Tantek Çelik will appreciate what he’s doing and will want to play as well as possibly extend it. I’ve already added some of the outline to the IndieWeb wiki page for media fragments (and a link to fragmentions), which has some of their prior work.

I too look forward to a day when web browsers have some of this standardized and built in as core functionality.

Highlights, Quotes, & Marginalia

Open web annotation of audio and video

This selection tool has nothing intrinsically to do with annotation. Its job is to make your job easier when you are constructing a link to an audio or video segment.

I’m reminded of a JavaScript tool written by Aaron Parecki that automatically adds a start fragment to the URL of his page when the audio on the page is paused. He’s documented it here: https://indieweb.org/media_fragment
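My rough reconstruction of the idea (not Aaron’s actual code): on the audio element’s pause event, stamp the current playback time into the URL as a #t= fragment, so that copying the address bar yields a link which resumes at that spot.

```typescript
// When the listener pauses, record the position in the URL fragment.
const audio = document.querySelector<HTMLAudioElement>("audio");
if (audio) {
  audio.addEventListener("pause", () => {
    const seconds = Math.floor(audio.currentTime);
    history.replaceState(null, "", `#t=${seconds}`);
  });
}
```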


(If I were Virginia Eubanks I might want to capture the pull quote myself, and display it on my book page for visitors who aren’t seeing it through the Hypothesis lens.)

Of course, how would she know that the annotation exists? Here’s another example of where adding webmentions to Hypothesis for notifications could be useful, particularly when they’re more widely supported. I’ve outlined some of the details here in the past: http://boffosocko.com/2016/04/07/webmentions-for-improving-annotation-and-preventing-bullying-on-the-web/