A reflection on annotations and context at OLC Innovate & Liquid Margins

It’s fascinating to look back on Langston Hughes’ “Theme for English B,” which a group of us annotated this morning while exploring Hypothes.is as part of OLC Innovate and Liquid Margins.

Reviewing some notes, I’m glad I took a moment to annotate the context in which I made my annotations, which are very meta with respect to that context. Others’ annotations were obviously made from the context of educators, looking back at Hughes’ work from an earlier time.

I’ve just gone back and not only re-read the poem, but read through and responded to some of the other annotations asynchronously. The majority of today’s annotations were made synchronously during the session. Others reading and interpreting them might be helped by knowing which were made synchronously and which asynchronously, and from which contexts people were meeting the text. There were also many annotations from prior dates that weren’t part of today’s cohort. It would be interesting if the Hypothes.is UI had a better means of indicating the time periods in which annotations were made.
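For what it’s worth, those timestamps are available outside the UI. Here’s a minimal sketch (my own aside, not anything shown during the session) that pulls the public annotations on the poem through the Hypothes.is search API and buckets them by day, which would let today’s synchronous cohort stand out from earlier, asynchronous layers; the endpoint and field names are assumptions based on the documented API:

```python
# Rough sketch: group public Hypothesis annotations on the poem by creation date.
# Assumes the documented public search endpoint and its "rows"/"created" fields.
import json
from collections import Counter
from urllib.parse import urlencode
from urllib.request import urlopen

POEM_URL = "https://www.poetryfoundation.org/poems/47880/theme-for-english-b"
API = "https://api.hypothes.is/api/search"

with urlopen(f"{API}?{urlencode({'uri': POEM_URL, 'limit': 200})}") as resp:
    rows = json.load(resp).get("rows", [])

# Bucket by day (the "created" field is an ISO 8601 timestamp).
by_day = Counter(row["created"][:10] for row in rows)
for day, count in sorted(by_day.items()):
    print(day, count)
```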

Is anyone studying these contextual aspects of digital annotation? I’ve come across some scholarship on commonplace books that attempts to contextualize notes within their historical time periods, but most of those attempts don’t have the fidelity of the date and timestamps that Hypothes.is provides. In fact, many of them have no dates at all beyond an estimate of plus or minus a decade or two, which tends to cause some context collapse.

Crowdlaaers may provide some structure for studying these sorts of phenomena: https://crowdlaaers.org?url=https://www.poetryfoundation.org/poems/47880/theme-for-english-b. It provides some time-based tools for viewing annotations to help provide context. Looking at its data, I’m particularly struck by how few people today took advantage of the ability to use taxonomies.

As always, it was fun to see and hear about some of the uses of annotation with Hypothes.is in the wild. Thanks again to Nate Angell, Remi Kalir, Jeremy Dean, and all of the other panelists and participants who spoke so well about how they’re using this tool.

Replied to a tweet by Hungry Bread Elevator (Twitter)
Some of the off-label uses of Hypothes.is have been enumerated lately, including some I’ve mentioned.

I’ve tinkered a bit with CROWDLAAERS, but it’s always seemed geared toward a fairly niche audience, including teachers potentially using it for grading. Perhaps I’m missing some of its flexibility? Remi Kalir might be able to help elucidate it or indicate whether he’s noticed anyone using it off-label.

I might find it more useful if one could analyze site-wide annotations across a domain with a wildcard search of this sort: https://tomcritchlow.com/*.
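If memory serves, the Hypothes.is search API lists a wildcard_uri parameter (flagged as experimental, so it may be restricted or change), which would be the obvious building block for that kind of domain-wide view. A hedged sketch of what I have in mind:

```python
# Hedged sketch: domain-wide annotation search via the Hypothesis API's
# wildcard_uri parameter. That parameter is documented as experimental,
# so treat this interface as an assumption rather than a guarantee.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://api.hypothes.is/api/search"
params = urlencode({"wildcard_uri": "https://tomcritchlow.com/*", "limit": 200})

with urlopen(f"{API}?{params}") as resp:
    data = json.load(resp)

print(data.get("total", 0), "public annotations found")
for row in data.get("rows", []):
    print(row["uri"], "-", row["user"])
```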

I have to imagine that it would be cool to see all the annotations and conversations across something like the New York Times with a data visualization tool like this.

Jon Udell and gang are aware of Webmention, but haven’t pulled the trigger (yet) on building it in. I’ve outlined some methods for making their platform a bit more IndieWeb friendly by adding markup and some additional HTML so that people can force the system to send webmentions. I do frequently use Jon’s facet tool to check highlighting and annotation activity on my website.

I have found Crowdlaaers useful several times when I’m aware that some pages are annotated, but the annotations are either not public or belong to groups of which I’m not a member. One example is this page on my website, which has one annotation I can’t see in Hypothes.is, but can via Crowdlaaers. Another is viewing annotations on sites that have subsequently blocked Hypothes.is, like this example. Of course, sometimes you’ll do this and find odd bugs floating around in the system.

Read about CROWDLAAERS (crowdlaaers.org)
Welcome to the “Crowd Layers” dashboard, a public service for ​Capturing and Reporting Open Web Data for Learning Analytics, Annotation, and Education Researchers ​(CROWDLAAERS). This real-time dashboard visualizes group – or ​crowd​ – discourse ​layers​ added via Hypothesis open web annotation to online documents. Developed by researchers at the University of Colorado Denver, CROWDLAAERS visualizes social learning analytics associated with open and collaborative web annotation. The dashboard has been iteratively designed in partnership with educators and researchers to support the social life of reading and the social life of documents across the open and annotated web.