Read Eliminating the Human by David Byrne (MIT Technology Review)
We are beset by—and immersed in—apps and devices that are quietly reducing the amount of meaningful interaction we have with each other.
This piece makes a fascinating point about people and interactions. It’s the sort of thing that many in the design and IndieWeb communities should read and think about as they work.

I came to it via an episode of the podcast The Happiness Lab.

The consumer technology I am talking about doesn’t claim or acknowledge that eliminating the need to deal with humans directly is its primary goal, but it is the outcome in a surprising number of cases. I’m sort of thinking it may actually be the primary goal, even if it wasn’t consciously aimed at.

Annotated on January 22, 2020 at 10:35AM

Most of the tech news we get barraged with is about algorithms, AI, robots, and self-driving cars, all of which fit this pattern. I am not saying that such developments are not efficient and convenient; this is not a judgment. I am simply noticing a pattern and wondering if, in recognizing that pattern, we might realize that it is only one trajectory of many. There are other possible roads we could be going down, and the one we’re on is not inevitable or the only one; it has been (possibly unconsciously) chosen.

Annotated on January 22, 2020 at 10:36AM

What I’m seeing here is the consistent “eliminating the human” pattern.

This seems as apt a name as any.
Annotated on January 22, 2020 at 10:39AM

“Social” media: This is social interaction that isn’t really social. While Facebook and others frequently claim to offer connection, and do offer the appearance of it, the fact is a lot of social media is a simulation of real connection.

Perhaps this is one of the things I like most about the older blogosphere and its more recent renaissance with the IndieWeb idea of Webmentions, a W3C recommendation spec for online interactions? While many of the interactions I get are small nods in the vein of likes, favorites, or reposts, some of them are longer, more visceral interactions.

My favorite just this past week was a piece I’d worked on for a few days; it elicited a short burst of excitement from someone who, just a few minutes later, wrote a reply that was almost as long as my piece itself.

To me this was completely worth the effort and the work, not because of the many other smaller interactions, but because of the human interaction that resulted. Not to mention that I’m still thinking out a reply several days later.

This sort of human social interaction also seems to be at the heart of what Manton Reece is doing with micro.blog. By leaving out things like reposts and traditional “likes”, he’s really creating a human connection network to fix what traditional corporate social media silos have done to us. This past week’s episode of Micro Monday underlines this for us. (#)
Annotated on January 22, 2020 at 10:52AM

Antonio Damasio, a neuroscientist at USC wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. ­Damasio concluded that although we think decision-­making is rational and machinelike, it’s our emotions that enable us to actually decide.

Annotated on January 22, 2020 at 10:56AM

And in the meantime, if less human interaction enables us to forget how to cooperate, then we lose our advantage.

It may seem odd, but I think a lot of the success of the IndieWeb movement and community is exactly this: a group of people has come together to work and interact and increase our abilities to cooperate to make something much bigger, more diverse, and more interesting than any of us could have done separately.
Annotated on January 22, 2020 at 10:58AM

Remove humans from the equation, and we are less complete as people and as a society.

Annotated on January 22, 2020 at 10:59AM

A version of this piece originally appeared on his website, davidbyrne.com.

This piece is so philosophical that it feels oddly trivial that, seeing this note here, I can’t help but think about POSSE and syndication.
Annotated on January 22, 2020 at 11:01AM

Cleaning up feeds, easier social following, and feed readers

I’ve been doing a bit of cleanup in my feed reader(s)–cleaning out dead feeds, fixing broken ones, etc. I thought I’d take a quick peek at some of the feeds I’m pushing out as well. I remember doing some serious updates on the feeds my site advertises three years ago this week, but it’s been a while since I’ve revisited it. While every post kind/type, category, and tag on my site has a feed (often found by simply adding /feed/ to the end of those URLs), I’ve made a few custom feeds for aggregated content.

However, knowing that feeds are broadly available from my site isn’t always obvious, nor is it the same as being able to use them easily–one might think of it as a (technical) accessibility problem. I thought I’d make a few tweaks to smooth out that user interface and hopefully provide a better user experience–especially since I’m publishing everything from my website first rather than in 30 different places online (which is a whole other UI problem for those wishing to follow me and my content).

Since most pages on my site have a “Follow Me” button (courtesy of SubToMe), I just needed a list of generally useful feeds to provide it. While SubToMe has some instructions for suggesting lists of feeds, I’ve never gotten that to work the way I expected (or feed readers didn’t respect it, I’m not sure which). But since most feed readers have feed discovery built in as a feature, I thought I’d leverage that instead. So I threw into the <head> of my website a dozen or so links to some of the most typical feeds people may be interested in from my site. Now you can click on the follow button, choose your favorite feed reader, and your reader should present you with a list of feeds to which you might want to subscribe. These broadly include the full feed, a comments feed, feeds for all the individual kinds (bookmarks, likes, favorites, replies, listens, etc.), and, potentially more useful, a “microblog feed” of all my status-related updates and a “linkblog feed” of all my link-related updates (generally favorites, likes, reads, and bookmarks).
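For anyone curious what this looks like under the hood, here is a rough, purely illustrative sketch of the sort of feed discovery links I’m describing; the URLs and titles are hypothetical placeholders rather than my actual feed locations, and the exact paths will vary with a site’s permalink setup.

<!-- Hypothetical feed discovery links placed in a site's <head>; URLs are placeholders -->
<link rel="alternate" type="application/rss+xml" title="Full feed" href="https://example.com/feed/" />
<link rel="alternate" type="application/rss+xml" title="Comments feed" href="https://example.com/comments/feed/" />
<link rel="alternate" type="application/rss+xml" title="Microblog feed (status updates)" href="https://example.com/kind/note/feed/" />
<link rel="alternate" type="application/rss+xml" title="Linkblog feed (bookmarks, likes, reads)" href="https://example.com/kind/bookmark/feed/" />
<link rel="alternate" type="application/rss+xml" title="Articles only" href="https://example.com/kind/article/feed/" />

Any feed reader that does discovery should surface these automatically when pointed at the site’s home page.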

Some of these sub-feeds may be useful in feed readers which don’t yet let you choose within the reader what you’d like to see. I suspect that in the future social readers will let you subscribe to my primary firehose or comments feeds, which are putting out about 85 and 125 posts a week right now, and then, within their interface, choose individual types by means of filters to more quickly see what I’ve been bookmarking, reading, listening to, or watching. Then if you want to curl up with some longer reads, filter by articles; or if you just want some quick hits, filter by notes. And naturally you’ll be able to do this sort of filtering across your network too. I also suspect some of them will build in velocity filters and friend-proximity filters so that you’ll be able to see material highlighted from people who don’t post as often, or to see people’s content based on your personal rankings or categories (math friends, knitting circle, family, reading group, IndieWeb community, book club, etc.). I’ve recently been enjoying Kicks Condor’s FraidyCat reader, which touches on some of this work, though it’s less a full-featured feed reader than a filter/reader dashboard sort of product.

Perhaps sometime in the future I’ll write a bit of code so that each individual page on my site that you visit will provide feeds in the header for all the particular categories, tags, and post kinds that appear on that page? That might make a clever and simple little plugin, though honestly that’s the sort of code I would expect CMSes like WordPress to provide out of the box. Of course, perhaps broader adoption of microformats and clever readers will obviate the need for all these bits?
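For the curious, a first pass at such a plugin might look something like the rough, untested sketch below; it hooks WordPress’s wp_head action and prints feed links for the terms relevant to the current page. How post kinds are handled (and the exact feed URLs) would depend on a given site’s setup.

<?php
/**
 * Plugin Name: Per-Page Feed Discovery (rough sketch, untested)
 * Description: Advertise feeds for the categories, tags, and term archives
 * relevant to the current page by printing <link> elements into the <head>.
 */
add_action( 'wp_head', function () {
	// Small helper to print a single feed discovery link.
	$feed_link = function ( $title, $url ) {
		printf(
			'<link rel="alternate" type="application/rss+xml" title="%s" href="%s" />' . "\n",
			esc_attr( $title ),
			esc_url( $url )
		);
	};

	if ( is_singular() ) {
		// On a single post or page, advertise a feed for each attached category and tag.
		foreach ( (array) get_the_category() as $category ) {
			$feed_link( $category->name . ' feed', get_category_feed_link( $category->term_id ) );
		}
		$tags = get_the_tags();
		if ( $tags ) {
			foreach ( $tags as $tag ) {
				$feed_link( $tag->name . ' feed', get_tag_feed_link( $tag->term_id ) );
			}
		}
	} elseif ( is_category() || is_tag() || is_tax() ) {
		// On a category, tag, or other term archive (e.g. a post kind), advertise that term's feed.
		$term = get_queried_object();
		if ( $term instanceof WP_Term ) {
			$feed_link( $term->name . ' feed', get_term_feed_link( $term->term_id, $term->taxonomy ) );
		}
	}
} );

It would need polish, but the general approach really is just a handful of template tags.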

 

Listened to John Stewart by Terry Greene from Gettin' Air The Open Pedagogy Podcast | voicEd

In this episode Terry Greene chats with @JohnStewartPhD, Assistant Director for the Office of Digital Learning at the University of Oklahoma. The main topic of discussion is the wonderfully successful Domain of One’s Own project, OU Create, which has produced thousands of openly shared web sites and blogs from students and faculty across the University.

Cover art for Gettin' Air

We definitely need another hour or two of this interview with John. I like the idea behind some of the highlighting work they’re doing with OU Create and their weekly updates. We need more of this in the Domains space. I wonder if they’ve experimented with a Homebrew Website Club sort of experience in their Domains practice?

Terry has definitely mentioned show notes with links, but I’m beginning to wonder if I should be following a different feed, because I haven’t been seeing any of the great links I was hoping for from these recent episodes.

Replied to LA Roadshow Recap by Jim Groom (bavatuesdays)

10 days ago I was sitting in a room in Los Angeles with 12 other folks listening to Marie Selvanadin, Sundi Richard, and Adam Croom talk about work they’re doing with Domains, and it was good! That session was followed by Peter Sentz providing insight on how BYU Domains provides and supports top-level domains and hosting for over 10,000 users on their campus. And first thing that Friday morning Lauren and I kicked the day off by highlighting Tim Clarke’s awesome work with the Berg Builds community directory as well as Coventry Domains’s full-blown frame for a curriculum around Domains with Coventry Learn. In fact, the first 3 hours of Day 2 were a powerful reminder of just how much amazing work is happening at the various schools that are providing the good old world wide web as platform to their academic communities.

https://roadshow.reclaimhosting.com/LA/

I’m still bummed I couldn’t make it to this event…

One of the questions that came up during the SPLOT workshop is if there’s a SPLOT for podcasting, which reminded me of this post Adam Croom wrote a while back about his podcasting workflow: “My Podcasting Workflow with Amazon S3.” We’re always on the look-out for new SPLOTs to bring to the Reclaim masses, and it would be cool to have an example that moves beyond WordPress just to make the point a SPLOT is not limited to WordPress (as much as we love it)—so maybe Adam and I can get the band back together.

I just outlined a tiny and relatively minimal/free way to host and create a podcast feed last night: https://boffosocko.com/2019/12/17/55761877/

I wonder if this could be used to create a SPLOT that isn’t WordPress-based, potentially using APIs from the Internet Archive and Huffduffer? WordPress-based infrastructure could certainly be used to create it, and aggregation could be done around tags. It looks like the Huffduffer username SPLOT is available.
–annotated December 17, 2019 at 10:46AM

Read Mothering Digital by Nate Angell (xolotl.org)
Today folks are gathered at the Computer History Museum in Mountain View, California to celebrate the 50th anniversary of the Mother of All Demos (“MOAD”), a notorious event held in 1968 in San Francisco’s Civic Auditorium, where SRI’s Douglas Engelbart and others demonstrated computer systems they were developing and which many folks point to as one of the most important events to presage and shape our digital technology environment today.

👓 The Web Falls Apart | Baldur Bjarnason

Read The Web Falls Apart by Baldur Bjarnason (Baldur Bjarnason)
The web's circle has expanded to contain the entire world. But the centre is not holding.
I get where Baldur is coming from and I’m watching the area relatively closely, but I’m just not seeing the thesis from my perspective.

👓 LO, The Internet Turned 50 Today | Interdependent Thoughts

Read LO, The Internet Turned 50 Today by Ton Zijlstra (zylstra.org)
The first message was sent from one computer to another over ARPANET on October 29th at 22:30. ‘LO’ for Login, but then the computer crashed as Charley S Kline typed the G. Famous first words. Leonard Kleinrock describes the events that led to that first internet message in a blogpost. I was bor...
Note to self: get a picture of the logbook and release it with a more permissive CC attribution.

📺 EDUCE: Imaging the Herculaneum Scrolls | YouTube

Watched Imaging the Herculaneum Scrolls from YouTube
The eruption of Mt. Vesuvius covered the city of Herculaneum in twenty meters of lava, simultaneously destroying the Herculaneum scrolls through carbonization and preserving the scrolls by protecting them from the elements. Unwrapping the scrolls would damage them, but researchers are anxious to read the texts. Researchers from the University of Kentucky collaborated with the Institut de France and SkyScan to digitally unwrap and preserve the scrolls. To learn more about the EDUCE project, go to http://cs.uky.edu/dri.
They haven’t finished the last mile, but having high resolution scans of the objects is great. I’m not sure why they’re handling these items manually when they could very likely be secured in better external casings and still imaged the same way.

🎧 Triangulation 413 David Weinberger: Everyday Chaos | TWiT.TV

Listened to Triangulation 413 David Weinberger: Everyday Chaos from TWiT.tv

Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we've allowed ourselves to see and how we're getting acculturated to these machines based on chaos.

Interesting discussion of systems with built-in openness or flexibility as a feature. They highlight Slack, which has a core product but allows individual users and companies to add custom pieces to it to use in the way they want. This provides a tremendous amount of additional value that Slack would never have known about or been able to build otherwise. These sorts of products or platforms have the ability not only to create their inherent links, but to add value by flexibly creating additional links outside of themselves or letting external pieces create links to them.

Twitter started out like this in some sense, but ultimately closed itself off–likely to its own detriment.

Social Reading User Interface for Discovery

I read quite a bit of material online. I save “bookmarks” of all of it on my personal website, sometimes with some additional notes and sometimes even with more explicit annotations. One of the things I feel like I’m missing from my browser, browser extensions, and/or social feed reader is a social layer overlay that could indicate that people in my social network(s) have read or interacted directly with that page (presuming they make that data openly available.)

One of the things I’d love to see pop up out of the discovery explorations of the IndieWeb or some of the social readers in the space is the ability to uncover some of this social reading information. Toward this end I thought I’d collect some user interface examples of things that border on this sort of data to make the brainstorming and building of such functionality easier in the near future.

If I’m missing useful examples or you’d like to add additional thoughts, please feel free to comment below.

Examples of social reading user interface for discovery

Google

I don’t often search for reading material directly, but Google has a related bit of UI indicating that I’ve visited a website before. I sort of wish it had the ability to surface the fact that I’ve previously read or bookmarked an article, or to provide data about people in my social network who’ve done similarly, within the browser interface for a particular article (without the search). If a browser could use data from my personal website in the background to indicate that I’ve interacted with it before (and provide those links, notes, etc.), that would be awesome!

Screen capture for Google search of Kevin Marks with a highlight indicating that I've visited this page in the recent past
Screen capture for Google search of Kevin Marks with a highlight indicating that I’ve visited his page several times in the past. Given the March 2017 date, it’s obvious that the screen shot is from a browser and account I don’t use often.

I’ll note here that because of the way I bookmark or post reads on my own website, my site often ranks reasonably well for those things.

On a search for an article by Aaron Parecki, my own post indicating that I’ve read it in the past ranks second right under the original.

In some cases, others who are posting about those things (reading, commenting, bookmarking, liking, etc.) in my social network also show up in these sorts of searches. How cool would it be to have a social reader that could display this sort of social data based on the people it knows I’m following?

A search for a great article by Matthias Ott shows that both I and several of my friends (indicated by red arrows superimposed on the search query) have read, bookmarked, or commented on it too.

Hypothes.is

Hypothes.is is a great open source highlighting, annotation, and bookmarking tool with a browser extension that shows an indicator of how many annotations appear on the page. In my experience, higher numbers often indicate some interesting and engaging material. I do wish that it had a follower/following model that could indicate when someone in my social sphere has annotated a page. I also wouldn’t mind if their extension “bug” in the browser bar had another indicator in the other corner to show that I had previously annotated a page!

Screen capture of Vannevar Bush’s article As We May Think in The Atlantic with a Hypothes.is browser extension bug indicating that there are 329 annotations on the page.

Reading.am

It doesn’t do it until after the fact, but Reading.am has a pop-up overlay through its browser extension. It adds me to the list of people who’ve read an article, but it also indicates others in the network and those I’m following who have also read it (sometimes along with annotations about their thoughts).

What I wouldn’t give to see that pop up in the corner before I’ve read it!

Reading.am’s social layer creates a yellow-colored pop-up list in the upper right of the browser indicating who else has read the article as well as showing some of their notes on it. Unfortunately it doesn’t pop up until after you’ve marked the item as read.

Nuzzel

Nuzzel is one of my favorite tools. I input my Twitter account as well as some custom lists and it surfaces articles that people in my Twitter network have been tweeting about. As a result, it’s one of the best discovery tools out there for solid longer form content. Rarely do I read content coming out of Nuzzel and feel robbed. Because of how it works, it’s automatically showing those people in my network and some of what they’ve thought about it. I love this contextualization.

Nuzzel’s interface shows the title and an excerpt of an article and also includes the avatars, names, network, and commentary of one’s friends that interacted with the piece. In this example it’s relatively obvious that one reader influenced several others who retweeted it because of her.

Goodreads

Naturally, sites for much longer form content will use social network data about interest, reviews, and interaction to a much greater extent, since there is a larger investment of time involved; social signaling can thus be more valuable in this context. A great example here is Goodreads, which shows me those in my network who are interested in reading a particular book or who have written reviews or given ratings.

A slightly excerpted/modified screen capture of the Goodreads page for Melanie Mitchell’s book Complexity that indicates several in my social network are also interested in reading it.

Are there other examples I’m missing? Are you aware of similar discovery related tools for reading that leverage social network data?

Watched How the medium shapes the message by Cesar Hidalgo from TEDxYouth@BeaconStreet | YouTube

How communication technologies shape our collective memory.

César A. Hidalgo is an assistant professor at the MIT Media Lab. Hidalgo’s work focuses on improving the understanding of systems by using and developing concepts of complexity, evolution, and network science; his goal is to help improve understanding of the evolution of prosperity in order to help develop industrial policies that can help countries raise the living standards of their citizens. His areas of application include economic development, systems biology, and social systems. Hidalgo is also a graphic-art enthusiast and has published and exhibited artwork that uses data collected originally for scientific purposes.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

Another Hypothes.is test. This time let’s take a via.hypothes.is-based link (which seems to be the only way to do this) and shove it all into an iframe! What will be orphaned? What will be native? Will annotating the iframed version push the annotations back to the original, will they show up as orphaned, or will they show up on the parent page of the iframe, or all of the above?
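For reference, the embed being tested is roughly of this form; the target URL below is a hypothetical placeholder, not the page actually used in the test.

<!-- Hypothetical example of the embed under test: the target page is loaded
     through the via.hypothes.is proxy and placed inside an iframe -->
<iframe src="https://via.hypothes.is/https://example.com/some-annotated-post/"
        style="width: 100%; height: 600px; border: 0;"></iframe>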

[Embedded iframe: the via.hypothes.is view of the test page]

I also wonder if we could use fragments to target specific portions of pages like this for blockquoting/highlighting and still manage to get the full frame and Hypothes.is interface? Let’s give that a go too shall we? Would it be apropos to do a fragment quote from Fragmentions for Better Highlighting and Direct References on the Web?

[Embedded iframe: the via.hypothes.is view targeting a fragmention of the quoted passage]

Shazam!! That worked rather well, didn’t it? And we can customize the size of the iframe container to catch all of the quote rather well, on desktop at least. Sadly, most people’s sites don’t support fragmentions or have fragmentioner code running. It also looks like our fragment is causing my main page to scroll down to the portion of the highlighted text in the iframe. Wonder how to get around that bit of silliness?

And now our test is done.

Domains, power, the commons, credit, SEO, and some code implications

How to provide better credit on the web using the standard rel=“canonical” by looking at an example from the Open Learner Patchbook

A couple of weeks back, I began following Cassie Nooyen after noticing her at the Domains 2019 conference, which I followed fairly closely online.

She was a presenter and wrote a couple of nice follow-up pieces about her experiences on her website. I bookmarked one of them to read later, and then two days later I came across this tweet by Terry Greene, who had also apparently noticed her post:

But I was surprised to see that the link in the tweet points to a copy of the post on the Open Learner Patchbook, which is an interesting site in and of itself.

This means that there are now at least two full copies of Cassie’s post online:

While I didn’t see a Creative Commons notice on Cassie’s original, or any mention of permissions, or even a link to the original source on the copy at the Open Patchbook, I don’t doubt that Terry asked Cassie for permission to post a copy of her work on his site. I also suspect that Cassie might not have wanted any attention drawn to herself or her post on her own site and may have eschewed a link to it. I will note that the Open Patchbook did have a link to her Twitter presence as a means of credit. (I’ll still maintain that people should prefer links to their own domain over Twitter for credits like these–take back your power!)

Even with these crediting caveats aside, there’s a subtle technical piece hiding here relating to search engines and search engine optimization that many in the Domain of One’s Own space may not realize exists, or may not be sure how to fix if they do. The subtlety is that search engines attempt to assign proper credit too. As a result there’s a very high chance that the Open Patchbook copy could rank higher in a search for Cassie’s own post than her original does. As researchers and educators we’d obviously vastly prefer the original to get the credit. So what’s going on here?

Search engines use a web standard known as rel=“canonical”, a link relation which is most often found in the HTML <head> of a web page. If we view the current source of the copy on the Open Learner Patchbook, we’ll see the following:

<link rel="canonical" href="http://openlearnerpatchbook.org/technology/patch-twenty-five-my-domain-my-place-to-grow/" />

According to the Microformats wiki:

By adding rel=“canonical” to a hyperlink, a page indicates that the destination of that hyperlink should be considered the preferred or definitive version of the current page. This helps search engines avoid duplicate content, and is useful for deciding how to link to a page when citing it.

In the case of our example of Cassie’s post, search engines will treat the two pages as completely separate, but will suspect that one is a duplicate of the other. This could have dramatic consequences for one site or the other: search engines will choose one to prefer, and, in some cases, they may penalize a site for having duplicate content without declaring that fact (in its metadata). Typically the more drastic and adverse consequences would fall on Cassie’s original rather than on an institutional site.

How do we fix the injustice of this metadata? 

There are a variety of ways, but I’ll focus on several in the WordPress space. 

WordPress core has built-in functionality that sets the permalink of each post or page as its own canonical URL; this is why the Open Patchbook page declares itself, rather than Cassie’s original, as the canonical version. Since most people are likely to already have an SEO-related plugin installed on their site, and almost all of them have this capability, this is likely the quickest and easiest method for changing canonical links for pages and posts. Two popular choices are Yoast and All in One SEO, which have simple settings for inputting and saving alternate canonical URLs. Yoast documents the steps pretty well, so I’ll provide an example using All in One SEO:

  • If not done already, click the checkbox for canonical URLs in the “General Settings” section for the plugin generally found at /wp-admin/admin.php?page=all-in-one-seo-pack%2Faioseop_class.php.
  • For the post (or page) in question, within the All in One SEO metabox in the admin interface (pictured), put the full URL of the original post’s location.
  • (Re-)publish the post.

Screenshot of the AIOSEO metabox with the field for the Canonical URL outlined in red

If you’re using another SEO plugin, it likely handles canonical URLs similarly, so check their documentation.
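And if you’d rather not run an SEO plugin at all, WordPress core exposes a filter on the canonical URL it prints for singular pages, so a few lines in a small plugin (or a theme’s functions.php) can do the job. The sketch below is untested, and the original_url meta key is purely my own invention for illustration.

<?php
// Rough sketch, untested: if a post has an 'original_url' custom field
// (a hypothetical meta key), use its value as the canonical URL instead
// of the post's own permalink.
add_filter( 'get_canonical_url', function ( $canonical_url, $post ) {
	$original = get_post_meta( $post->ID, 'original_url', true );
	return $original ? $original : $canonical_url;
}, 10, 2 );

Because this leans on WordPress’s own rel_canonical() output, it only applies where core would have printed a canonical link in the first place.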

For aggregation websites, like the Open Learner Patchbook, there’s also another solid option for not only setting the canonical URL, but for more quickly copying the original post as well. In these cases I love PressForward, a WordPress plugin from the Roy Rosenzweig Center for History and New Media which was designed with the education space in mind. The plugin allows one to quickly gather, organize, and republish content from other places on the web. It does so in a smart and ethical way and provides ample opportunity for providing appropriate citations as well as, for our purposes, setting the original URL as the canonical one. Because PressForward is such a powerful and diverse tool (as well as a built-in feed reader for your WordPress website), I’ll refer users to their excellent documentation.

Another useful reason I’ll mention for using rel-canonical markup is that I’ve seen cases in which using it will allow other web standards-based tools like Hypothes.is to match pages for highlights and annotations. I suspect that if the Open Patchbook page did have the canonical link specified, any annotations made on it with Hypothes.is would mirror properly on the original as well (and vice versa).

I also suspect that there are some valuable uses of this sort of small metadata-based markup within the Open Educational Resources (OER) space.

In short, when copying and reposting content from an original source online, it’s both courteous and useful to mark the copy as such by adding a rel=“canonical” link pointing to the URL of the original, giving it full credit as the canonical source.

An annotation example for Hypothes.is using <blockquote> markup to maintain annotations on quoted passages

A test of some highlighting functionality with respect to rel-canonical markup. I’m going to blockquote a passage of an original from elsewhere on the web that has a Hypothes.is annotation/highlight on it, to see if the annotation will transclude properly.

I’m using the following general markup to make this happen:

<blockquote><link rel="canonical" href="https://www.example.com/annotated_URL">
Text of the thing which was previously annotated.
</blockquote>

Let’s give it a whirl:

This summer marks the one-year anniversary of acquiring my domain through St. Norbert’s “Domain of One’s Own” program Knight Domains. I have learned a few important lessons over the past year about what having your own domain can mean.

SECURITY

The first issue that I never really thought about was the security and privacy on my domain. A few months after having my domain, I realized that if you searched my name, my domain was one of the first things that popped up. I was excited about this, but I soon realized that this meant everything I blogged about was very much in the open. This meant all of my pictures and also every person I have mentioned. I made the decision to only use first names when talking about others and the things we have done together. This way, I can protect their privacy in such an open space. With social media you have some control over who can see your post based on who “friends” or “follows you”; on a domain, this is not as much of a luxury. Originally, I thought my domain would be something I only shared with close friends and family, like a social media page, but understanding how many people have the opportunity to see it really shocked me and pushed me to think about the bigger picture of security and safety for me and those around me.

—Cassie Nooyens in What Having a Domain for a year has Taught Me

Unfortunately, however, I’m noticing that if I quote multiple sources this way (at least in my Chrome browser), only the last quoted block of text transcludes the Hypothes.is annotations. I’ve noticed this behavior in prior experiments with rel-canonical markup, and I suspect it’s simply because the rel-canonical appears on the page and matches only one original. It would be awesome if a rel-canonical link nested into any number of blockquote tags would cause the annotations from each of the originals to transclude onto their respective quoted passages.

Perhaps Jon Udell and friends could shed some light on this and/or make some tweaks so that blockquoting multiple sources within the same page could also allow the annotations on those quoted passages to be transcluded onto them?

Separately, I’m a tad worried that any annotations now made on my original could also be mistakenly pushed back to the quoted pages because of the matching rel-canonical, without anything taking into account the nested portions of the page or the blockquoted pieces. I’ll make a test annotation on a word or phrase like “security and privacy” to see if this is the case. We’ll all notice that, of course, this test fails: the highlight shows up on Cassie’s original as well. Oh well…

So the question becomes: is there a way within the annotation spec to allow us to write simple HTML documents that blockquote portions of other texts in such a way that we can bring over the annotations of those other texts (or allow annotating them on our original page and have them pushed back to the original) within the blockquoted portions, yet still not interfere with annotating our own original document? Ideally, what other HTML tags could or should this work on? Further, could this be common? Generally useful? Or is it simply a unique edge case born of wishful thinking and this pet example? Perhaps there’s a better way to implement it than just throwing in a random link on a whim? Am I misguidedly attempting to do something that already exists?