Forget about blackout poetry; Google enables highlight poetry in your browser!

Kevin Marks literally and figuratively highlighted a bit of interesting found poetry on Google’s Ten things we know to be true article. (Click the link to see the highlight poetry on Google’s page for yourself.)

A screenshot appears below:

Screenshot of a Google Page with the words "Doing evil is a business. take advantage of all our users" disaggregated, but highlighted so as to reveal a message.
Found poetry:
“Doing evil
is a business
take advantage of
all our users”

Here’s a shortened URL for it that you can share with others: bit.ly/D-ntB-Evil

It’s a creative inverse of blackout poetry: instead of blacking out the extraneous words, one simply highlights the words of the poem. This comes courtesy of some new browser-based functionality relating to search and page snippets that Google announced earlier this week.

You can find some code and descriptions for how to accomplish this in the WICG Scroll To Text Fragment GitHub repository.
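As a rough sketch of the syntax that repository describes (example.com is a placeholder here), a text fragment is appended to a URL after a #:~: delimiter, and multiple highlights can be chained with &text=:

https://example.com/page#:~:text=Doing%20evil
https://example.com/page#:~:text=Doing%20evil&text=is%20a%20business&text=all%20our%20users

A supporting browser scrolls to each matching phrase and highlights it, which is what makes multi-phrase highlight poetry like Kevin’s possible; browsers without support simply ignore the directive.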

What kind of poetry will you find online this week?

Read Google now highlights search results directly on webpages (The Verge)
It doesn’t seem to be available everywhere just yet.

SearchEngineLand notes that this could have an impact on the ad market, since a website’s visitors may be automatically scrolled down past its ads to the relevant content. The publication notes that sites may need to change the location of their ads in light of Google’s latest feature. 

And of course there will be crazy implications for the adtech space.

Annotated on June 04, 2020 at 09:30AM

Clicking the snippet still takes you to the webpage that it pulled the information from, but now the text from the snippet will be highlighted in yellow, and the browser will automatically scroll down to the section in question. 

This sort of feature has been available for a while in the form of fragmentions, though generally via a small JavaScript shim rather than native browser support.

Hypothes.is has supported this sort of functionality for a few years now as well.

I’m curious how these different implementations differ.
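From what I recall of the two URL syntaxes (example.com is a placeholder, and the details here are from memory rather than the specs, so treat this as an approximation):

https://example.com/post##a%20quoted%20phrase
https://example.com/post#:~:text=a%20quoted%20phrase

The first is a fragmention: the fragment is the quoted text itself, and a small JavaScript shim on the page does the finding and highlighting. The second is a WICG text fragment, where the browser itself scrolls to and highlights the match. Hypothes.is, as I understand it, doesn’t encode the target in the URL at all; it stores selectors (the quoted text plus surrounding context) with the annotation and re-anchors them client-side.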

Annotated on June 04, 2020 at 09:36AM

and started testing the functionality on HTML pages last year 

According to Kevin Marks, this is the GitHub repo they’ve been using for this work: https://github.com/WICG/scroll-to-text-fragment#:~:text=the%20worst&text=a%20Google&text=serious%20breakage&text=behavior
Annotated on June 04, 2020 at 12:08PM

THREE

In the intervening years since the blogosphere and the rise of corporate social media, enthusiasts, technologists, and open source advocates have continued iterating on web standards and open protocols. As a result, there are now a handful of web standards that work across a variety of domains, servers, and platforms, allowing educators to use smaller building blocks to enable the functionality we need for building, maintaining, and, most importantly, owning our online courseware.

Liked Exploring Feed Discovery and Markup by David Shanske (david.shanske.com)
The issue of finding feeds to subscribe is a challenge that I have explored in my attempts to implement code in support of the Yarns Microsub Server. I want to publish feeds in a way that others can find them, not just users, but automated systems that present them to users. So, let’s start with t...
Great start on outlining the problem. I’ll need to come back to it again and look at some potential examples to form a better opinion. I’m curious what examples may be unearthed by some of your questions.
Replied to a tweet by Andy Bell (Twitter)
I only used portions of it, but a few weeks back I bookmarked https://github.com/simevidas/web-dev-feeds

It’s got useful sections for specs, browsers, and tools. It also had @rachelandrew, @jensimmons, @adactio, and you, so it can’t be all bad.

Read Protocols, Not Platforms: A Technological Approach to Free Speech by Mike Masnick (knightcolumbia.org)
Altering the internet's economic and digital infrastructure to promote free speech

Meanwhile, politicians from the two major political parties have been hammering these companies, albeit for completely different reasons. Some have been complaining about how these platforms have potentially allowed for foreign interference in our elections.[3] Others have complained about how they’ve been used to spread disinformation and propaganda.[4] Some have charged that the platforms are just too powerful.[5] Others have called attention to inappropriate account and content takedowns,[6] while some have argued that the attempts to moderate discriminate against certain political viewpoints.

3. A Conversation with Mark Warner: Russia, Facebook and the Trump Campaign, Radio IQ|WVTF Music (Apr. 6, 2018), https://www.wvtf.org/post/conversation-mark-warner-russia-facebook-and-trump-campaign#stream/0 (statement of Sen. Mark Warner (D-Va.): “I first called out Facebook and some of the social media platforms in December of 2016. For the first six months, the companies just kind of blew off these allegations, but these proved to be true; that Russia used their social media platforms with fake accounts to spread false information, they paid for political advertising on their platforms. Facebook says those tactics are no longer allowed—that they’ve kicked this firm off their site, but I think they’ve got a lot of explaining to do.”).

4. Nicholas Confessore & Matthew Rosenberg, Facebook Fallout Ruptures Democrats’ Longtime Alliance with Silicon Valley, N.Y. Times (Nov. 17, 2018), https://www.nytimes.com/2018/11/17/technology/facebook-democrats-congress.html (referencing statement by Sen. Jon Tester (D-Mont.): “Mr. Tester, the departing chief of the Senate Democrats’ campaign arm, looked at social media companies like Facebook and saw propaganda platforms that could cost his party the 2018 elections, according to two congressional aides. If Russian agents mounted a disinformation campaign like the one that had just helped elect Mr. Trump, he told Mr. Schumer, ‘we will lose every seat.’”).

5. Julia Carrie Wong, Break Up Big Tech: Elizabeth Warren Says Facebook Just Proved Her Point, The Guardian (Mar. 11, 2019), https://www.theguardian.com/us-news/2019/mar/11/elizabeth-warren-facebook-ads-break-up-big-tech (statement of Sen. Elizabeth Warren (D-Mass.): “Curious why I think FB has too much power? Let’s start with their ability to shut down a debate over whether FB has too much power. Thanks for restoring my posts. But I want a social media marketplace that isn’t dominated by a single censor.”).

6. Jessica Guynn, Ted Cruz Threatens to Regulate Facebook, Google and Twitter Over Charges of Anti-Conservative Bias, USA Today (Apr. 10, 2019), https://www.usatoday.com/story/news/2019/04/10/ted-cruz-threatens-regulate-facebook-twitter-over-alleged-bias/3423095002/ (statement of Sen. Ted Cruz (R-Tex.): “What makes the threat of political censorship so problematic is the lack of transparency, the invisibility, the ability for a handful of giant tech companies to decide if a particular speaker is disfavored.”).

Most of these problems fall under a single subheading: the problems that result when social media platforms algorithmically push or accelerate content. An individual with an extreme view can publish a piece of vile or disruptive content, and because it’s inflammatory, the silos promote it, which provides even more eyeballs, and the acceleration becomes a positive feedback loop. As a result the social silo benefits from the engagement for advertising purposes, but the community and the commons are irreparably harmed.

If this one piece were removed, then the commons would be much healthier, fringe ideas and abuse that are abhorrent to most would be removed, and the broader democratic views of the “masses” (good or bad) would prevail. Without the algorithmic push of fringe ideas, that sort of content would be marginalized in the same way we want our inane content like this morning’s coffee or today’s lunch marginalized.

To analogize it, we’ve provided social media machine guns to the most vile and fringe members of our society and the social platforms are helping them drag the rest of us down.

If all ideas and content were given the same linear, non-promoted presentation, we would all be much better off, and we wouldn’t need as much human curation.

Annotated on December 11, 2019 at 11:13AM

That approach: build protocols, not platforms.

I can now see why @jack made his Twitter announcement this morning. If he opens up and can use that openness to suck up more data, then Twitter’s game could potentially be doing big data and higher-end algorithmic work on even larger sets of data to drive eyeballs.

I’ll have to think on how one would “capture” a market this way, but Twitter could be reasonably poised to pivot in this direction if they’re really game for going all-in on the idea.

It’s reasonably obvious that Twitter has dramatically slowed its growth and isn’t competing with some of its erstwhile peers. Thus they need to figure out how to turn a relatively large ship without losing value.

Annotated on December 11, 2019 at 11:20AM

It would allow end users to determine their own tolerances for different types of speech but make it much easier for most people to avoid the most problematic speech, without silencing anyone entirely or having the platforms themselves make the decisions about who is allowed to speak.

But platforms **are** making **huge** decisions about who is allowed to speak. While they’re generally allowing everyone to have a voice, they’re also very subtly privileging many voices over others. While they’re providing space for even the least among us to have a voice, they’re making far too many of the worst and most powerful among us logarithmically louder.

It’s not broadly obvious, but their algorithms are plainly handing massive megaphones to people who society broadly thinks shouldn’t have a voice at all. These megaphones come in the algorithmic amplification of fringe ideas which accelerate them into the broader public discourse toward the aim of these platforms getting more engagement and therefore more eyeballs for their advertising and surveillance capitalism ends.

The issue we ought to be looking at is the dynamic range between people and the messages they’re able to send through social platforms.

We could also analogize this to the voting situation in the United States. When we disadvantage the poor, disabled, differently abled, or otherwise marginalized in voting while simultaneously giving the uber-rich outsized influence because of what they’re able to buy, we’re imposing the same sorts of problems. Social media is just able to do this at an even larger scale and magnify the effects to make their harms more obvious.

If I follow 5,000 people on social media and one of them is a racist-policy-supporting, white nationalist president, those messages will get drowned out because I can only consume so much content. But when the algorithm consistently pushes that content to the top of my feed and attention, it is only going to accelerate it and create more harm. If I get a linear presentation of the content, then I’d have to actively search that content out for it to cause me that sort of harm.

Annotated on December 11, 2019 at 11:39AM

Moving back to a focus on protocols over platforms can solve many of these problems.

This may also only be the case if large corporations are forced to open up and support those protocols. If my independent website can’t interact freely and openly with something like Twitter on a level playing field, then it really does no good.

Annotated on December 11, 2019 at 11:42AM

And other recent developments suggest that doing so could overcome many of the earlier pitfalls of protocol-based systems, potentially creating the best of all worlds: useful internet services, with competition driving innovation, not controlled solely by giant corporations, but financially sustainable, providing end users with more control over their own data and privacy—and providing mis- and disinformation far fewer opportunities to wreak havoc.

Some of the issue with this then becomes: “Who exactly creates these standards?” We already have issues with mega-corporations like Google wielding outsized influence over the creation of new standards like Schema.org or AMP.

Who is to say they don’t tacitly design their standards to directly (and only) benefit themselves?

Annotated on December 11, 2019 at 11:47AM

Read Twitter's Project Bluesky by Ben Werdmüller (Ben Werdmüller)
This morning, Jack Dorsey announced that Twitter would be funding an independent group that would develop an open standard for decentralized social networking, with the expectation that the company would use it. Twitter is funding a small independent team of up to five open source architects, engine...
An intriguing take on this (bitcoin pieces aside), though I wonder what sorts of larger scale pieces Twitter might really need and how they’d need to structure things to mitigate those issues. In the early days, their open API gave them cover to allow others to build on their platform while they worked at scaling and keeping the servers up. Once those were stable, they pulled the rug out from under everyone. If things open up and the service is more broadly distributed, are those scale pieces still so necessary (except for their scraping purposes)?

Domains, power, the commons, credit, SEO, and some code implications

How to provide better credit on the web using the standard rel=“canonical” by looking at an example from the Open Learner Patchbook

A couple of weeks back, I began following Cassie Nooyen after becoming aware of her at the Domains 2019 conference, which I followed fairly closely online.

She was a presenter and wrote a couple of nice follow-up pieces about her experiences on her website. I bookmarked one of them to read later, and then two days later I came across this tweet by Terry Green, who had also apparently noticed her post:

But I was surprised to see the link in the tweet points to a different post in the Open Learner Patchbook, which is an interesting site in and of itself.

This means that there are now at least two full copies of Cassie’s post online.

While I didn’t see a Creative Commons notice on Cassie’s original, nor any mention of permissions or even a link to the original on the Open Patchbook copy, I don’t doubt that Terry asked Cassie for permission to post a copy of her work on his site. I also suspect that Cassie might not have wanted attention drawn to herself or her post on her own site and may have eschewed a link to it. I will note that the Open Patchbook did have a link to her Twitter presence as a means of credit. (I’ll still maintain that people should prefer links to their own domain over Twitter for credits like these–take back your power!)

Even with these crediting caveats aside, there’s a subtle technical piece hiding here relating to search engines and search engine optimization that many in the Domain of One’s Own space may not realize exists, or, if they do, they may not be sure how to fix it. The subtlety is that search engines attempt to assign proper credit too. As a result there’s a very high chance that the Open Patchbook copy could rank higher in searches for Cassie’s post than her original. As researchers and educators, we’d obviously vastly prefer the original to get the credit. So what’s going on here?

Search engines use a web standard known as rel=“canonical”, a link relation which is most often found in the HTML <head> of a web page. If we view the current source of the copy on the Open Learner Patchbook, we’ll see the following:

<link rel="canonical" href="http://openlearnerpatchbook.org/technology/patch-twenty-five-my-domain-my-place-to-grow/" />

According to the Microformats wiki:

By adding rel=“canonical” to a hyperlink, a page indicates that the destination of that hyperlink should be considered the preferred or definitive version of the current page. This helps search engines avoid duplicate content, and is useful for deciding how to link to a page when citing it.

In the case of Cassie’s post, search engines will treat the two pages as completely separate but will suspect that one is a duplicate of the other. This can have dramatic consequences: search engines will choose one page to prefer over the other, and, in some cases, may penalize a site for having duplicate content without stating that fact in its metadata. Typically this has more drastic and adverse consequences for a personal site like Cassie’s original in comparison with an institutional site.

How do we fix the injustice of this metadata? 

There are a variety of ways, but I’ll focus on several in the WordPress space. 

WordPress core has built-in functionality that sets the permalink for a particular page as its canonical URL; this is why the Open Patchbook page displays the incorrect canonical link. Since most people are likely to already have an SEO-related plugin installed on their site, and almost all of them have this capability, such a plugin is likely the quickest and easiest method for changing canonical links for pages and posts. Two popular choices are Yoast and All in One SEO, which both have simple settings for inputting and saving alternate canonical URLs. Yoast documents the steps pretty well, so I’ll provide an example using All in One SEO:

  • If not done already, click the checkbox for canonical URLs in the plugin’s “General Settings” section, generally found at /wp-admin/admin.php?page=all-in-one-seo-pack%2Faioseop_class.php.
  • For the post (or page) in question, put the full URL of the original post’s location into the All in One SEO metabox in the admin interface (pictured, with the resulting markup sketched below the screenshot).
  • (Re-)publish the post.

Screenshot of the AIOSEO metabox with the field for the Canonical URL outlined in red
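Once the post is republished, the rendered head of the copy should contain a canonical link pointing at the original rather than at itself. With a hypothetical URL standing in for Cassie’s original, the corrected markup would look something like:

<link rel="canonical" href="https://her-domain.example/what-having-a-domain-has-taught-me/" />

Search engines should then consolidate the two copies and assign the credit to the original.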

If you’re using another SEO plugin, it likely handles canonical URLs similarly, so check their documentation.

For aggregation websites, like the Open Learner Patchbook, there’s also another solid option for not only setting the canonical URL, but for more quickly copying the original post as well. In these cases I love PressForward, a WordPress plugin from the Roy Rosenzweig Center for History and New Media which was designed with the education space in mind. The plugin allows one to quickly gather, organize, and republish content from other places on the web. It does so in a smart and ethical way and provides ample opportunity for providing appropriate citations as well as, for our purposes, setting the original URL as the canonical one. Because PressForward is such a powerful and diverse tool (as well as a built-in feed reader for your WordPress website), I’ll refer users to their excellent documentation.

Another useful reason for using rel-canonical markup is that I’ve seen cases in which it allows other web standards-based tools like Hypothes.is to match pages for highlights and annotations. I suspect that if the Open Patchbook page had the proper canonical link specified, any annotations made on it with Hypothes.is would mirror properly on the original as well (and vice versa).

I also suspect that there are some valuable uses of this sort of small metadata-based markup within the Open Educational Resources (OER) space.

In short, when copying and reposting content from an original source online, it’s both courteous and useful to mark the copy as such by adding a rel=“canonical” link pointing to the URL of the original, providing it with full credit as the canonical source.

An annotation example for Hypothes.is using <blockquote> markup to maintain annotations on quoted passages

A test of some highlighting functionality with respect to rel-canonical markup. I’m going to blockquote a passage from an original elsewhere on the web that has a Hypothes.is annotation/highlight on it, to see if the annotation will transclude properly.

I’m using the following general markup to make this happen:

<blockquote><link rel="canonical" href="https://www.example.com/annotated_URL">
Text of the thing which was previously annotated.
</blockquote>

Let’s give it a whirl:

This summer marks the one-year anniversary of acquiring my domain through St. Norbert’s “Domain of One’s Own” program Knight Domains. I have learned a few important lessons over the past year about what having your own domain can mean.

SECURITY

The first issue that I never really thought about was the security and privacy on my domain. A few months after having my domain, I realized that if you searched my name, my domain was one of the first things that popped up. I was excited about this, but I soon realized that this meant everything I blogged about was very much in the open. This meant all of my pictures and also every person I have mentioned. I made the decision to only use first names when talking about others and the things we have done together. This way, I can protect their privacy in such an open space. With social media you have some control over who can see your post based on who “friends” or “follows you”; on a domain, this is not as much of a luxury. Originally, I thought my domain would be something I only shared with close friends and family, like a social media page, but understanding how many people have the opportunity to see it really shocked me and pushed me to think about the bigger picture of security and safety for me and those around me.

—Cassie Nooyen in What Having a Domain for a year has Taught Me

Unfortunately, however, I’m noticing that if I quote multiple sources this way (at least in my Chrome browser), only the last quoted block of text transcludes the Hypothes.is annotations. I’ve noticed this behavior in prior experiments using rel-canonical markup, and I suspect it’s simply that a single rel-canonical on the page is matched to one original. It would be awesome if a rel-canonical link nested into any number of blockquote tags would cause the annotations from each of the originals to transclude onto their respective quotes.
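For reference, the failing pattern looks roughly like this, with each quotation carrying its own rel-canonical link (both URLs are placeholders):

<blockquote><link rel="canonical" href="https://first-source.example/post">
A passage quoted, and previously annotated, on the first source.
</blockquote>

<blockquote><link rel="canonical" href="https://second-source.example/post">
A passage quoted, and previously annotated, on the second source.
</blockquote>

As described above, only the annotations for the last of these quotations end up transcluding.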

Perhaps Jon Udell and friends could shed some light on this and/or make some tweaks so that blockquoting multiple sources within the same page could also allow the annotations on those quoted passages to be transcluded onto them?

Separately, I’m a tad worried that any annotations now made on my original could also be mistakenly pushed back to the quoted pages because of the matching rel-canonical, since nothing takes into account the nested portions of the page or the blockquoted pieces. I’ll run a test on a word or phrase like “security and privacy” to see if this is the case. We’ll all notice, of course, that this test fails when the highlight appears on Cassie’s original. Oh well…

So the question becomes: is there a way within the annotation spec to allow us to write simple HTML documents that blockquote portions of other texts in such a way that we can bring over the annotations of those other texts (or allow annotating them on our original page and have them pushed back to the originals) within the blockquoted portions, yet still not interfere with annotating our own original document? Ideally, what other HTML tags could/should this work on? Further, could this be common? Generally useful? Or simply a unique edge case with wishful thinking made from this pet example? Perhaps there’s a better way to implement it than my just having thrown in the random link on a whim? Am I misguidedly attempting to do something that already exists?

👓 Toast | Adactio: Journal

Read a post by Jeremy Keith (adactio.com)
Shockwaves rippled across the web standards community recently when it appeared that Google Chrome was unilaterally implementing a new element called toast. It turns out that’s not the case, but the confusion is understandable. First off, this all kicked off with the announcement of “intent to i...

<details> tags, Fragment URLs, and the HTML spec

A few weeks ago I read a post by Jamie Tanna on Using <details> tags for HTML-only UI toggles.

I thought it was a pretty slick use of HTML to create some really simple and broadly useful UI.
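A minimal sketch of the pattern (the summary text and list items are placeholders):

<details>
  <summary>3 likes</summary>
  <ul>
    <li>Like from Alice</li>
    <li>Like from Bob</li>
    <li>Like from Carol</li>
  </ul>
</details>

The browser renders the summary as a native toggle that shows or hides everything else inside the element, with no JavaScript or CSS required.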

Then earlier today I noticed that Jeremy Keith has recently switched to using this on his personal site in the comments section, providing toggles for his various webmention types including shares, likes, bookmarks, etc. But this is where I’m noticing a quirky UI issue that isn’t as web friendly as it could be. Jeremy and others (including myself on my own site) will often provide ID tags so that one can give permalinks to individual comments using fragments of the form:

https://adactio.com/journal/15050#comment70567 or
https://adactio.com/journal/15050#comment71896

But here’s where the UI problem lies. The first fragment URL only resolves to the page instead of to the specific bookmark hiding behind a <details> tag, whereas the second fragment URL resolves to the page and automatically scrolls down to a comment by DominoPivot. It does this in both Chrome and Firefox, and I would presume it operates similarly in other browsers.

I suspect that most users would expect/prefer that the fragment URL automatically expand the <details> tag and scroll down the page to that ID or fragment as well.

Perhaps Jamie, Jeremy, Tantek, Kevin, or others may have some trickery to make this happen? Otherwise, do we need to start digging into specs and browsers to get them to better support this sort of fragment-related functionality? Perhaps it’s this section of the HTML spec, the URL of which has such a fragment and therefore scrolls down properly if you click on it? (Meta pun intended.)
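In the meantime, here’s a rough sketch of the sort of trickery that might paper over the issue; this is my own guess at a workaround using standard DOM APIs, not anything blessed by the spec:

<script>
// On page load or hash change, find the element the fragment points to
// and open any <details> ancestors so the browser can scroll to it.
function revealFragmentTarget() {
  var id = decodeURIComponent(location.hash.slice(1));
  var target = id && document.getElementById(id);
  if (!target) return;
  // Walk up the tree, opening each <details> ancestor we pass.
  for (var el = target.parentElement; el; el = el.parentElement) {
    if (el.tagName === 'DETAILS') el.open = true;
  }
  target.scrollIntoView();
}
window.addEventListener('DOMContentLoaded', revealFragmentTarget);
window.addEventListener('hashchange', revealFragmentTarget);
</script>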

Watched For Patients, by Patients: Pioneering a New Approach in Med-Tech Design by Innovate Pasadena: Friday Coffee Meetup from YouTube

I was ten years into a career as a user experience designer making new digital products when diabetes blew my family's life apart. The complexity and relentlessness of the burden of care that came with my youngest daughter's diagnosis at 1.5 years old, were overwhelming. I learned that people with diabetes are always 10 minutes of inattention away from a coma. Run your blood sugar too low and risk brain injury or death. Run too high and you do cumulative damage to your organs, nerves and eyes. And as a designer and hardware hacker I couldn't accept the limitations and poor User Experience I was seeing in all the tools we were given to deal with it.

Then I discovered Nightscout (a way to monitor my daughter's blood sugar in real time from anywhere in the world) and Loop (a DIY open sourced, artificial pancreas system that checks blood sugar and adjusts insulin dosing every five minutes 24/7) and the #WeAreNotWaiting community that produced them. For the first time I saw the kinds of tools I needed and true power of solutions that come from the people living with the problem. When I learned about the Tidepool's project to take Loop through FDA approval and bring it to anyone who wants to use it to give the same freedom and relief that we've experienced from it, I had to get involved. Now we are taking an open source software through regulatory approval and using real-life user data from the DIY community for our clinical trial in a process that is turning heads in the industry. We'll get into the many ways this story demonstrates ways that user driven design, open source models and a counterculturally collaborative approach with regulators are shifting the incentives and changing the landscape toward one more favorable to innovation.

Here’s the video I mentioned yesterday. Those deeply enmeshed in the IndieWeb movement and many of its subtleties will get a ringing sense of déjà vu as they watch it and realize there’s a lot of overlap between the IndieWeb and how (and why) Matt Lumpkin is working to help those with type 1 diabetes. Perhaps there are some lessons to be learned here?