🎧 Triangulation 413 David Weinberger: Everyday Chaos | TWiT.TV

Listened to Triangulation 413 David Weinberger: Everyday Chaos from TWiT.tv

Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility, about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we’ve allowed ourselves to see, and how we’re becoming acculturated to these machines based on chaos.

Interesting discussion of systems with built-in openness or flexibility as a feature. They highlight Slack, which has a core product but allows individual users and companies to add custom pieces to it to use in the way they want. This provides a tremendous amount of additional value that Slack would never have known about or been able to build otherwise. These sorts of products or platforms can not only create their inherent links, but also add value by flexibly creating additional links outside of themselves or by letting external pieces create links to them.

Twitter started out like this in some sense, but ultimately closed itself off, likely to its own detriment.

Social Reading User Interface for Discovery

I read quite a bit of material online. I save “bookmarks” of all of it on my personal website, sometimes with some additional notes and sometimes even with more explicit annotations. One of the things I feel like I’m missing from my browser, browser extensions, and/or social feed reader is a social layer overlay that could indicate that people in my social network(s) have read or interacted directly with that page (presuming they make that data openly available.)

One of the things I’d love to see pop up out of the discovery explorations of the IndieWeb or some of the social readers in the space is the ability to uncover some of this social reading information. Toward this end I thought I’d collect some user interface examples of things that border on this sort of data to make the brainstorming and building of such functionality easier in the near future.

If I’m missing useful examples or you’d like to add additional thoughts, please feel free to comment below.

Examples of social reading user interface for discovery

Google

I don’t often search for reading material directly, but Google has a related bit of UI indicating that I’ve visited a website before. I sort of wish it had the ability to surface the fact that I’ve previously read or bookmarked an article or provided data about people in my social network who’ve done similarly within the browser interface for a particular article (without the search.) If a browser could use data from my personal website in the background to indicate that I’ve interacted with it before (and provide those links, notes, etc.), that would be awesome!

Screen capture for Google search of Kevin Marks with a highlight indicating that I've visited this page in the recent past
Screen capture for Google search of Kevin Marks with a highlight indicating that I’ve visited his page several times in the past. Given the March 2017 date, it’s obvious that the screen shot is from a browser and account I don’t use often.

I’ll note here that because of the way I bookmark or post reads on my own website, my site often ranks reasonably well for those things.

On a search for an article by Aaron Parecki, my own post indicating that I’ve read it in the past ranks second right under the original.

In some cases, others who are posting about those things (reading, commenting, bookmarking, liking, etc.) in my social network also show up in these sorts of searches. How cool would it be to have a social reader that could display this sort of social data based on the people it knows I’m following?

A search for a great article by Matthias Ott shows that both I and several of my friends (indicated by red arrows superimposed on the search query) have read, bookmarked, or commented on it too.

Hypothes.is

Hypothes.is is a great open source highlighting, annotation, and bookmarking tool with a browser extension that shows an indicator of how many annotations appear on the page. In my experience, higher numbers often indicate interesting and engaging material. I do wish that it had a follower/following model that could indicate when people in my social sphere have annotated a page. I also wouldn’t mind if their extension “bug” in the browser bar had another indicator in the other corner to show that I had previously annotated a page!

Screen capture of Vannevar Bush’s article As We May Think in The Atlantic with a Hypothes.is browser extension bug indicating that there are 329 annotations on the page.
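Under the hood, a count like the one in the extension’s bug is available from Hypothes.is’s public search API. Here’s a minimal sketch in Python (standard library only) of how one might build such a query; the endpoint and parameter names reflect my understanding of the API, so treat them as assumptions to verify against the official documentation.

```python
from urllib.parse import urlencode

def annotation_search_url(page_url: str) -> str:
    """Build a Hypothes.is API search query for annotations on a page.

    The API's JSON response carries a "total" field, which is the
    count a badge-style indicator could display. (Endpoint and
    parameter names are assumptions based on my reading of the API.)
    """
    query = urlencode({"uri": page_url, "limit": 0})
    return f"https://api.hypothes.is/api/search?{query}"

# Example: the Vannevar Bush article mentioned above
print(annotation_search_url(
    "https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/"
))
```

Fetching that URL (e.g. with urllib.request) and reading the total field of the JSON response should yield the same number the extension displays.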

Reading.am

Though it only does so after the fact, Reading.am has a pop-up overlay via its browser extension. It adds me to the list of people who’ve read an article, and it also shows others in the network, including those I’m following, who have read it (sometimes along with annotations about their thoughts).

What I wouldn’t give to see that pop up in the corner before I’ve read it!

Reading.am’s social layer creates a yellow colored pop up list in the upper right of the browser indicating who else has read the article as well as showing some of their notes on it. Unfortunately it doesn’t pop up until after you’ve marked the item as read.

Nuzzel

Nuzzel is one of my favorite tools. I input my Twitter account as well as some custom lists and it surfaces articles that people in my Twitter network have been tweeting about. As a result, it’s one of the best discovery tools out there for solid longer form content. Rarely do I read content coming out of Nuzzel and feel robbed. Because of how it works, it’s automatically showing those people in my network and some of what they’ve thought about it. I love this contextualization.

Nuzzel’s interface shows the title and an excerpt of an article and also includes the avatars, names, network, and commentary of one’s friends that interacted with the piece. In this example it’s relatively obvious that one reader influenced several others who retweeted it because of her.

Goodreads

Naturally, sites for much longer form content will use social network data about interest, reviews, and interaction to a much greater extent, since there is a larger investment of time involved; thus social signaling can be more valuable in this context. A great example here is Goodreads, which shows me those in my network who are interested in reading a particular book or who have written reviews or given ratings.

A slightly excerpted/modified screen capture of the Goodreads page for Melanie Mitchell’s book Complexity that indicates several in my social network are also interested in reading it.

Are there other examples I’m missing? Are you aware of similar discovery related tools for reading that leverage social network data?

Watched How the medium shapes the message by Cesar Hidalgo from TEDxYouth@BeaconStreet | YouTube

How communication technologies shape our collective memory.

César A. Hidalgo is an assistant professor at the MIT Media Lab. Hidalgo’s work focuses on improving the understanding of systems by using and developing concepts of complexity, evolution, and network science; his goal is to help improve understanding of the evolution of prosperity in order to help develop industrial policies that can help countries raise the living standards of their citizens. His areas of application include economic development, systems biology, and social systems. Hidalgo is also a graphic-art enthusiast and has published and exhibited artwork that uses data collected originally for scientific purposes.

This talk was given at a TEDx event using the TED conference format but independently organized by a local community. Learn more at http://ted.com/tedx

Another Hypothes.is test. This time let’s throw a via.hypothes.is-based link (which seems to be the only way to do this) into an iframe and shove it all in! What will be orphaned? What will be native? Will annotating the iframed version push the annotations back to the original, will they show up as orphaned, or will they show up on the parent page of the iframe, or all of the above?

I also wonder if we could use fragments to target specific portions of pages like this for blockquoting/highlighting and still manage to get the full frame and Hypothes.is interface? Let’s give that a go too shall we? Would it be apropos to do a fragment quote from Fragmentions for Better Highlighting and Direct References on the Web?

Shazam!! That worked rather well didn’t it? And we can customize the size of the iframe container to catch all of the quote rather well on desktop at least. Sadly, most people’s sites don’t support fragmentions or have fragmentioner code running. It might also look like our fragment is causing my main page to scroll down to the portion of the highlighted text in the iframe. Wonder how to get around that bit of silliness?

And now our test is done.

Domains, power, the commons, credit, SEO, and some code implications

How to provide better credit on the web using the standard rel="canonical" by looking at an example from the Open Learner Patchbook

A couple of weeks back, I began following Cassie Nooyen after becoming aware of her at the Domains 2019 conference, which I followed fairly closely online.

She was a presenter and wrote a couple of nice follow-up pieces about her experiences on her website. I bookmarked one of them to read later, and then two days later I came across this tweet by Terry Green, who had also apparently noticed her post:

But I was surprised to see the link in the tweet points to a different post in the Open Learner Patchbook, which is an interesting site in and of itself.

This means that there are now at least two full copies of Cassie’s post online:

While I didn’t see a Creative Commons notice on Cassie’s original, nor any mention of permissions or even a link to the source of the original on the copy on the Open Patchbook, I don’t doubt that Terry asked Cassie for permission to post a copy of her work on his site. I also suspect that Cassie might not have wanted any attention drawn to herself or her post on her site and may have eschewed a link to it. I will note that the Open Patchbook did have a link to her Twitter presence as a means of credit. (I’ll still maintain that people should prefer links to their own domain over Twitter for credits like these–take back your power!)

Even setting these crediting caveats aside, there’s a subtle technical piece hiding here, relating to search engines and search engine optimization, that many in the Domain of One’s Own space may not realize exists or, if they do, may not be sure how to fix. This technical subtlety is that search engines attempt to assign proper credit too. As a result, there’s a very high chance that the Open Patchbook could rank higher in search for Cassie’s post than Cassie’s original. As researchers and educators we’d obviously vastly prefer the original to get the credit. So what’s going on here?

Search engines use a web standard known as rel="canonical", a link relation most often found in the HTML <head> of a web page. If we view the current source of the copy on the Open Learner Patchbook, we’ll see the following:

<link rel="canonical" href="http://openlearnerpatchbook.org/technology/patch-twenty-five-my-domain-my-place-to-grow/" />

According to the Microformats wiki:

By adding rel="canonical" to a hyperlink, a page indicates that the destination of that hyperlink should be considered the preferred or definitive version of the current page. This helps search engines avoid duplicate content, and is useful for deciding how to link to a page when citing it.

In the case of our example of Cassie’s post, search engines will treat the two pages as completely separate, but will suspect that one is a duplicate of the other. This could have dramatic consequences for one site or the other: search engines will choose one to prefer, and, in some cases, may penalize a site for having duplicate content and not stating that fact in its metadata. Typically this would have more drastic and adverse consequences for Cassie’s original in comparison with an institutional site.
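To make the mechanics concrete, here’s a minimal Python sketch (standard library only; the sample HTML and URLs are purely hypothetical) of how a crawler or other tool might discover the canonical URL a page declares:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link" and self.canonical is None:
            attr = dict(attrs)
            if attr.get("rel") == "canonical":
                self.canonical = attr.get("href")

# A hypothetical page that republishes someone else's post
sample = """
<html><head>
<link rel="canonical" href="https://example.com/original-post/" />
<title>A republished post</title>
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(sample)
print(finder.canonical)  # the URL a search engine would treat as definitive
```

A search engine doing this on both copies of a post would see the same canonical URL and credit the original rather than the republication.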

How do we fix the injustice of this metadata? 

There are a variety of ways, but I’ll focus on several in the WordPress space. 

WordPress core has built-in functionality that sets the permalink for a particular page as the canonical one, which is why the Open Patchbook page displays the incorrect canonical link. Since most people are likely to already have an SEO-related plugin installed on their site, and almost all of these plugins have this capability, that’s probably the quickest and easiest method for changing canonical links for pages and posts. Two popular choices are Yoast and All in One SEO, which have simple settings for inputting and saving alternate canonical URLs. Yoast documents the steps pretty well, so I’ll provide an example using All in One SEO:

  • If not done already, click the checkbox for canonical URLs in the “General Settings” section for the plugin generally found at /wp-admin/admin.php?page=all-in-one-seo-pack%2Faioseop_class.php.
  • For the post (or page) in question, within the All in One SEO metabox in the admin interface (pictured), put the full URL of the original post’s location.
  • (Re-)publish the post.

Screenshot of the AIOSEO metabox with the field for the Canonical URL outlined in red

If you’re using another SEO plugin, it likely handles canonical URLs similarly, so check their documentation.

For aggregation websites, like the Open Learner Patchbook, there’s also another solid option for not only setting the canonical URL, but for more quickly copying the original post as well. In these cases I love PressForward, a WordPress plugin from the Roy Rosenzweig Center for History and New Media which was designed with the education space in mind. The plugin allows one to quickly gather, organize, and republish content from other places on the web. It does so in a smart and ethical way and provides ample opportunity for providing appropriate citations as well as, for our purposes, setting the original URL as the canonical one. Because PressForward is such a powerful and diverse tool (as well as a built-in feed reader for your WordPress website), I’ll refer users to their excellent documentation.

Another useful reason for using rel-canonical markup is that I’ve seen cases in which it allows other web standards-based tools like Hypothes.is to match pages for highlights and annotations. I suspect that if the Open Patchbook page did have the canonical link specified, any annotations made on it with Hypothes.is would mirror properly on the original as well (and vice versa).

I also suspect that there are some valuable uses of this sort of small metadata-based markup within the Open Educational Resources (OER) space.

In short, when copying and reposting content from an original source online, it’s both courteous and useful to mark the copy as such by adding a rel="canonical" tag pointing at the URL of the original, giving it full credit as the canonical source.

An annotation example for Hypothes.is using <blockquote> markup to maintain annotations on quoted passages

A test of some highlighting functionality with respect to rel-canonical markup. I’m going to blockquote a passage of an original from elsewhere on the web with a Hypothes.is annotation/highlight on it to see if the annotation will properly transclude.

I’m using the following general markup to make this happen:

<blockquote><link rel="canonical" href="https://www.example.com/annotated_URL">
Text of the thing which was previously annotated.
</blockquote>

Let’s give it a whirl:

This summer marks the one-year anniversary of acquiring my domain through St. Norbert’s “Domain of One’s Own” program Knight Domains. I have learned a few important lessons over the past year about what having your own domain can mean.

SECURITY

The first issue that I never really thought about was the security and privacy on my domain. A few months after having my domain, I realized that if you searched my name, my domain was one of the first things that popped up. I was excited about this, but I soon realized that this meant everything I blogged about was very much in the open. This meant all of my pictures and also every person I have mentioned. I made the decision to only use first names when talking about others and the things we have done together. This way, I can protect their privacy in such an open space. With social media you have some control over who can see your post based on who “friends” or “follows you”; on a domain, this is not as much of a luxury. Originally, I thought my domain would be something I only shared with close friends and family, like a social media page, but understanding how many people have the opportunity to see it really shocked me and pushed me to think about the bigger picture of security and safety for me and those around me.

—Cassie Nooyen in What Having a Domain for a year has Taught Me

Unfortunately, however, I’m noticing that if I quote multiple sources this way (at least in my Chrome browser), only the last quoted block of text transcludes the Hypothes.is annotations. I’ve noticed this behavior in prior experiments using rel-canonical markup, but I suspect it’s simply that the rel-canonical appears on the page and matches only one original. It would be awesome if a rel-canonical link nested in any number of blockquote tags would cause the annotations from each of the originals to transclude properly.

Perhaps Jon Udell and friends could shed some light on this and/or make some tweaks so that blockquoting multiple sources within the same page could also allow the annotations on those quoted passages to be transcluded onto them?

Separately, I’m a tad worried that any annotations now made on my original could also be mistakenly pushed back to the quoted pages because of the matching rel-canonical without anything taking into account the nested portions of the page or the blockquoted pieces. I’ll make a test on a word or phrase like “security and privacy” to see if this is the case. We’ll all notice that of course this test fails by seeing the highlight on Cassie’s original. Oh well…

So the question becomes: is there a way within the annotation spec to allow us to write simple HTML documents that blockquote portions of other texts in such a way that we can bring over the annotations of those other texts (or allow annotating them on our original page and have them pushed back to the originals) within the blockquoted portions, yet still not interfere with annotating our own original document? Ideally, what other HTML tags could or should this work on? Further, could this be common? Generally useful? Or simply a unique edge case with wishful thinking made from this pet example? Perhaps there’s a better way to implement it than my just having thrown in a random link on a whim? Am I misguidedly attempting to do something that already exists?

Domains 2019 Reflections from Afar

My OPML Domains Project

Not being able to attend Domains 2019 in person, I was bound and determined to attend as much of it as I could manage remotely. A lot of this revolved around following the hashtag for the conference, watching the Virtually Connecting sessions, interacting online, and starting to watch the archived videos after-the-fact. Even with all of this, for a while I had been meaning to flesh out my ability to follow the domains (aka websites) of other attendees and people in the space. Currently the easiest way (for me) to do this is via RSS with a feed reader, so I began collecting feeds of those from the Twitter list of Domains ’17 and Domains ’19 attendees as well as others in the education-related space who tweet about A Domain of One’s Own or IndieWeb. In some sense, I would be doing some additional aggregation work on expanding my blogroll, or, as I call it now, my following page since it’s much too large and diverse to fit into a sidebar on my website.

For some brief background, my following page is built on some old functionality in WordPress core that has since been hidden. I’m using the old Links Manager for collecting links and feeds of people, projects, groups, and institutions. This link manager creates standard OPML files, which WordPress can break up by categories, that can easily be imported/exported into most standard feed readers. Even better, some feed readers like Inoreader, support OPML subscriptions, so one could subscribe to my OPML file, and any time I update it in the future with new subscriptions, your feed reader would automatically update to follow those as well. I use this functionality in my own Inoreader account, so that any new subscriptions I add to my own site are simply synced to my feed reader without needing to be separately added or updated.
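For the curious, pulling the feed URLs back out of such an OPML file takes only a few lines; here’s a rough Python sketch using the standard library (the OPML snippet and feed URLs are hypothetical stand-ins):

```python
import xml.etree.ElementTree as ET

# A tiny hypothetical OPML file of the kind WordPress's Links Manager exports
opml = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <body>
    <outline text="Domains">
      <outline text="Example Blog" type="rss"
               xmlUrl="https://example.com/feed/"
               htmlUrl="https://example.com/" />
      <outline text="Another Blog" type="rss"
               xmlUrl="https://blog.example.org/feed/"
               htmlUrl="https://blog.example.org/" />
    </outline>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Every <outline> with an xmlUrl attribute is a subscribable feed
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
print(feeds)
```

This is essentially what a feed reader does on import: walk the nested outline elements and subscribe to each xmlUrl it finds.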

The best part of creating such a list and publishing it in a standard format is that you, dear reader, don’t need to spend the several hours I did to find, curate, and compile the list to recreate it for yourself, but you can now download it, modify it if necessary, and have a copy for yourself in just a few minutes. (Toward that end, I’m also happy to update it or make additions if others think it’s missing anyone interesting in the space–feedback, questions, and comments are heartily encouraged.) You can see a human-readable version of the list at this link, or find the computer parse-able/feed reader subscribe-able link here.

To make it explicit, I’ll also note that these lists also help me to keep up with people and changes in the timeframe between conferences.

Anecdotal Domains observations

In executing this OPML project I noticed some interesting things about the Domains community at large (or at least those who are avid enough to travel and attend in person or actively engage online). I’ll lay these out below. Perhaps at a future date, I’ll do a more explicit capture of the data with some analysis.

The vast majority of sites I came across were, unsurprisingly, WordPress-based, which made it much easier to find RSS feeds to read/consume material. I could simply take a domain name and add /feed/ to the end of the URL, and voilà, a relatively quick follow!

There are a lot of people whose sites didn’t have obvious links to their feeds. To me this is a desperate tragedy for the open web. We’re already behind the eight ball compared to social media and corporate-controlled sites; why make it harder for people to read/consume our content from our own domains? And as if to add insult to injury, the places on one’s website where an RSS feed link/icon would typically live were instead populated by links to corporate social media like Facebook, Twitter, and Instagram. In a few cases I also saw legacy links to Google+, which ended service on April 2, 2019 and disappeared from the web along with a tremendous number of online identities and personal data. (Here’s a reminder to remove those if you’ve forgotten.) For those facing this problem, there’s a fantastic service called SubToMe that has a universal follow button which can be installed or which works well with a browser bookmarklet and a wide variety of feed readers.

I was thrilled to see a few people using interesting alternate content management systems/site generators like WithKnown and Grav. There were also several people who had branched out to static site generators (sites without a database). This sort of plurality is a great thing for the community, and competition in the space for sites, design, user experience, etc. is awesome. It’s encouraging to see people in the Domains space taking advantage of alternate options, experimenting with them, and using them in the wild.

I’ll note that I did see a few poor souls who were using Wix. I know there was at least one warning about Wix at the conference, but in case it wasn’t stated explicitly, Wix does not support exporting data, which makes any potential future migration of sites difficult. Definitely don’t use it for any extended writing, as cutting and pasting more than a few simple static pages becomes onerous. To make matters worse, Wix doesn’t offer any sort of backup service, so if they choose to shut your site off for any reason, you’d be completely out of luck. No backup + no export = I can’t recommend it.

If your account or any of your services are cancelled, it may result in loss of content and data. You are responsible to back up your data and materials. —Wix Terms of Use

I also noticed a few people had generic domain names that they didn’t really own (and not even in the sense of rental ownership). Here I’m talking about domain names of the form username.domainsproject.com. While I’m glad that they have a domain that they can use and generally control, it’s not one that they can truly exert full ownership over. (They just can’t pick it up and take it with them.) Even if they could export/import their data to another service or even a different content management system, all their old links would immediately disappear from the web. In the case of students, while it’s nice that their school may provide this space, it is more problematic for data portability and longevity on the web that they’ll eventually lose that institutional domain name when they graduate. On the other hand, if you have something like yourname.com as your digital home, you can export/import, change content management services, hosting companies, etc., and all your content will still resolve, and you’ll be eminently more findable by your friends and colleagues. This choice is essentially the internet equivalent of changing cellular providers from Sprint to AT&T while taking your phone number with you: you may change providers, but people will still know where to find you without being any the wiser about your service provider changes. For allowing students and faculty the ability to more easily move their content and their sites, I think Domains projects should require individual custom domains.

If you don’t own/control your domain name, you’re prone to lose a lot of value built up in your permalinks. I’m also reminded here of the situation encountered by faculty who move from one university to another. (Congratulations, by the way, to Martha Burtis on the pending move to Plymouth State. You’ll notice she won’t face this problem.) There’s also the situation of Matthew Green, a security researcher at Johns Hopkins whose institutional website was taken down by his university when the National Security Agency flagged an apparent issue. Fortunately in his case, he had his own separate domain name and content on an external server, and his institutional account was just a mirrored copy of his own domain.

If you’ve got it, flaunt it.
—Mel Brooks in The Producers (1968), with the “it” obviously referring to A Domain of One’s Own.

Also during my project, I noted that quite a lot of people don’t list their own personal/professional domains within their Twitter or other social media profiles. This seems a glaring omission particularly for at least one whose Twitter bio creatively and proactively claims that they’re an avid proponent of A Domain of One’s Own.

And finally there were a small–but still reasonable–number of people within the community for whom I couldn’t find a domain at all! A few assuredly are new to the space or exploring it, and so I’d give them a pass, but I was honestly shocked that some simply didn’t have one.

(Caveat: I’ll freely admit that the value of Domains is that one has ultimate control, including the right not to have or use one, or even to have a private, hidden, and completely locked down one, just the way that Dalton chose not to walk in the conformity scene in Dead Poets Society. But even with this in mind, how can we ethically recommend this pathway to students, friends, and colleagues if we’re not willing to participate ourselves?)

Too much Twitter & a challenge for the next Domains Conference

One of the things that shocked me most at a working conference about the idea of A Domain of One’s Own within education, where more than significant time was given to the ideas of privacy, tracking, and surveillance, was the extent to which nearly everyone present gave up their identity, authority, and digital autonomy to Twitter, a company which actively represents almost every version of the poor ethics, surveillance, tracking, and design choices we all abhor within the edtech space.

Why weren’t people proactively using their own domains to communicate instead? Why weren’t their notes, observations, highlights, bookmarks, likes, reposts, etc. posted to their own websites? Isn’t that part of what we’re in all this for?!

One of the shining examples from Domains 2019 that I caught as it was occurring was John Stewart’s site, where he was aggregating talk titles, abstracts, notes, and other details relevant to himself and his practice. He then published them in the open and syndicated copies to Twitter, where the rest of the conversation seemed to be happening. His living notebook, or digital commonplace book if you will, is of immense value not only to him, but to all who are able to access it. But you may ask, “Chris, didn’t you notice them on Twitter first?” In fact, I did not! I caught them because I was following the live feeds of some of the researchers, educators, and technologists I follow in my feed reader using the OPML files mentioned above. I would submit, especially as a remote participant/follower of the conversation, that his individual posts were worth 50 or more individual tweets. Just the additional context they contained made them proverbially worth their weight in gold.

Perhaps for the next conference, we might build a planet or site that could aggregate all the feeds of people’s domains using their categories/tags or other means to create our own version of the Twitter stream? Alternately, by that time, I suspect that work on some of the new IndieWeb readers will have solidified to allow people to read feeds and interact with that content directly and immediately in much the way Twitter works now except that all the interaction will occur on our own domains.

As educators, one of the most valuable things we can and should do is model appropriate behavior for students. I think it’s high time that when attending a professional conference about A Domain of One’s Own that we all ought to be actively doing it using our own domains. Maybe we could even quit putting our Twitter handles on our slides, and just put our domain names on them instead?

Of course, I wouldn’t and couldn’t suggest or even ask others to do this if I weren’t willing and able to do it myself.  So as a trial and proof of concept, I’ve actively posted all my interactions related to Domains 2019 that I was interested in to my own website using the tag Domains 2019.  At that URL, you’ll find all the things I liked and bookmarked, as well as the bits of conversation on Twitter and others’ sites that I’ve commented on or replied to. All of it originated on my own domain, and, when it appeared on Twitter, it was syndicated only secondarily so that others would see it since that was where the conversation was generally being aggregated. You can almost go back and recreate my entire Domains 2019 experience in real time by following my posts, notes, and details on my personal website.

So, next time around can we make an attempt to dump Twitter!? The technology for pulling it off certainly already exists, and is reasonably well-supported by WordPress, WithKnown, Grav, and even some of the static site generators I noticed in my brief survey above. (Wix obviously doesn’t even come close…)

I’m more than happy to help people build and flesh out the infrastructure necessary to try to make the jump. Even if just a few of us began doing it, we could serve as that all-important model for others as well as for our students and other constituencies. With a bit of help and effort before the next Domains Conference, I’ll bet we could collectively pull it off. I think many of us are either well- or even over-versed in the toxicities and surveillance underpinnings of social media, learning management systems, and other digital products in the edtech space, but now we ought to attempt a move away from it with an infrastructure that is our own–our Domains.

Replied to a tweet by femedtech (Twitter)
Five is far from enough. Here’s just a few (in no particular order):

Kathleen Fitzpatrick, Cathie LeBlanc, Robin DeRosa, Amy Collier, Audrey Watters, Amy Guy, Kimberly Hirsh, Catherine Cronin, Martha Burtis, Autumn Caines, Christina Hendricks, Maha Bali, Lee Skallerup Bessette, Meredith Broussard, Helen DeWaard, Devon Zuegel, Kate Bowles, Irene Stewart, Rachel Cherry, Jess Reingold, Laura Pasquini, Laura Gibbs, Lora Taub-Pervizpour, Hilary Mason, Miriam Posner, Kay Oddone, Rayna Harris, Amber Case, Teodora Petkova, Anelise H. Shrout, Jean MacDonald, Natalie Lafferty, Lauren Brumfield, Meredith Fierro

And don’t just follow them on Twitter, fill your brain up by following their longer thoughts in the feeds from their own domains, which I’ve linked. This way you won’t miss anything truly important in the overwhelming flow of Twitter and other social media.

Hypothes.is doesn’t have a social media-like follow functionality baked into the system, but there are a few methods to follow interesting people. My favorite, and possibly the simplest, is to add https://hypothes.is/stream.atom?user=abcxyz as a feed into my feed reader where abcxyz is the username of the person I’d like to follow.

So to subscribe to my Hypothes.is feed you’d add https://hypothes.is/stream.atom?user=chrisaldrich to your reader.
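If you're curious how you might work with such a feed programmatically, here's a minimal Python sketch using only the standard library. The feed URL pattern comes straight from the description above; the sample Atom snippet is a made-up stand-in for what the real stream would return:

```python
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def hypothesis_feed_url(username):
    """Build the Atom feed URL for a Hypothes.is user's public annotation stream."""
    return f"https://hypothes.is/stream.atom?user={username}"

def entry_titles(atom_xml):
    """Extract the title of each <entry> in an Atom feed document."""
    root = ET.fromstring(atom_xml)
    return [e.findtext(f"{ATOM_NS}title") for e in root.iter(f"{ATOM_NS}entry")]

# A made-up Atom snippet standing in for a real Hypothes.is stream response.
sample = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Hypothes.is stream</title>
  <entry><title>An annotation on some article</title></entry>
</feed>"""

print(hypothesis_feed_url("chrisaldrich"))
print(entry_titles(sample))
```

In practice you'd fetch the URL with your feed reader (or any HTTP client) and let it handle the parsing; the point is just that the stream is a plain Atom feed keyed on the username.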

Of course, the catch then is to find/discover interesting people to follow this way. Besides some of the usual interesting subjects like Jon Udell, Jeremy Dean, and Remi Kalir, who else should I be following?

Ideally by following interesting readers, you’ll find not only good things to read for yourself, but you’ll also have a good idea which are the best parts as well as what your friends think of those parts. The fact that someone is bothering to highlight or annotate something is a very strong indicator that they’ve got some skin in the game and the article is likely worth reading.

🔖 “The New Old Web: Preserving the Web for the Future With Containers” | Ilya Kreymer

Bookmarked "The New Old Web: Preserving the Web for the Future With Containers" by Ilya Kreymer (docs.google.com)
This talk will present innovative uses of Docker containers, emulators and web archives to allow anyone to experience old web sites using old web browsers, as demonstrated by the Webrecorder and oldweb.today projects. Combining containerization with emulation can provide new techniques in preserving both scholarly and artistic interactive works, and enable obsolete technologies like Flash and Java applets to be accessible today and in the future. The talk will briefly cover the technology and how it can be deployed both locally and in the cloud. Latest research in this area, such as automated preservation of education publishing platforms like Scalar will also be presented. The presentation will include live demos and users will also be invited to try the latest version of oldweb.today and interact with old browsers directly in their browser. The Q&A will help serve to foster a discussion on the potential opportunities and challenges of containerization technology in ‘future-proofing’ interactive web content and software.
hat tip:

An Invitation to IndieWeb Summit 2019

Fellow educators, teachers, specialists, instructional designers, web designers, Domains proponents, programmers, developers, students, web tinkerers, etc.,

  • Want to expand the capabilities of what your own domain is capable of?
  • Interested in improving the tools available on the open web?
  • Want to help make simpler, ethical digital pedagogy a reality in a way that students and teachers can implement themselves without relying on predatory third-party platforms?
  • Are you looking to use your online commonplace book as an active hub for your research, writing, and scholarship?

Bring your ideas and passions to help us all brainstorm, ruminate, and then with help actually design and build the version of the web we all want and need–one that reflects our values and desires for the future.

I’d like to invite you all to the 9th Annual IndieWeb Summit in Portland, Oregon, USA on June 29-30, 2019. It follows a traditional BarCamp style format, so the conference is only as good as the attendees and the ideas they bring with them, and since everyone is encouraged to actively participate, it also means that everyone is sure to get something interesting and valuable out of the experience.

We need more educators, thinkers, and tinkerers to begin designing and building the ethical and interactive pedagogy systems we all want.

Come and propose a session on a topic you’re interested in exploring and building toward with a group of like-minded people.

While on-site attendance can be exciting and invigorating for those who can come in person, streaming video and online tools should be available to make useful and worthwhile virtual attendance of all the talks, sessions, and even collaborative build time a real possibility as well. I'll also note that travel assistance is available for the Summit if you'd like to apply for it, and that you can donate funds to help others attend.

I hope you can all attend, and I encourage you to invite along friends, students, and colleagues.  

I heartily encourage those who don’t yet have a domain of their own to join in the fun. You’ll find lots of help and encouragement at camp and within the IndieWeb community so that even if you currently think you don’t have any skills, you can put together the resources to get something up and working before the Summit’s weekend is over. We’re also around nearly 24/7 in online chat to continue that support and encouragement both before and after the event so you can continue iterating on things you’d like to have working on your personal website.

Never been to an IndieWebCamp? Click through for some details about what to expect. Still not sure? Feel free to touch base in any way that feels comfortable for you.

Register today: https://2019.indieweb.org/summit#register

👤 @kfitz @holden @btopro @actualham @Downes @bali_maha @timmmmyboy @dr_jdean @cogdog @xolotl @cathieleblanc @BryanAlexander @hibbittsdesign @greeneterry @judell @CathyNDavidson @krisshaffer @readywriting @dancohen @wiobyrne @brumface @MorrisPelzel @econproph @mburtis @floatingtim @ralphbeliveau @ltaub @laurapasquini @amichaelberman @ken_bauer @TaylorJadin @courosa @nlafferty @KayOddone @OnlineCrsLady @opencontent @davecormier @edtechfactotum @daveymoloney @remikalir @jgmac1106 @MiaZamoraPhD @digpedlab @catherinecronin @HybridPed @jimgroom @rboren @cplong @anarchivist @edublogs @jasonpriem @meredithfierro @Autumm @grantpotter @daniellynds @sundilu @OERConf @fncll @jbj @Jessifer @AneliseHShrout @karencang @kmapesy @harmonygritz @slzemke @KeeganSLW @researchremix @JohnStewartPhD @villaronrubia @kreshleman @raynamharris @jessreingold @mattmaldre

Read Power, Polarization, and Tech by Chris (hypervisible.com)
In Howard Zinn’s A People’s History of the United States, he writes about early colonists and how the rich were feeling the heat of poor white folks and poor black folks associating too closely with each other. The fear was that the poor, despite being different races, would unite against their wealthy overlords. Shortly after, the overlords began to pass laws that banned fraternization between the races. The message to poor whites was clear: “you are poor, but you are still far better than that poor black person over there, because you are white.” Polarization is by design, for profit.

👓 Pop Up Ed Tech, Trust, and Ephemerality | ammienoot.com

Read Pop Up Ed Tech, Trust, and Ephemerality (ammienoot.com)
This post captures a back and forth text conversation that Tannis Morgan and I had about an idea that piqued her interest from my NGDLE rant in 2017. I really enjoyed the way we worked this up between us. I wrote a lot of it fast and off the cuff and I’m sure with editing it would be more coherent, but hey ho, it can stand. As an aside we used the excellent Etherpad setup courtesy of the B.C. OpenETC. Etherpad remains one of my favourite tools for super-simple collaborative writing.

IndieWeb Book Club: Ruined By Design

Some of us have thought about doing it before, but perhaps just jumping into the water and trying it out may be the best way to begin designing, testing, and building a true online IndieWeb Book Club.

Ruined By Design

Book cover: title and author on a white background above a red-filtered view of an atomic mushroom cloud over Bikini Atoll in the Pacific Ocean

Earlier this week I saw a notice about an upcoming local event for Mike Monteiro‘s new book Ruined by Design: How Designers Destroyed the World, and What We Can Do to Fix It (Mule Books, March 2019, ISBN: 978-1090532084). Given the IndieWeb’s focus on design which is built into several of their principles, I thought this looked like a good choice for kicking off such an IndieWeb Book Club.

Here’s the description of the book from the publisher:

The world is working exactly as designed. The combustion engine which is destroying our planet’s atmosphere and rapidly making it inhospitable is working exactly as we designed it. Guns, which lead to so much death, work exactly as they’re designed to work. And every time we “improve” their design, they get better at killing. Facebook’s privacy settings, which have outed gay teens to their conservative parents, are working exactly as designed. Their “real names” initiative, which makes it easier for stalkers to re-find their victims, is working exactly as designed. Twitter’s toxicity and lack of civil discourse is working exactly as it’s designed to work.

The world is working exactly as designed. And it’s not working very well. Which means we need to do a better job of designing it. Design is a craft with an amazing amount of power. The power to choose. The power to influence. As designers, we need to see ourselves as gatekeepers of what we are bringing into the world, and what we choose not to bring into the world. Design is a craft with responsibility. The responsibility to help create a better world for all. Design is also a craft with a lot of blood on its hands. Every cigarette ad is on us. Every gun is on us. Every ballot that a voter cannot understand is on us. Every time a social network’s interface allows a stalker to find their victim, that’s on us. The monsters we unleash into the world will carry your name. This book will make you see that design is a political act. What we choose to design is a political act. Who we choose to work for is a political act. Who we choose to work with is a political act. And, most importantly, the people we’ve excluded from these decisions is the biggest (and stupidest) political act we’ve made as a society.

If you’re a designer, this book might make you angry. It should make you angry. But it will also give you the tools you need to make better decisions. 
You will learn how to evaluate the potential benefits and harm of what you’re working on. You’ll learn how to present your concerns. You’ll learn the importance of building and working with diverse teams who can approach problems from multiple points-of-view. You’ll learn how to make a case using data and good storytelling. You’ll learn to say NO in a way that’ll make people listen. But mostly, this book will fill you with the confidence to do the job the way you always wanted to be able to do it. This book will help you understand your responsibilities.

I suspect that this book will be of particular interest to those in the IndieWeb, A Domain of One’s Own, the EdTech space (and OER), and really just about anyone.

How to participate

I’m open to other potential guidelines and thoughts since this is incredibly experimental at best, but I thought I’d lay out the following broad ideas for how we can generally run the book club and everyone can keep track of the pieces online. Feel free to add your thoughts as responses to this post or add them to the IndieWeb wiki’s page https://indieweb.org/IndieWeb_Book_Club.

  • Buy the book or get a copy from your local bookstore
  • Read it along with the group
  • Post your progress, thoughts, replies/comments, highlights, annotations, reactions, quotes, related bookmarks, podcast or microcast episodes, etc. about the book on your own website on your own domain. If your site doesn’t support any of these natively, just do your best and post simple notes that you can share. In the end, this is about the content and the discussion first and the technology second, but feel free to let it encourage you to improve your own site for doing these things along the way.
    • Folks can also post on other websites and platforms if they must, but that sort of defeats some of the purpose of the Indie idea, right?
  • Syndicate your thoughts to indieweb.xyz to the stub indieweb.xyz/en/bookclub/ as the primary location for keeping track of our conversation. Directions for doing this can be found at https://indieweb.xyz/howto/en.
  • Optionally syndicate them to other services like Twitter, Facebook, Instagram, LinkedIn, etc.
  • Optionally mention this original post, and my website will also aggregate the comments via webmention to the comment section below.
  • At regular intervals, check in on the conversations linked on indieweb.xyz/en/bookclub/ and post your replies and reactions about them on your own site.

If your site doesn’t support sending/receiving webmentions (a special type of open web notifications), take a look at Aaron Parecki’s post Sending your first Webmention and keep in mind that you can manually force webmentions with services like Telegraph or Mention-Tech

I’ll also try to keep track of entries I’m aware about on my own site as read or bookmark posts which I’ll tag with (ostensibly for IndieWeb Book Club Mike Monteiro), which we can also use on other social silos for keeping track of the conversation there.

Perhaps as we move along, I’ll look into creating a planet for the club as well as aggregating OPML files of those who create custom feeds for their posts. If I do this it will only be to supplement the aggregation of posts at the stub on indieweb.xyz which should serve as the primary hub for the club’s conversation.

If you haven’t run across it yet you can also use gRegor Morrill‘s IndieBookClub.biz tool in the process. 

If you don’t already have your own website or domain to participate, feel free to join in on other portions of social media, but perhaps consider jumping into the IndieWeb chat to ask about how to get started to better own your online identity and content. 

If you need help putting together your own site, there are many of us out here who can help get you started. I might also recommend using micro.blog, which is an inexpensive and simple way to have your own website. I know that Manton Reece has already purchased a copy of the book himself. I hope that he and the rest of the micro.blog community will participate along with us.

If you feel technically challenged, please ping me about your content and participation, and I’m happy to help aggregate your posts to the indieweb.xyz hub on your behalf. Ideally a panoply of people participating on a variety of technical levels and platforms will help us create a better book club (and a better web) for the future.

Of course, if you feel the itch to build pieces of infrastructure into your own website for improved participation, dive right in. Feel free to document what you’re doing both on your own website and on the IndieWeb wiki so others can take advantage of what you’ve come up with. Also feel free to join in on upcoming Homebrew Website Clubs (either local or virtual) or IndieWebCamps to continue brainstorming and iterating in those spaces as well.

Kickoff and Timeline

I’m syndicating this post to IndieNews for inclusion into next week’s IndieWeb newsletter which will serve as a kickoff notice. That will give folks time to acquire a copy of the book and start reading it. Of course this doesn’t mean that you couldn’t start today.

In the meanwhile, share and repost this article with anyone you think might enjoy participating.

I’ll start reading and take a stab at laying out a rough schedule. If you’re interested in participating, do let me know; we can try to mold the pace to those who actively want to participate.

I’ve already acquired a copy of the book and look forward to reading it along with you.