👓 Limits, schlimits: It’s time to rethink how we teach calculus | Ars Technica

Read Limits, schlimits: It’s time to rethink how we teach calculus by Jennifer Ouellette (Ars Technica)
Ars chats with math teacher Ben Orlin about his book Change Is the Only Constant.

Finally, I decided to build it around all my favorite stories that touched on calculus, stories that get passed around in the faculty lounge, or the things that the professor mentions off-hand during a lecture. I realized that all those little bits of folklore tapped into something that really excited me about calculus. They have a time-tested quality to them where they’ve been told and retold, like an old folk song that has been sharpened over time.

And this is roughly how memory and teaching have always worked. Stories and repetition.
–November 11, 2019 at 09:56AM

Replied to a tweet by Tom Woodward (Twitter)

All the credit really goes to Ryan Barrett and the huge open source crowd in the #IndieWeb who provide a truly magical tech layer for adding onto the space. If you haven’t tried it, step on in and say hello! 👋

🎧 @hypervisible | Gettin’ Air with Terry Green

Listened to @hypervisible from voicEd | voiced.ca
Gettin’ Air with @hypervisible, Professor of English at Macomb Community College. Self-described as “The Beavis of Twitter”, we chat about how he works to raise awareness of the absurd and abusive tech practices of various platforms and companies and how he helps his students navigate this increasingly dystopian world. “There’s always material to talk about how something sucks if you’re in the ed-tech space”.

The description far undersells the great conversation here. I almost wish it had been recorded just after Halloween so we might have gotten some discussion about Ring Doorbells recording millions of children and then promoting that scary fact in their marketing and social media.

Somehow @hypervisible blocked me on Twitter, which is a painful shame–at least for me. He’s one of the few researchers to have done so and one of the few people for whom it’s worth keeping a separate account just to read his content. I’m glad that others like Terry help to get his message out in other ways!

Chris also mentions a great list of recommended reads at 11:30 into the episode including:

👓 For-profit, faux-pen, and critical conversations about the future of learning materials | Rajiv Jhangiani, Ph.D.

Read For-profit, faux-pen, and critical conversations about the future of learning materials by Rajiv Jhangiani, Ph.D. (Rajiv Jhangiani, Ph.D.)
I remember the first time I heard the term “free riders” being used in the context of the open education movement. It was at the Open Education Conference in 2015 in Vancouver when, dur…

Of course the Open Education conference is just an open education conference and it certainly isn’t the only place to have these conversations. Regional events such as the Northeast OER Summit, the Cascadia Open Education Summit, Wisconsin’s E-ffordability Summit, the Statewide Colorado OER conference and others are wonderful options. Further afield, the OER conference and the Open Education Global conference are both events that welcome critical conversations. As do other events like Digital Pedagogy Lab and the many virtual conference hallway conversations facilitated by Virtually Connecting.

Nice list of open education and OER related conferences and communities.

Expanding Ekphrasis to the Broader Field of Mnemotechny: or How the Shield of Achilles Relates to a Towel, Car, and Water Buffalo

If Lynne Kelly‘s thesis about the methods of memory used by indigenous peoples is correct, and I strongly believe it is, then the concept of ekphrasis as illustrated in the description of the Shield of Achilles in Homer’s Iliad (Book 18, lines 478–608) is far more useful than we may have previously known. I strongly suspect that Achilles’ Shield is an early sung version of a memory palace to which were once attached other (now lost) memories from Bronze Age Greece.

The word ekphrasis, or ecphrasis, comes from the Greek for the description of a work of art produced as a rhetorical exercise, often used in the adjectival form ekphrastic.—Wikipedia

While many may consider this example of Homer’s to be the first instance of ekphrasis within literature (primarily because it specifically depicts an artwork, which is part of the more formal definition of the word), I would posit that even earlier descriptions in the Iliad itself, which go into great detail about individuals and their methods of death, are also included in a broader conception of ekphrasis. This larger ekphrasis subsumes all of these descriptions in a tradition of orality as portions of ancient memory palaces within a broader field of mnemotechny. I imagine that these graphic, bloody, and larger-than-life depictions of death not only encoded the names and ideas of the original people/ancestors, but quite likely had additional layers of memory encoded (or attached) to them as well. Here I’m suggesting that, whether or not an actual shield originally existed, even once the physical shield or other object is gone or lost, the remembered story of the shield still provides a memory palace to which other ideas can be attached.

(I’ll remind the forgetful reader that mnemotechny grows out of the ancient art of rhetoric as envisioned in Rhetorica ad Herennium, and thus the use of ekphrasis as a rhetorical device implicitly subsumes the idea of memory, though most modern readers may not have that association.)

Later versions of ekphrasis in post-literate history may have been more about the arts themselves and related references and commentary (example: Keats’ Ode on a Grecian Urn), but I have a strong feeling that this idea’s original incarnation was more closely related to early memory methods at the border of oral and literate societies.

In other words, ancient performers, poets, etc. may have created their own memory palaces by which they were able to remember long stories like the Iliad, but who is to say that these stories weren’t in turn memory palaces for their listeners? I myself have previously used the plot and portions of the movie Fletch as a meta memory palace in just this way. As a result of ritualistic semi-annual re-watchings of classic and engaging movies like this, I can dramatically expand my collection of memory palaces. The best part is that while my exterior physical location may change, classic movies will always stay the same. And in a different framing, my memories of portions of history may also help me recall a plethora of famous movie quotes as well.

Can I borrow your towel? My car just hit a water buffalo.—Irwin M. Fletcher

Social Reading User Interface for Discovery

I read quite a bit of material online. I save “bookmarks” of all of it on my personal website, sometimes with some additional notes and sometimes even with more explicit annotations. One of the things I feel like I’m missing from my browser, browser extensions, and/or social feed reader is a social layer overlay that could indicate that people in my social network(s) have read or interacted directly with that page (presuming they make that data openly available.)

One of the things I’d love to see pop up out of the discovery explorations of the IndieWeb or some of the social readers in the space is the ability to uncover some of this social reading information. Toward this end I thought I’d collect some user interface examples of things that border on this sort of data to make the brainstorming and building of such functionality easier in the near future.

If I’m missing useful examples or you’d like to add additional thoughts, please feel free to comment below.

Examples of social reading user interface for discovery

Google

I don’t often search for reading material directly, but Google has a related bit of UI indicating that I’ve visited a website before. I sort of wish it could surface, within the browser interface for a particular article (without the search), the fact that I’ve previously read or bookmarked it, or data about people in my social network who’ve done similarly. If a browser could use data from my personal website in the background to indicate that I’ve interacted with a page before (and provide those links, notes, etc.), that would be awesome!
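
To make the idea concrete, here’s a minimal sketch in Python of the sort of background lookup a browser extension could do against one’s own site. It assumes a WordPress site with the core REST API enabled; the domain, the target URL, and the simple search-based matching are all illustrative assumptions rather than an existing tool.

# Hypothetical sketch: ask my own WordPress site whether I've already
# read, bookmarked, or replied to a given URL via the core REST API.
# The site address and the search-based matching are illustrative assumptions.
import requests

MY_SITE = "https://example.com"  # stand-in for one's personal WordPress site

def prior_interactions(url):
    resp = requests.get(
        MY_SITE + "/wp-json/wp/v2/posts",
        params={"search": url, "per_page": 5},
        timeout=10,
    )
    resp.raise_for_status()
    # Any post mentioning the URL is treated as a prior read/bookmark/reply.
    return [post["link"] for post in resp.json()]

print(prior_interactions("https://example.org/some-article/"))

A browser extension could run something like this quietly in the background and light up an icon whenever the list comes back non-empty.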

Screen capture for Google search of Kevin Marks with a highlight indicating that I’ve visited his page several times in the past. Given the March 2017 date, it’s obvious that the screen shot is from a browser and account I don’t use often.

I’ll note here that because of the way I bookmark or post reads on my own website, my site often ranks reasonably well for those things.

On a search for an article by Aaron Parecki, my own post indicating that I’ve read it in the past ranks second right under the original.

In some cases, others who are posting about those things (reading, commenting, bookmarking, liking, etc.) in my social network also show up in these sorts of searches. How cool would it be to have a social reader that could display this sort of social data based on people it knows I’m following?

A search for a great article by Matthias Ott shows that both I and several of my friends (indicated by red arrows superimposed on the search query) have read, bookmarked, or commented on it too.

Hypothes.is

Hypothes.is is a great open source highlighting, annotation, and bookmarking tool with a browser extension that shows an indicator of how many annotations appear on the page. In my experience, higher numbers often indicate some interesting and engaging material. I do wish that it had a follower/following model that could indicate when people in my social sphere have annotated a page. I also wouldn’t mind if their extension “bug” in the browser bar had another indicator in the other corner to show that I had previously annotated a page!
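
For the curious, the count shown by the extension bug is also available from Hypothes.is’s public search API, so a feed reader or dashboard could surface it as well. A minimal sketch in Python; the URL passed in is simply meant to stand in for the As We May Think page pictured below.

# Minimal sketch: ask the public Hypothes.is search API how many annotations
# exist for a given page, i.e. roughly the number the extension badge shows.
import requests

def annotation_count(url):
    resp = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": url, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("total", 0)

print(annotation_count("https://www.theatlantic.com/magazine/archive/1945/07/as-we-may-think/303881/"))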

Screen capture of Vannevar Bush’s article As We May Think in The Atlantic with a Hypothes.is browser extension bug indicating that there are 329 annotations on the page.

Reading.am

It doesn’t do it until after the fact, but Reading.am has a pop-up overlay through its browser extension. It adds me to the list of people who’ve read an article, but it also indicates others in the network and those I’m following who have also read it (sometimes along with annotations about their thoughts).

What I wouldn’t give to see that pop up in the corner before I’ve read it!

Reading.am’s social layer creates a yellow-colored pop-up list in the upper right of the browser indicating who else has read the article as well as showing some of their notes on it. Unfortunately it doesn’t pop up until after you’ve marked the item as read.

Nuzzel

Nuzzel is one of my favorite tools. I input my Twitter account as well as some custom lists, and it surfaces articles that people in my Twitter network have been tweeting about. As a result, it’s one of the best discovery tools out there for solid longer-form content. Rarely do I read content coming out of Nuzzel and feel robbed. Because of how it works, it automatically shows those people in my network and some of what they’ve thought about an article. I love this contextualization.

Nuzzel’s interface shows the title and an excerpt of an article and also includes the avatars, names, network, and commentary of one’s friends that interacted with the piece. In this example it’s relatively obvious that one reader influenced several others who retweeted it because of her.

Goodreads

Naturally, sites for much longer-form content will use social network data about interest, reviews, and interaction to a much greater extent since there is a larger investment of time involved. Thus social signaling can be more valuable in this context. A great example here is Goodreads, which shows me those in my network who are interested in reading a particular book or who have written reviews or given ratings.

A slightly excerpted/modified screen capture of the Goodreads page for Melanie Mitchell’s book Complexity that indicates several in my social network are also interested in reading it.

Are there other examples I’m missing? Are you aware of similar discovery related tools for reading that leverage social network data?

Replied to #oext372 #oextend What is inside your open education fortune cookie? | The Daily Extend (extend-daily.ecampusontario.ca)

What will be your future in open education? It will likely not be written inside a fortune cookie, but you can make sure you get the fortune you want.

Use the PhotoFunia Fortune Cookie Generator to produce the one you would like to see happen for yourself.

Borrowed from the Mural UDG Daily Opener 17

Here’s what I found in my fortune cookie:

A plate of fortune cookies with one broken open containing the fortune "A more open and independent web is yours for the making. #IndieWeb"

Photo made using PhotoFunia

Replied to #oext374 #oextend Send Someone a Message from the Internet Archive’s Great 78 Collection | The Daily Extend (extend-daily.ecampusontario.ca)

Not only is it digitized analog, it’s an amazing open resource of music history. The Internet Archive’s Great 78 Project has over 150,000 digitized 78rpm discs

Browse the collection, and look for a title that represents how you feel about Ontario Extend, or a colleague’s work. Tweet it out so we can all listen to the digital record spin.

Yes, Extend, You Are My Sunshine.

While looking forward to IndieWeb Summit this weekend, I take a listen back at the past courtesy of Ontario Extend and the Great 78 Project at the Internet Archive.

Screenshot of Happy Days Will Come player page
Happy Days Will Come by University Dance Orchestra; Houser (1925)
From original 78rpm recording published by Grey Gull

Domains, power, the commons, credit, SEO, and some code implications

How to provide better credit on the web using the standard rel=“canonical” by looking at an example from the Open Learner Patchbook

A couple of weeks back, I began following Cassie Nooyen after noticing her at the Domains 2019 conference, which I followed fairly closely online.

She was a presenter and wrote a couple of nice follow-up pieces about her experiences on her website. I bookmarked one of them to read later, and then two days later I came across this tweet by Terry Green, who had also apparently noticed her post:

But I was surprised to see the link in the tweet points to a different post in the Open Learner Patchbook, which is an interesting site in and of itself.

This means that there are now at least two full copies of Cassie’s post online:

While I didn’t see a Creative Commons notice, any mention of permissions, or even a link to the original on the copy at the Open Patchbook, I don’t doubt that Terry asked Cassie for permission to post a copy of her work on his site. I also suspect that Cassie might not have wanted any attention drawn to herself or her post on her site and may have eschewed a link to it. I will note that the Open Patchbook did have a link to her Twitter presence as a means of credit. (I’ll still maintain that people should prefer links to their own domain over Twitter for credits like these–take back your power!)

Even with these crediting caveats aside, there’s a subtle technical piece hiding here relating to search engines and search engine optimization that many in the Domain of One’s Own space may not realize exists, or, if they do, may not be sure how to fix. This technical subtlety is that search engines attempt to assign proper credit too. As a result there’s a very high chance that the Open Patchbook copy could rank higher in search for Cassie’s own post than Cassie’s original. As researchers and educators we’d obviously vastly prefer the original to get the credit. So what’s going on here?

Search engines use a web standard known as rel=“canonical”, a microformat which is most often found in the HTML <head> of a web page. If we view the current source of the copy on the Open Learner Patchbook, we’ll see the following:

<link rel="canonical" href="http://openlearnerpatchbook.org/technology/patch-twenty-five-my-domain-my-place-to-grow/" />

According to the Microformats wiki:

By adding rel=“canonical” to a hyperlink, a page indicates that the destination of that hyperlink should be considered the preferred or definitive version of the current page. This helps search engines avoid duplicate content, and is useful for deciding how to link to a page when citing it.

In the case of our example of Cassie’s post, search engines will treat the two pages as completely separate, but will suspect that one is a duplicate of the other. This could have dramatic consequences for one site or the other, in which search engines will choose one to prefer over the other, and, in some cases, may penalize a site for having duplicate content and not stating that fact (in its metadata). Typically this would have more drastic and adverse consequences for Cassie’s original in comparison with an institutional site.
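
A quick way to see this for yourself is to fetch a copy and report which URL it declares as canonical. Here’s a rough diagnostic sketch in Python, assuming the requests and BeautifulSoup libraries are installed; it only inspects the metadata, it doesn’t fix anything.

# Diagnostic sketch: report which URL a page declares as its canonical source,
# so you can spot copies that (incorrectly) point at themselves.
import requests
from bs4 import BeautifulSoup

def declared_canonical(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    link = soup.find("link", rel="canonical")
    return link.get("href") if link else None

copy = "http://openlearnerpatchbook.org/technology/patch-twenty-five-my-domain-my-place-to-grow/"
print(declared_canonical(copy))  # currently points back at the copy itself rather than Cassie's original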

How do we fix the injustice of this metadata? 

There are a variety of ways, but I’ll focus on several in the WordPress space. 

WordPress core has built-in functionality that sets the permalink for a particular page as its canonical URL, which is why the Open Patchbook copy declares itself, rather than Cassie’s original, as the canonical link. Since most people are likely to already have an SEO-related plugin installed on their site and almost all of them have this capability, such a plugin is likely the quickest and easiest method for changing canonical links for pages and posts. Two popular choices are Yoast and All in One SEO, which both have simple settings for inputting and saving alternate canonical URLs. Yoast documents the steps pretty well, so I’ll provide an example using All in One SEO:

  • If not done already, click the checkbox for canonical URLs in the “General Settings” section for the plugin generally found at /wp-admin/admin.php?page=all-in-one-seo-pack%2Faioseop_class.php.
  • For the post (or page) in question, within the All in One SEO metabox in the admin interface (pictured), put the full URL of the original post’s location.
  • (Re-)publish the post.

Screenshot of the AIOSEO metabox with the field for the Canonical URL outlined in red

If you’re using another SEO plugin, it likely handles canonical URLs similarly, so check their documentation.

For aggregation websites, like the Open Learner Patchbook, there’s also another solid option for not only setting the canonical URL, but for more quickly copying the original post as well. In these cases I love PressForward, a WordPress plugin from the Roy Rosenzweig Center for History and New Media which was designed with the education space in mind. The plugin allows one to quickly gather, organize, and republish content from other places on the web. It does so in a smart and ethical way and provides ample opportunity for providing appropriate citations as well as, for our purposes, setting the original URL as the canonical one. Because PressForward is such a powerful and diverse tool (as well as a built-in feed reader for your WordPress website), I’ll refer users to their excellent documentation.

Another useful reason I’ll mention for using rel-canonical markup is that I’ve seen cases in which it allows other web standards-based tools like Hypothes.is to match pages for highlights and annotations. I suspect that if the Open Patchbook page did have the canonical link specified, any annotations made on it with Hypothes.is would mirror properly on the original as well (and vice versa).

I also suspect that there are some valuable uses of this sort of small metadata-based markup within the Open Educational Resources (OER) space.

In short, when copying and reposting content from an original source online, it’s both courteous and useful to mark the copy as such by adding a rel=“canonical” tag pointing at the URL of the original, providing it with full credit as the canonical source.

👓 Don’t just Google it! First, let’s talk! | Jon Udell

Read Don’t just Google it! First, let’s talk! by Jon Udell (Jon Udell)
Asking questions in conversation has become problematic. For example, try saying this out loud: “I wonder when Martin Luther King was born?” If you ask that online, a likely response is: “Just Google it!” Maybe with a snarky link: http://lmgtfy.com/?q=when was martin luther king born? https:...

I love the idea of this… It’s very similar to helping to teach young children how to attack and solve problems in mathematics rather than simply saying “follow this algorithm.”

👓 What Having a Domain for a Year Has Taught Me | Cassie Nooyen

Read What Having a Domain for a Year Has Taught Me by Cassie Nooyen (techbar.crnooyen.knight.domains)
This summer marks the one-year anniversary of acquiring my domain through St. Norbert’s “Domain of One’s Own” program Knight Domains. I have learned a few important lessons over the past year about what having your own domain can mean.

📺 Is that a toothpick or a flux capacitor? Oh wait, it’s Google Sheets. | Domains 2019 | Jeff Everhart, Tom Woodward, Matt Roberts

Watched Is that a toothpick or a flux capacitor? Oh wait, it's Google Sheets. by Jeff Everhart, Tom Woodward, Matt Roberts from Domains 2019 | YouTube

Are you looking for low stakes ways to store and display data? Welp, here’s Google Sheets. Do you want to automate all of the boring parts of your job and sip a drink on a beach somewhere? Looks like you owe Google Sheets a beer. Have you ever wanted to build a lightweight full stack application without spinning up an orchestrated Docker container cluster running on AWS using Typescript that has 90% unit test coverage. Well, hold on to your hats, cause Google Sheets is about to hit 88 MPH while keeping your molecular structure intact.

At VCU’s ALT Lab, we’ve used Google Sheets to build educational experiences that range from novel, to complex, to entirely absurd. Brace yourself for temporal displacement and a little bit of JavaScript.

There’s some low-level stuff here that could be dovetailed with IFTTT.com to do some simple automation, perhaps for addressing Snarfed’s backfeed problem.

Domains 2019 Reflections from Afar

My OPML Domains Project

Not being able to attend Domains 2019 in person, I was bound and determined to attend as much of it as I could manage remotely. A lot of this revolved around following the hashtag for the conference, watching the Virtually Connecting sessions, interacting online, and starting to watch the archived videos after-the-fact. Even with all of this, for a while I had been meaning to flesh out my ability to follow the domains (aka websites) of other attendees and people in the space. Currently the easiest way (for me) to do this is via RSS with a feed reader, so I began collecting feeds of those from the Twitter list of Domains ’17 and Domains ’19 attendees as well as others in the education-related space who tweet about A Domain of One’s Own or IndieWeb. In some sense, I would be doing some additional aggregation work on expanding my blogroll, or, as I call it now, my following page since it’s much too large and diverse to fit into a sidebar on my website.

For some brief background, my following page is built on some old functionality in WordPress core that has since been hidden. I’m using the old Links Manager for collecting links and feeds of people, projects, groups, and institutions. This link manager creates standard OPML files, which WordPress can break up by category and which can easily be imported into and exported from most standard feed readers. Even better, some feed readers, like Inoreader, support OPML subscriptions, so one could subscribe to my OPML file, and any time I update it in the future with new subscriptions, your feed reader would automatically update to follow those as well. I use this functionality in my own Inoreader account, so that any new subscriptions I add to my own site are simply synced to my feed reader without needing to be separately added or updated.
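
If your reader doesn’t support OPML subscriptions directly, pulling the feed URLs back out of the file is trivial. Here’s a short Python sketch using only the standard library; the filename is just a placeholder for whatever copy you download.

# Sketch: list the feed URLs inside an exported OPML file.
# OPML stores each subscription as an <outline> element with an xmlUrl attribute.
import xml.etree.ElementTree as ET

def feeds_from_opml(path):
    tree = ET.parse(path)
    return [o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")]

for feed_url in feeds_from_opml("following.opml"):  # placeholder filename
    print(feed_url)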

The best part of creating such a list and publishing it in a standard format is that you, dear reader, don’t need to spend the several hours I did to find, curate, and compile the list to recreate it for yourself, but you can now download it, modify it if necessary, and have a copy for yourself in just a few minutes. (Toward that end, I’m also happy to update it or make additions if others think it’s missing anyone interesting in the space–feedback, questions, and comments are heartily encouraged.) You can see a human-readable version of the list at this link, or find the computer parse-able/feed reader subscribe-able link here.

To make it explicit, I’ll also note that these lists also help me to keep up with people and changes in the timeframe between conferences.

Anecdotal Domains observations

In executing this OPML project I noticed some interesting things about the Domains community at large (or at least those who are avid enough to travel and attend in person or actively engage online). I’ll lay these out below. Perhaps at a future date, I’ll do a more explicit capture of the data with some analysis.

The vast majority of sites I came across were, unsurprisingly, WordPress-based, which made it much easier to find RSS feeds to read/consume material. I could simply take a domain name and add /feed/ to the end of the URL, and voilà, a relatively quick follow!
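
For anyone automating this kind of collection, the manual trick above reduces to a couple of checks: try the WordPress-style /feed/ path first and, failing that, look for the page’s feed autodiscovery link. A rough Python sketch, assuming the requests and BeautifulSoup libraries and with error handling kept minimal:

# Rough sketch of the manual feed hunt: try the WordPress /feed/ convention,
# then fall back to the site's <link rel="alternate"> autodiscovery tags.
import requests
from bs4 import BeautifulSoup

def find_feed(domain):
    guess = domain.rstrip("/") + "/feed/"
    try:
        if requests.get(guess, timeout=10).ok:
            return guess
    except requests.RequestException:
        pass
    html = requests.get(domain, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("link", rel="alternate"):
        if "rss" in (link.get("type") or "") or "atom" in (link.get("type") or ""):
            return link.get("href")
    return None

print(find_feed("https://example.com"))  # placeholder domain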

There are a lot of people whose sites didn’t have obvious links to their feeds. To me this is a desperate tragedy for the open web. We’re already behind the eight ball compared to social media and corporate-controlled sites; why make it harder for people to read/consume our content from our own domains? And as if to add insult to injury, the places on one’s website where an RSS feed link/icon would typically live were instead populated by links to corporate social media like Facebook, Twitter, and Instagram. In a few cases I also saw legacy links to Google+, which ended service and disappeared from the web along with a tremendous number of online identities and personal data on April 2, 2019. (Here’s a reminder to remove those if you’ve forgotten.) For those who are also facing this problem, there’s a fantastic service called SubToMe that offers a universal follow button which can be installed on one’s site or used via a browser bookmarklet and which works with a wide variety of feed readers.

I was thrilled to see a few people using interesting alternate content management systems/site generators like WithKnown and Grav. There were also several people who had branched out to static site generators (sites without a database). This sort of plurality is a great thing for the community, and competition in the space for sites, design, user experience, etc. is awesome. It’s thrilling to see people in the Domains space taking advantage of alternate options, experimenting with them, and using them in the wild.

I’ll note that I did see a few poor souls who were using Wix. I know there was at least one warning about Wix at the conference, but in case it wasn’t stated explicitly, Wix does not support exporting data, which makes any potential future migration of sites difficult. Definitely don’t use it for any extended writing, as cutting and pasting more than a few simple static pages becomes onerous. To make matters worse, Wix doesn’t offer any sort of backup service, so if they chose to shut your site off for any reason, you’d be completely out of luck. No backup + no export = I can’t recommend using it.

If your account or any of your services are cancelled, it may result in loss of content and data. You are responsible to back up your data and materials. —Wix Terms of Use

I also noticed a few people had generic domain names that they didn’t really own (and not even in the sense of rental ownership). Here I’m talking about domain names of the form username.domainsproject.com. While I’m glad that they have a domain that they can use and generally control, it’s not one over which they can truly exert full ownership. (They just can’t pick it up and take it with them.) Even if they could export/import their data to another service or even a different content management system, all their old links would immediately disappear from the web. In the case of students, while it’s nice that their school may provide this space, it is more problematic for data portability and longevity on the web that they’ll eventually lose that institutional domain name when they graduate. On the other hand, if you have something like yourname.com as your digital home, you can export/import, change content management systems, hosting companies, etc., and all your content will still resolve and you’ll be eminently more findable by your friends and colleagues. This choice is essentially the internet equivalent of changing cellular providers from Sprint to AT&T but taking your phone number with you–you may change providers, but people will still know where to find you without being any the wiser about your service provider changes. I think that to allow students and faculty the ability to more easily move their content and their sites, Domains projects should require individual custom domains.

If you don’t own/control your actual domain name, you’re prone to lose a lot of value built up in your permalinks. I’m also reminded here of the situation encountered by faculty who move from one university to another. (Congratulations, by the way, to Martha Burtis on the pending move to Plymouth State. You’ll notice she won’t face this problem.) There’s also the situation of Matthew Green, a security researcher at Johns Hopkins, whose institutional website was taken down by his university when the National Security Agency flagged an apparent issue. Fortunately, in his case, he had his own separate domain name and content on an external server, and his institutional account was just a mirrored copy of his own domain.

If you’ve got it, flaunt it.
—Mel Brooks, from The Producers (1968), obviously with the “it” being a referent to A Domain of One’s Own.

Also during my project, I noted that quite a lot of people don’t list their own personal/professional domains in their Twitter or other social media profiles. This seems a glaring omission, particularly for at least one person whose Twitter bio creatively and proactively claims that they’re an avid proponent of A Domain of One’s Own.

And finally, there were a small–but still reasonable–number of people within the community for whom I couldn’t find a domain at all! A small number are assuredly new to the space or exploring it, and so I’d give them a pass, but I was honestly shocked that some just didn’t seem to have one.

(Caveat: I’ll freely admit that the value of Domains is that one has ultimate control, including the right not to have or use one, or even to have a private, hidden, and completely locked down one, just the way that Dalton chose not to walk in the conformity scene in Dead Poets Society. But even with this in mind, how can we ethically recommend this pathway to students, friends, and colleagues if we’re not willing to participate ourselves?)

Too much Twitter & a challenge for the next Domains Conference

One of the things that shocked me most at a working conference about the idea of A Domain of One’s Own within education, where more than significant time was given to the ideas of privacy, tracking, and surveillance, was the extent to which nearly everyone present gave up their identity, authority, and digital autonomy to Twitter, a company which actively represents almost every version of the poor ethics, surveillance, tracking, and design choices we all abhor within the edtech space.

Why weren’t people proactively using their own domains to communicate instead? Why weren’t their notes, observations, highlights, bookmarks, likes, reposts, etc. posted to their own websites? Isn’t that part of what we’re in all this for?!

One of the shining examples from Domains 2019 that I caught as it was occurring was John Stewart’s site, where he was aggregating talk titles, abstracts, notes, and other details relevant to himself and his practice. He then published them in the open and syndicated copies to Twitter, where the rest of the conversation seemed to be happening. His living notebook–or digital commonplace book, if you will–is of immense value not only to him, but to all who are able to access it. But you may ask, “Chris, didn’t you notice them on Twitter first?” In fact, I did not! I caught them because I was following the live feeds of some of the researchers, educators, and technologists I follow in my feed reader using the OPML files mentioned above. I would submit, especially as a remote participant/follower of the conversation, that his individual posts were worth 50 or more individual tweets. Just the additional context they contained made them proverbially worth their weight in gold.

Perhaps for the next conference, we might build a planet or site that could aggregate all the feeds of people’s domains, using their categories/tags or other means, to create our own version of the Twitter stream? Alternatively, by that time, I suspect that work on some of the new IndieWeb readers will have solidified enough to allow people to read feeds and interact with that content directly and immediately, in much the way Twitter works now, except that all the interaction will occur on our own domains.
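
As a very rough sketch of how little code such a planet needs, here’s a Python version that reads a shared OPML file like the one above and prints one merged, reverse-chronological stream. The feedparser library and the filename are assumptions; a real site would render HTML, cache feeds, and handle errors.

# Bare-bones "planet" sketch: read an OPML subscription list, fetch each feed,
# and merge the most recent entries into one reverse-chronological stream.
import time
import xml.etree.ElementTree as ET
import feedparser  # assumed to be installed

def planet(opml_path, per_feed=5):
    tree = ET.parse(opml_path)
    feeds = [o.get("xmlUrl") for o in tree.iter("outline") if o.get("xmlUrl")]
    entries = []
    for url in feeds:
        parsed = feedparser.parse(url)
        for entry in parsed.entries[:per_feed]:
            stamp = entry.get("published_parsed") or entry.get("updated_parsed") or time.gmtime(0)
            entries.append((stamp, entry.get("title", "Untitled"), entry.get("link", url)))
    for stamp, title, link in sorted(entries, reverse=True):
        print(time.strftime("%Y-%m-%d", stamp), "|", title, "|", link)

planet("domains-following.opml")  # placeholder filename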

As educators, one of the most valuable things we can and should do is model appropriate behavior for students. I think it’s high time that when attending a professional conference about A Domain of One’s Own that we all ought to be actively doing it using our own domains. Maybe we could even quit putting our Twitter handles on our slides, and just put our domain names on them instead?

Of course, I wouldn’t and couldn’t suggest or even ask others to do this if I weren’t willing and able to do it myself. So as a trial and proof of concept, I’ve actively posted all of the Domains 2019-related interactions I was interested in to my own website using the tag Domains 2019. At that URL, you’ll find all the things I liked and bookmarked, as well as the bits of conversation on Twitter and others’ sites that I’ve commented on or replied to. All of it originated on my own domain, and, when it appeared on Twitter, it was syndicated there only secondarily so that others would see it, since that was where the conversation was generally being aggregated. You can almost go back and recreate my entire Domains 2019 experience in real time by following my posts, notes, and details on my personal website.

So, next time around can we make an attempt to dump Twitter!? The technology for pulling it off certainly already exists, and is reasonably well-supported by WordPress, WithKnown, Grav, and even some of the static site generators I noticed in my brief survey above. (Wix obviously doesn’t even come close…)

I’m more than happy to help people build and flesh out the infrastructure necessary to try to make the jump. Even if just a few of us began doing it, we could serve as that all-important model for others as well as for our students and other constituencies. With a bit of help and effort before the next Domains Conference, I’ll bet we could collectively pull it off. I think many of us are either well- or even over-versed in the toxicities and surveillance underpinnings of social media, learning management systems, and other digital products in the edtech space, but now we ought to attempt a move away from it with an infrastructure that is our own–our Domains.