Deplatforming and making the web a better place

I’ve spent some time this morning thinking about the deplatforming of the abhorrent social media site Gab.ai by Google, Apple, Stripe, PayPal, and Medium following the Tree of Life shooting in Pennsylvania. I’ve created a deplatforming page on the IndieWeb wiki with some initial background and history. I’ve also gone back and tagged (with “deplatforming”) a few articles I’ve read or podcasts I’ve listened to recently that may have some interesting bearing on the topic.

The particular design question I’m personally looking at is roughly:

How can we reshape the web and social media in a way that allows individuals and organizations a platform for their own free speech and communication without accelerating or amplifying the voices of the abhorrent fringes of people espousing broadly anti-social values like virulent discrimination, racism, fascism, etc.?

In some sense, the advertising-driven social media sites like Facebook, Twitter, et al. have given the masses the equivalent of not simply a louder voice within their communities, but potential megaphones to audiences previously far, far beyond their reach. When monetized against the tremendous value of billions of clicks, there is almost no reason for these corporate giants to filter or moderate socially abhorrent content. Their unfiltered and unregulated algorithms compound the issue from a societal perspective. I look at it in some sense as the equivalent of the advent of machine guns and ultimately nuclear weapons in 20th century warfare and their extreme effects on modern society.

The flip side of the coin is to give users better control over, and the ability to filter, what platforms present to them and what they consume; solutions can thus address both the output and the input stages.

Comments and additions to the page (or even here below), particularly with respect to positive framing and potential solutions for how best to approach this design hurdle for human communication, are more than welcome.


Deplatforming

Deplatforming or no platform is a form of banning in which a person or organization is denied the use of a platform (physical or increasingly virtual) on which to speak.

In addition to the banning of those with socially unacceptable viewpoints, there has been a long history of marginalized voices (particularly trans, LGBTQ, sex workers, etc.) being deplatformed in systematic ways.

The banning can be from any of a variety of spaces, ranging from physical meeting spaces or lectures and journalistic coverage in newspapers or on television, to domain name registration, web hosting, and even specific social media platforms like Facebook or Twitter. Some have used these terms as narrowly as in relation to having their Twitter “verified” status removed.

“We need to puncture this myth that [deplatforming]’s only affecting far-right people. Trans rights activists, Black Lives Matter organizers, LGBTQI people have been demonetized or deranked. The reason we’re talking about far-right people is that they have coverage on Fox News and representatives in Congress holding hearings. They already have political power.” — Deplatforming Works: Alex Jones says getting banned by YouTube and Facebook will only make him stronger. The research says that’s not true. in Motherboard 2018-08-10

Examples

Glenn Beck

Glenn Beck parted ways with Fox News in what some consider to have been a network deplatforming. He ultimately moved to his own platform consisting of his own website.

Reddit Communities

Reddit has previously banned several communities on its platform. Many of the individual users decamped to Voat, which like Gab could potentially face its own subsequent deplatforming.

Milo Yiannopoulos

Milo Yiannopoulos, the former Breitbart personality, was permanently banned from Twitter in 2016 for inciting targeted harassment campaigns against actress Leslie Jones. He resigned from Breitbart over comments he made about pedophilia on a podcast. These also resulted in the termination of a book deal with Simon & Schuster as well as the cancellation of multiple speaking engagements at universities.

The Daily Stormer

Neo-Nazi site The Daily Stormer was deplatformed by Cloudflare in the wake of 2017’s “Unite the Right” rally in Charlottesville. Following criticism, Matthew Prince, Cloudflare’s CEO, announced that he was ending the Daily Stormer’s relationship with Cloudflare, which provides services for protecting sites against distributed denial-of-service (DDoS) attacks and maintaining their stability.

Alex Jones/Infowars

Alex Jones and his Infowars were deplatformed by Apple, Spotify, YouTube, and Facebook in late summer 2018 for his network’s false claims about the Newtown shooting.

Gab

Gab.ai was deplatformed from PayPal, Stripe, Medium, Apple, and Google as a result of its providing a platform for alt-right and racist groups, as well as for the shooter in the Tree of Life Synagogue shooting in October 2018.

Gab.com is under attack. We have been systematically no-platformed by App Stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure that justice is served for the horrible atrocity committed in Pittsburgh. Gab will continue to fight for the fundamental human right to speak freely. As we transition to a new hosting provider Gab will be inaccessible for a period of time. We are working around the clock to get Gab.com back online. Thank you and remember to speak freely.

—from the Gab.ai homepage on 2018-10-29

History

Articles

Research

See Also

  • web hosting
  • why
  • shadow banning
  • NIPSA
  • demonetization – a practice (particularly leveled at YouTube) of preventing users and voices from monetizing their channels. This can have a chilling effect on people who rely on traffic for income to support their work
Syndicated copies to:

Following local Altadena and Pasadena News

I’ve been thinking more about local news lately, so I’ve taken some time to aggregate some of my local news sources. While I live in the Los Angeles area, it’s not like I’m eschewing the Los Angeles Times, but I wanted to go even more uber-local than this. Thus I’m looking more closely at my local Altadena and Pasadena news outlets. I’m a bit surprised to see just how many small outlets and options I’ve got! People say local news is dying or dead, so I thought I would only find two or three options–how wrong could I have been?

In addition to some straightforward journalistic news sources, I’ve also included some additional local-flavor news from town councils, the chamber of commerce, historical societies, etc., which have websites that produce feeds with occasional news items.

Going forward you can see these sources aggregated on my following page.

For those who are interested, I’ve created an OPML file containing the RSS feeds of all these sources so they can easily be followed as well. Naturally most of the outlets have other social media presences, but there’s usually no guarantee that following them that way will actually surface the news you wanted.
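For anyone curious what such a file looks like under the hood, OPML is just a small XML outline of feeds, and generating one takes only a few lines. Here is a sketch with placeholder outlet names and URLs (not my actual list):

```python
# A minimal sketch of generating an OPML subscription list from a set of
# feeds. The outlets and URLs below are placeholders, not the real ones.
import xml.etree.ElementTree as ET

feeds = [
    ("Altadena Example News", "https://example.com/altadena/feed/"),
    ("Pasadena Example Weekly", "https://example.com/pasadena/feed/"),
]

opml = ET.Element("opml", version="2.0")
head = ET.SubElement(opml, "head")
ET.SubElement(head, "title").text = "Altadena and Pasadena news"
body = ET.SubElement(opml, "body")
for title, url in feeds:
    # Each feed becomes an <outline> element with its feed URL in xmlUrl.
    ET.SubElement(body, "outline", type="rss", text=title, xmlUrl=url)

print(ET.tostring(opml, encoding="unicode"))
```

Most feed readers can import a file like this directly, which is what makes OPML handy for sharing a whole subscription list at once.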

If anyone is aware of other sources, I’m happy to add them to the list.

Here’s the initial list of sources:


An IndieWeb talk at WordCamp Riverside in November 2018

I’ve submitted a talk for WordCamp Riverside 2018; it has been accepted.

My talk will help to kick off the day at 10am on Saturday morning in the “John Hughes High” room. The details for the camp and a link to purchase tickets can be found below.

WordCamp Riverside 2018

hosted at SolarMax, 3080 12th St., Riverside, CA 92507
Tickets are available now

Given that “Looking back to go forward” is the theme of the camp this year, I think I may have chosen the perfect topic. To some extent I’m going to look at how the nascent web has recently continued evolving from where it left off around 2006 before everyone abandoned it to let corporate silo services like Facebook and Twitter become responsible for how we use the web. We’ll talk about how WordPress can be leveraged to do a better job than “traditional” social media with much greater flexibility.

Here’s the outline:

The web is my social network: How I use WordPress to create the social platform I want (and you can too!)

Synopsis: Growing toxicity on Twitter, Facebook’s Cambridge Analytica scandal, algorithmic feeds, and a myriad of other problems have opened our eyes to the ever-growing costs of social media. Walled gardens have trapped us with the promise of “free” while addicting us to their products at the cost of our happiness, sense of self, sanity, and privacy. Can we take back our fractured online identities, data, and privacy to regain what we’ve lost?

I’ll talk about how I’ve used IndieWeb philosophies and related technologies in conjunction with WordPress as a replacement for my social presence while still allowing easy interaction with friends, family, and colleagues online. I’ll show how everyone can easily use simple web standards to make WordPress a user-controlled, first-class social platform that works across domains and even other CMSes.

Let’s democratize social media using WordPress and the open web, the last social network you’ll ever need to join.

Intended Audience: The material is introductory in nature and targeted at beginner and intermediate WordPressers, but will provide a crash course on a variety of bleeding-edge W3C specs and tools for developers and designers who want to delve into them at a deeper level. Applications of the concepts can be valuable to bloggers, content creators, businesses, and those who are looking to better own their content and identities online without allowing corporate interests an out-sized influence over their online presence.

I look forward to seeing everyone there!


Extending a User Interface Idea for Social Reading Online

This morning I was reading an article online and I bookmarked it as “read” using the Reading.am browser extension which I use as part of my workflow of capturing all the things I’ve been reading on the internet. (You can find a feed of these posts here if you’d like to cyber-stalk most of my reading–I don’t post 100% of it publicly.)

I mention it because I was specifically intrigued by a small piece of excellent user interface and social graph data that Reading.am unearths for me. I’m including a quick screen capture to better illustrate the point. While the UI allows me to click yes/no (i.e. did I like it or not) or even share it to other networks, the thing I found most interesting was that it lists the other people using the service who have read the article as well. In this case it told me that my friend Jeremy Cherfas had read the article.1

Reading.am user interface indicating who else on the service has read an article.

In addition to having the immediate feedback that he’d read it, which is useful and thrilling in itself, it gives me the chance to search to see if he’s written any thoughts about it himself, and it also gives me the chance to tag him in a post about my own thoughts to start a direct conversation around a topic which I now know we’re both interested in at least reading about.2

The tougher follow up is: how could we create a decentralized method of doing this sort of workflow in a more IndieWeb way? It would be nice if my read posts on my site (and those of others) could be overlain on websites via a bookmarklet or other means as a social layer to create engaged discussion. Better would have been the ability to quickly surface his commentary, if any, on the piece as well–functionality which I think Reading.am also does, though I rarely ever see it. In some sense I would have come across Jeremy’s read post in his feed later this weekend, but it doesn’t provide the immediacy that this method did. I’ll also admit that I prefer having found out about his reading it only after I’d read it myself, but having his and others’ recommendations on a piece (by their explicit read posts) is a useful and worthwhile piece of data, particularly for pieces I might have otherwise passed over.

In some sense, some of this functionality isn’t too different from that provided by Hypothes.is, though that is hidden away within another browser extension layer and requires not only direct examination, but scanning for those whose identities I might recognize because Hypothes.is doesn’t have a specific following/follower social model to make my friends and colleagues a part of my social graph in that instance. The nice part of Hypothes.is’ browser extension is that it does add a small visual indicator to show that others have in fact read/annotated a particular site using the service.

A UI example of Hypothes.is functionality within the Chrome browser. The yellow highlighted browser extension bug indicates that others have annotated a document. Clicking the image will take one to the annotations in situ.

I’ve also previously documented on the IndieWeb wiki how WordPress.com (and WordPress.org with JetPack functionality) facepiles likes on content (typically underneath the content itself). This method doesn’t take things as far as the Reading.am case because it only shows a small fraction of the data, is much less useful, and is far less likely to unearth those in your social graph to make it useful to you, the reader.

WordPress.com facepiles likes on content which could surface some of this social reading data.

I seem to recall that Facebook has some similar functionality that is dependent upon how (and if) the publisher embeds Facebook into their site. I don’t think I’ve seen this sort of interface built into another service this way and certainly not front and center the way that Reading.am does it.

The closest thing I can think of to this type of functionality in the analog world was in my childhood, when library card slips in books showed the names of prior patrons as you signed your own name to check out a book, though this also had the large-world problem that WordPress likes have, in that one typically wouldn’t have known many of the names of prior patrons. I suspect that the Robert Bork privacy incident along with the evolution of library databases and bar codes have caused this older system to disappear.

This general idea might make an interesting topic to explore at an upcoming IndieWebCamp if not before. The question is: how to add in the social graph aspect of reading to uncover this data? I’m also curious how it might or might not be worked into a feed reader or into microsub related technologies as well. Microsub clients or related browser extensions might make a great place to add this functionality as they would have the data about whom you’re already following (aka your social graph data) as well as access to their read/like/favorite posts. I know that some users have reported consuming feeds of friends’ reads, likes, favorites, and bookmarks as potential recommendations of things they might be interested in reading as well, so perhaps this would be an additional extension of that as well?
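As a rough sketch of how a feed reader or Microsub client might surface this, imagine that each followed person exposes a feed of their read posts; the client then checks the page you’re viewing against that social graph. The names, URLs, and feed shape below are entirely hypothetical:

```python
# A sketch of the reader-side lookup: given each followed person's feed of
# "read" posts, surface who has already read the page you're on. All names
# and URLs here are invented for illustration.
from urllib.parse import urlsplit

def normalize(url):
    """Naive normalization so http/https and trailing slashes still match."""
    parts = urlsplit(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

# In practice these would come from parsing friends' read posts via a
# Microsub server or feed reader, not from a hard-coded dict.
read_posts = {
    "Jeremy": ["https://example.com/articles/bread-history/"],
    "Aaron": ["http://example.com/articles/bread-history"],
    "Kathleen": ["https://example.com/other-article/"],
}

def who_has_read(url, read_posts):
    target = normalize(url)
    return sorted(name for name, urls in read_posts.items()
                  if any(normalize(u) == target for u in urls))

print(who_has_read("https://example.com/articles/bread-history", read_posts))
```

A browser extension with access to this data could then show the same sort of facepile overlay that Reading.am provides, but fed entirely by one’s own social graph.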


[1] I’ve certainly seen this functionality before, but most often the other readers are people I don’t know or know that well because the service isn’t huge and I’m not using it to follow a large number of other people.
[2] I knew he was generally interested already as I happen to be following this particular site at his prior recommendation, but the idea still illustrates the broader point.


Some ideas about tags, categories, and metadata for online commonplace books and search

Earlier this morning I was reading The Difference Between Good and Bad Tags and the discussion of topics versus objects got me thinking about semantics on my website in general.

People often ask why WordPress has both a Category and a Tag functionality, and to some extent it would seem to exist for just this thing–differentiating between topics and objects–or at least that’s how I have used it and perceived others doing so as well. (Incidentally, from a functionality perspective, categories in the WordPress taxonomy also have a hierarchy while tags do not.) I find that I don’t always do a great job at differentiating between them, nor do I do so cleanly every time. Typically it’s more apparent when I go searching for something and have a difficult time finding it as a result. Usually the problem is getting back too many results instead of a smaller desired subset. In some sense I also look at categories as things which might be more interesting for others to subscribe to or follow via RSS from my site, though I also have RSS feeds for tags as well as for post types/kinds.

I also find that I have a subtle differentiation using singular versus plural tags which I think I’m generally using to differentiate between the idea of “mine” versus “others”. Thus the (singular) tag for “commonplace book” should be a reference to my particular commonplace book versus the (plural) tag “commonplace books” which I use to reference either the generic idea or the specific commonplace books of others. Sadly I don’t think I apply this “rule” consistently either, but hope to do so in the future.

I’ve also been playing around with some more technical tags like math.NT (standing for number theory), following the lead of arXiv.org. While I would generally have used a tag “number theory”, I’ve been toying around with the idea of using the math.XX format for more technical related research on my site and the more human readable “number theory” for the more generic popular press related material. I still have some more playing around with the idea to see what shakes out. I’ve noticed in passing that Terence Tao uses these same designations on his site, but he does them at the category level rather than the tag level.

Now that I’m several years into such a system, I should probably spend some time going back and broadening out the topic categories (I arbitrarily attempt to keep the list small–in part for public display/vanity reasons, but it’s relatively easy to limit what shows to the public in my category list view.) Then I ought to do a bit of clean up within the tags themselves which have gotten unwieldy and often have spelling mistakes which cause searches to potentially fail. I also find that some of my auto-tagging processes by importing tags from the original sources’ pages could be cleaned up as well, though those are generally stored in a different location on my website, so it’s not as big a deal to me.

Naturally I find myself also thinking about the ontogeny/phylogeny problems of how I do these things versus how others at large do them as well, so feel free to chime in with your ideas, especially if you take tags/categories for your commonplace book/website seriously. I’d like to ultimately circle back around on this with regard to the more generic tagging done from a web-standards perspective within the IndieWeb and Microformats communities. I notice almost immediately that the “tag” and “category” pages on the IndieWeb wiki redirect to the same page yet there are various microformats including u-tag-of and u-category which are related but have slightly different meanings on first blush. (There is in fact an example on the IndieWeb “tag” page which includes both of these classes neither of which seems to be counter-documented at the Microformats site.) I should also dig around to see what Kevin Marks or the crew at Technorati must surely have written a decade or more ago on the topic.


cc: Greg McVerry, Aaron Davis, Ian O’Byrne, Kathleen Fitzpatrick, Jeremy Cherfas


Refback from IndieWeb Chat

It took me a moment to realize what it was exactly, since I hadn’t yet added a field to indicate it. Because the IndieWeb chat doesn’t send Webmentions by itself, I’m glad I support refbacks so that I’m aware of comments on my posts. The avatar didn’t come through quite like it should, but it’s nice to be able to treat refbacks like any other type of mention.

Thanks David Shanske for the Refbacks plugin. Thanks Tantek for what I think is my first incoming “mention” from chat.

The chat has some reasonable microformats markup, so I suppose the parser could do a more solid job, but this is a pretty great start. Sadly, Refback isn’t as real-time as Webmention, but it’s better than nothing.
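For those unfamiliar with the mechanics, a Refback receiver roughly works like this: it notices an incoming request with a Referer header, fetches that referring page, and counts the mention only if the page actually links back to the post. A minimal sketch, with invented markup and URLs (a real receiver would fetch the page from the Referer URL rather than use a hard-coded string):

```python
import re

# This snippet stands in for a page fetched from an incoming Referer
# header; the URLs and markup are invented for illustration.
referring_page_html = """
<div class="chat-log">
  <span class="p-author">tantek</span> mentioned
  <a href="https://example.com/my-post/">a post</a> in chat.
</div>
"""

def verify_refback(page_html, my_url):
    """Count the refback only if the referring page really links back,
    and pull a display name from any p-author microformat present."""
    linked = my_url in page_html
    author_match = re.search(r'class="p-author"[^>]*>([^<]+)<', page_html)
    author = author_match.group(1) if author_match else None
    return linked, author

print(verify_refback(referring_page_html, "https://example.com/my-post/"))
```

The verification step is what keeps refbacks from being pure referrer-log spam: a mention only shows up if the link really exists on the referring page.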

My first mention (aka refback) from the IndieWeb chat. Click on the photo to see the UI display on my site.

I suppose we could all be posting chats on our own sites and syndicating into places like IRC to own our two directional conversations, but until I get around to the other half… (or at least for WordPress, I recall having gotten syndication to IRC for WithKnown working a while back via plugin.)

Gems And Astonishments of Mathematics: Past and Present—Lecture One

Last night was the first lecture of Dr. Miller’s Gems And Astonishments of Mathematics: Past and Present class at UCLA Extension. There are a good 15 or so people in the class, so there’s still room (and time) to register if you’re interested. While Dr. Miller typically lectures on one broad topic for a quarter (or sometimes two) in which the treatment continually builds heavy complexity over time, this class will cover 1-2 much smaller particular mathematical problems each week. Thus week 11 won’t rely on knowing all the material from the prior weeks, which may make things easier for some who are overly busy. If you have the time on Tuesday nights and are interested in math or love solving problems, this is an excellent class to consider. If you’re unsure, stop by one of the first lectures on Tuesday nights from 7-10 to check them out before registering.

Lecture notes

For those who may have missed last night’s first lecture, I’m linking to a Livescribe PDF document which includes the written notes as well as the accompanying audio from the lecture. If you view it in Acrobat Reader version X (or higher), you should be able to access the audio portion of the lecture and experience it in real time almost as if you had been present in person. (Instructions for using Livescribe PDF documents.)

We’ve covered the following topics:

  • Class Introduction
  • Erdős Discrepancy Problem
    • n-cubes
    • Hilbert’s Cube Lemma (1892)
    • Schur (1916)
    • Van der Waerden (1927)
  • Sylvester’s Line Problem (partial coverage to be finished in the next lecture)
    • Ramsey Theory
    • Erdős (1943)
    • Gallai (1944)
    • Steinberg’s alternate (1944)
    • DeBruijn and Erdős (1948)
    • Motzkin (1951)
    • Dirac (1951)
    • Kelly & Moser (1958)
    • Tao-Green Proof
  • Homework 1 (homeworks are generally not graded)

Over the coming days and months, I’ll likely bookmark some related papers and research on these and other topics in the class using the class identifier MATHX451.44 as a tag in addition to topic specific tags.

Course Description

Mathematics has evolved over the centuries not only by building on the work of past generations, but also through unforeseen discoveries or conjectures that continue to tantalize, bewilder, and engage academics and the public alike. This course, the first in a two-quarter sequence, is a survey of about two dozen problems—some dating back 400 years, but all readily stated and understood—that either remain unsolved or have been settled in fairly recent times. Each of them, aside from presenting its own intrigue, has led to the development of novel mathematical approaches to problem solving. Topics to be discussed include (Google away!): Conway’s Look and Say Sequences, Kepler’s Conjecture, Szilassi’s Polyhedron, the ABC Conjecture, Benford’s Law, Hadamard’s Conjecture, Parrondo’s Paradox, and the Collatz Conjecture. The course should appeal to devotees of mathematical reasoning and those wishing to keep abreast of recent and continuing mathematical developments.

Suggested Prerequisites

Some exposure to advanced mathematical methods, particularly those pertaining to number theory and matrix theory. Most in the class are taking the course for “fun” and the enjoyment of learning, so there is a huge breadth of mathematical abilities represented–don’t skip the course because you feel you’ll get lost.

Register now

I’ve written some general thoughts, hints, and tips on these courses in the past.

Renovated Classrooms

I’d complained to the UCLA administration before about how dirty the windows were in the Math Sciences Building, but they went even further than I expected in fixing the problem. Not only did they clean the windows, they put in new flooring, brand-new modern chairs, wood paneling on the walls, new projection equipment, and new whiteboards! I particularly love the new swivel chairs, and it’s nice to have such a lovely new environment in which to study math.

The newly renovated classroom space in UCLA’s Math Sciences Building

Category Theory for Winter 2019

As I mentioned the other day, Dr. Miller has also announced (and reiterated last night) that he’ll be teaching a course on the topic of Category Theory for the Winter quarter coming up. Thus if you’re interested in abstract mathematics or areas of computer programming that use it, start getting ready!


The Sixth “R” of Open Educational Resources

The 5 R’s

I’ve seen the five R’s used many times in reference to the OER space (Open Educational Resources). They include the ability to allow others to: Retain, Reuse, Revise, Remix and/or Redistribute content with the appropriate use of licenses. These are all some incredibly powerful building blocks, but I feel like one particularly important building block is missing–that of the ability to allow easy accretion of knowledge over time.

Version Control

Some in the educational community may not be aware of some of the more technical communities that use the idea of version control for their daily work. The concept of version control is relatively simple and there are a multitude of platforms and software to effectuate it including Git, GitHub, GitLab, BitBucket, SVN, etc. In the old days of file and document maintenance one might save different versions of the same general file with increasingly different and complex names to their computer hard drive: Syllabus.doc, Syllabus_revised.doc, Syllabus_revisedagain.doc, Syllabus_Final.doc, Syllabus_Final_Final.doc, etc. and by using either the names or date and timestamps on the file one might try to puzzle out which one was the correct version of the file that they were working on.

For the better part of a decade now, there has been what is known as version control software to allow people to more easily maintain a single version of their particular document, but with a timestamped list of changes kept internally to allow users to create new updates or roll back to older versions of work they’ve done. While the programs themselves are internally complicated, the user interfaces are typically relatively easy to use, and in less than a day one can master most of their functionality. Most importantly, these version control systems allow many people to work on the same file or resource at a time! This means that 10 or more people can be working on a textbook, for example, at the same time. They create a fork or clone of the particular project in their personal work space, where they work on it and periodically save their changes. Then they can push their changes back to the original or master, where they can be merged back in to make a better overall project. If there are conflicts between changes, these can be relatively easily settled without much loss of time. (For those looking for additional details, I’ve previously written Git and Version Control for Novelists, Screenwriters, Academics, and the General Public, which contains a variety of detail and resources.) Version control should be a basic tool in every educator’s digital literacy toolbox.
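To make the fork/edit/merge cycle concrete, here is a throwaway demonstration using Git in a temporary local repository (it assumes Git is installed; the file name and commit messages are invented, and the branch stands in for a contributor’s fork):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # throwaway identity for the demo
git config user.name "Demo Maintainer"

# The maintainer publishes the original resource...
echo "Chapter 1" > textbook.md
git add textbook.md
git commit -q -m "Initial draft of the open textbook"

# ...a contributor "forks" it by branching and adds material...
git checkout -q -b add-exercises
echo "Exercise 1.1" >> textbook.md
git commit -q -am "Contribute a new exercise"

# ...and the maintainer merges the contribution back into the original.
git checkout -q -
git merge -q add-exercises

cat textbook.md
```

After the merge, the original copy contains both the maintainer’s chapter and the contributor’s exercise, and `git log` preserves who contributed what and when.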

For the OER community, version control can add an additional level of power and capability to their particular resources. While some resources may be highly customized or single use resources, many of them, including documents like textbooks can benefit from the work of many hands in an accretive manner. If these resources are maintained in version controllable repositories then individuals can use the original 5 R’s to create their particular content.

But what if a teacher were to add several new and useful chapters to an open textbook? While it may be directly useful to their specific class, perhaps it’s also incredibly useful to the broader range of teachers and students who might use the original source in the future? If the teacher who forks the original source has a means of pushing their similarly licensed content back to the original in an easy manner, then not only will their specific class benefit from the change(s), but all future classes that might use the original source will have the benefit as well!

If you’re not sold on the value of version control, I’ll mention briefly that Microsoft spent $7.5 billion over the summer to acquire GitHub, which is one of the most popular version control and collaboration tools on the market. Given Microsoft’s push into the open space over the past several years, this certainly bodes well for both open content and version control for years to come.

Examples

A Math Text

As a simple example, let’s say that one professor writes the bulk of a mathematics text, but twenty colleagues all contribute handfuls of particular examples or exercises over time. Instead of individually hosting those exercises on their own sites or within their individual LMSes, where they’re unlikely to be easy to find for other adopters of the text, why not submit the changes back to the original to allow more options and flexibility to future teachers? Massive banks of problems will allow more flexibility for both teachers and students. Even if the additional problems aren’t maintained in the original text source, they’ll be easily accessible as adjunct materials for future adopters.

Wikipedia

One of the most powerful examples of the value of accretion in this manner is Wikipedia. While it’s somewhat different in form from some of the version control systems mentioned above, Wikipedia (and most wikis, for that matter) has built-in history views that allow users to see and track the trail of updates and changes over time. The Wikipedia in use today is vastly larger and more valuable than it was on its first birthday because it not only allows ongoing edits to improve it over time, but logs those improvements and makes them viewable in a version-controlled manner.

Google Documents

This is another example of an extensible OER platform that allows simple accretion. With the correct settings on a document, one can host an original and allow it to be available to others who can save it to their own Google Drive or other spaces. Leaving the ability for guests to suggest changes or to edit a document allows it to potentially become better over time without decreasing the value of the original 5 R’s.

Webmentions for Update Notifications

As many open educational resources are hosted online for easy retention, reuse, revision, remixing, and/or redistribution, keeping them updated with changes can be a difficult proposition. It may not always be the case that resources are maintained on a single platform like GitHub, or that users of these resources will necessarily know how to use these platforms or their functionality. As a potential “fix” I can easily see a means of leveraging the W3C-recommended specification for Webmention as a means of keeping a tally of changes to resources online.

Let’s say Robin keeps a copy of her OER textbook on her WordPress website where students and other educators can easily download and utilize it. More often than not, those using it are quite likely to host changed versions of it online as well. If their CMS supports the Webmention spec like WordPress does via a simple plugin, then by providing a simple URL link as a means of crediting the original source, which they’re very likely to do as required by the Creative Commons license anyway, their site will send a notification of the copy’s existence to the original. The original can then display the webmentions as traditional comments and thus provide links to the chain of branches of copies which both the original creator as well as future users can follow to find individual changes. If nothing else, the use of Webmention will provide some direct feedback to the original author(s) to indicate their materials are being used. Commonly used education facing platforms like WordPress, Drupal, WithKnown, Grav, and many others either support the Webmention spec natively or do so with very simple plugins.
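To illustrate the mechanics, a Webmention is just a form-encoded POST of two URLs (source, the copy; target, the original) sent to an endpoint the original page advertises. Here is a simplified sketch of the discovery and payload, using an invented page and URLs (the real spec also checks HTTP Link headers and `<a>` elements, which this shortcut skips):

```python
# A sketch of the Webmention flow described above: the copying site looks
# for the original's advertised endpoint and POSTs source/target to it.
# The HTML snippet and all URLs are illustrative, not a real site.
import re
from urllib.parse import urlencode

original_page_html = """
<html><head>
  <link rel="webmention" href="https://robins-site.example/webmention" />
</head><body>An open textbook.</body></html>
"""

def discover_endpoint(html):
    """Very simplified endpoint discovery: find <link rel="webmention">."""
    match = re.search(r'<link[^>]*rel="webmention"[^>]*href="([^"]+)"', html)
    return match.group(1) if match else None

endpoint = discover_endpoint(original_page_html)
# The notification itself is nothing more than two form-encoded URLs:
payload = urlencode({
    "source": "https://teachers-copy.example/revised-textbook/",  # the copy
    "target": "https://robins-site.example/textbook/",            # the original
})
print(endpoint)
print(payload)
```

An actual sender would then POST that payload to the discovered endpoint; the receiving site verifies that the source page really links to the target before displaying the mention.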

Editorial Oversight

One of the issues some may see with pushing updates back to an original surrounds potential resource bloat or lack of editorial oversight. This is already a common question on open source version control repositories, so there is a long and broad history of how these things are maintained or managed in cases where there is community disagreement, or where an original source’s maintainer dies, disappears, loses interest, or simply no longer maintains the original. In the end, as a community of educators we owe it to ourselves and future colleagues to make an attempt at better maintaining, archiving, and allowing our work to accrete value over time.

The 6th R: Request Update

In summation, I’d like to request that we all start talking about the 6 Rs, which include the current 5 along with the addition of a Request update (or maybe pull Request, Recompile, or Report to keep it in the R family?) ability as well. OER is an incredibly powerful concept already, but it could be even more so with the ability to push new updates, or at least notifications of them, back to the original. Having this ability will make it far easier to spread and grow the value of the OER concept as well as to disrupt the education spaces OER evolved to improve.

Featured photo by Amador Loureiro on Unsplash

Our Daily Bread — A short 30 day podcast history of wheat and bread in very short episodes

Drop what you’re doing and immediately go out to subscribe to Our Daily Bread: A history of wheat and bread in very short episodes!

The illustrious and inimitable Jeremy Cherfas is producing a whole new form of beauty by talking about wheat and bread in a podcast for thirty days.

It’s bundled up as part of his longer-running Eat This Podcast series, which I’ve been savoring for years.

Now that you’re subscribed and your life will certainly be immeasurably better, a few thoughts about how awesome this all is…

Last December I excitedly ran across the all-too-well-funded podcast Modernist Breadcrumbs. While interesting and vaguely entertaining, it was an attempt to be a paean to bread while subtly masking the fact that it was an extended commercial for the book series Modernist Bread by Nathan Myhrvold and Francisco Migoya which had been released the month prior.

I trudged through the entire series (often listening at 1.5-2x speed) to pick up the worthwhile tidbits, but mostly came away disappointed. As I finished listening to the series, I commented:

Too often I found myself wishing that Jeremy Cherfas had been picked up to give the subject a proper 10+ episode treatment. I suspect he’d have done a more interesting in-depth bunch of interviews and managed to weave a more coherent story out of the whole. Alas, twas never thus.

A bit later Jeremy took the time to respond to my comment:

I’ve no idea how the series actually came about, or what anyone aside from Chris really thought about it. It would be nice to see any kind of listener engagement, but it’s hard to find anything. There are three tweets over the entire series that use the show’s official tags.

Still, what’s done is done, and I doubt anyone would want to support another series all about bread. Or would they … ?

I’ll admit I did spend a few long and desperate weeks salivating with hope over that ominously hanging “Or would they…?” statement. Ultimately I let it pass, distracted by listening to Jeremy’s regular Eat This Podcast episodes. Then this past week I was bowled over to discover what has obviously been fermenting since.

I’d love to take credit for “planting the seed” as it were for this new endeavour, but I suspect that with the thousands upon thousands of adoring fans Mssr. Cherfas’ podcast has, he’s heard dozens of similar requests over the years. Even more likely, it’s his very own love of bread that spawned the urge (he does, after all, have a bread blog named Fornacalia!), but I’ll quietly bask as if I had my very own personal suggestion box and a first-class production staff at my beck and call to make me custom podcast content about food, science, and culture.

It’s always amazing to me how scintillating Jeremy Cherfas’ work manages to be. What is not to love about his editorial eye, his interview skills, his writing, and his production abilities? I’m ever astounded that his work is a simple one-man show and not that of a 20-person production team.

I’m waiting for the day that the Food Network, The Cooking Channel, HGTV, or a network of their stripe (or perhaps NPR or PBS) discovers his supreme talent and steals him away from us to better fund and extend the reach of the culinary talent and story-telling he’s been churning out flawlessly for years now. (I’m selfishly hoping one of them snaps him up before some other smart, well-funded corporation steals him away from us for his spectacular communication abilities to dominate all his free time away from these food-related endeavors.)

Of course, if you’re a bit paranoid like me, perhaps you’d find his fantastic work is a worthwhile cause to donate to? Supporting his work means there’s more for everyone.

Now, to spend a moment writing up a few award nominations… perhaps the Beard first?

Think the unthinkable: My Version for the Future of Digital Teaching and Learning for EDU522

I’m still evolving what my version of the future of digital teaching and learning looks like, but I am certainly enamored of mixing in many ideas from the open internet and the IndieWeb’s ways of approaching it all. Small, open, relatively standardized, and remixable pieces can hopefully help lower barriers for teachers and learners everywhere.

The ability to interact directly with a course website and the materials in a course using my own webspace/digital commonplace book via Webmention seems like a very powerful tool. I’m able to own/archive many or most of the course materials for later use and reflection. I’m also able to own all of my own work for later review or potential interaction with fellow classmates or the teacher. Having an easier ability to search my site for related materials to draw upon for use in synthesizing and creating new content, also owned on my own site, is particularly powerful.

Certainly there are some drawbacks and potential inequalities in a web-based approach, particularly for those who don’t have the immediate resources required to access materials, host their own site, own their own data, or even interact digitally. William Gibson has famously said, “The future is already here — it’s just not very evenly distributed.” Hopefully breaking down some of the barriers to accessibility in education for all will help the distribution.

There are also questions relating to how open things should really be, and how private (or not). Ideally teachers provide a large swath of openness, particularly when it comes to putting their materials in the commons for others to reuse or remix. Meanwhile, it’s a good idea to allow students to be a bit more closed if they choose: to keep materials just for their own uses, to limit access to their own work/thoughts, or to limit the audience of their work (e.g. to teachers and fellow classmates). Recent examples within the social media sphere related to context collapse have provided us with valuable lessons about how long things should last, who should own them, and how public they should be in the digital sphere. Students shouldn’t be penalized in the future for ideas they “tried on” while learning. Having the freedom and safety to make mistakes in a smaller arena can be a useful tool within teaching; those mistakes shouldn’t cost them again by being public at a later date. Some within the IndieWeb have already started experimenting with private webmentions and other useful tools like limiting audiences, which may help these ideas along despite their not yet existing in a simple implementation for the masses.

Naturally the open web can be a huge place, so having some control and direction is always nice. I’ve always thought students should be given a bit more control over where they’re going and what they want out of a given course, as well as the ability to choose their own course materials to some extent. Still, having some semblance of an outline/syllabus and course guidelines can help direct what that learning will actually be.

Some of what I see in EDU522 is the beginning of the openness and communication I’ve always wanted to see in education and pedagogy. Hopefully it will stand as an example for others who come after us.

Written with Module One: Who Am I? in mind.

IndieWeb technology for online pedagogy

An ongoing case study

Very slick! Greg McVerry, a professor, can post all of the readings, assignments, etc. for his EDU522 online course on his own website. Meanwhile I can indicate that I’ve read the pieces, watched the videos, or post my responses to assignments and other classwork (as well as to fellow classmates’ work and questions) on my own website, while sending notifications via Webmention of all of the above to the original posts on their sites.

When I’m done with the course I’ll have my own archive of everything I did for the entire course (as well as copies on the Internet Archive, since I ping it as I go). His class website and my responses there could be used for the purposes of grading.

I can subscribe to his feed of posts for the class (or an aggregated one he’s made, sometimes known as a planet) and use the feed reader of my choice to consume the content (and that of my peers) at my own pace to work my way through the course.

This is a lot closer to what I think online pedagogy or even the use of a Domain of One’s Own in an educational setting could and should be. I hope other educators might follow suit based on our examples. As an added bonus, if you’d like to try it out, Greg’s three week course is, in fact, an open course for using IndieWeb and DoOO technologies for teaching. It’s just started, so I hope more will join us.

He’s focusing primarily on using WordPress as the platform of choice in the course, but one could just as easily use other Webmention enabled CMSes like WithKnown, Grav, Perch, Drupal, et al. to participate.

An IndieWeb Magazine on Flipboard

This morning I set up an IndieWeb Magazine on Flipboard. While it is “yet another silo”, it’s one that I can easily and automatically syndicate content from my site (and others) into. I’ve already seeded it with some recent posts for those who’d like to start reading.

Until more tools and platforms like micro.blog exist to make it easy for other Generation 2+ people to join the IndieWeb, I thought it made at least some sense to have some additional outreach locations to let them know about what the community is doing in a silo that they may be using.

While I’ll syndicate articles of a general and how-to nature there, I’m likely to stay away from posting anything too developer-centric.

If you’d like to contribute to the magazine there are methods for syndicating content into it via POSSE, which I’d recommend if you’re able to do so. Otherwise they have some useful bookmarklets, browser extensions, and other manual methods that you can use to add articles to the magazine. Click this link to join as a contributor. For additional information see also Flipboard Tools.

View my Flipboard Magazine.

📅 Virtual Homebrew Website Club Meetup on July 25, 2018

Are you building your own website? Indie reader? Personal publishing web app? Or some other digital magic-cloud proxy? If so, come on by and join a gathering of people with likeminded interests. Bring your friends who want to start a personal web site. Exchange information, swap ideas, talk shop, help work on a project…

Everyone of every level is welcome to participate! Don’t have a domain yet? Come along and someone can help you get started and provide resources for creating the site you’ve always wanted.

This virtual HWC meeting is for site builders who either can’t make a regular in-person meeting or don’t yet have critical mass to host one in their area. It will be hosted on Google Hangouts.

Homebrew Website Club Meetup – Virtual Americas

Time: 6:30 pm to 9:00 pm (Pacific)
Location: Google Hangouts

  • 6:30 – 7:30 pm (Pacific): (Optional) Quiet writing hour
    Use this time to work on your project, ask for help, chat, or do some writing before the meeting.
  • 7:30 – 9:00 pm (Pacific): Meetup

More Details

Join a community of like-minded people building and improving their personal websites. Invite friends that want a personal site.

  • Work with others to help motivate yourself to create the site you’ve always wanted to have
  • Ask questions about things you may be stuck on–don’t let stumbling blocks get in the way of having the site you’d like to have
  • Finish that website feature or blog post you’ve been working on
  • Burn down that old website and build something from scratch
  • Share what you’ve gotten working
  • Demo recent breakthroughs

Skill levels: Beginner, Intermediate, Advanced

Any questions? Need help? Need more information? Ask in chat: http://indiewebcamp.com/irc/today#bottom

RSVP

Add your optional RSVP in the comments below; by adding your indie RSVP via webmention to this post; or by RSVPing to one of the syndicated posts below:
Indieweb.org event: https://indieweb.org/events/2018-07-25-homebrew-website-club#Virtual_Americas
Twitter “event”: https://twitter.com/ChrisAldrich/status/1020460581038391296

Creating a tag cloud directory for the Post Kinds Plugin on WordPress

Yesterday, after discovering it on Xavier Roy’s site, I was reminded that the Post Kinds Plugin is built on a custom taxonomy and, as a result, has the ability to output that taxonomy in the typical WordPress Tag Cloud widget. I had previously been maintaining/displaying a separate category structure for Kinds, which I always thought was a bit much in my category area. While it’s personally nice to have the metadata, I didn’t like how it made the categories so overwhelming and somehow disjointed.

For others who haven’t realized the functionality is hiding in the Post Kinds Plugin, here are some quick instructions for enabling the tag cloud widget:

  1. In the administrative UI, go to Appearance » Widgets in the menu structure.
  2. Drag the “Tag Cloud” widget to one of your available sidebars, footers, headers or available widget areas.
  3. Give the widget a title. I chose “Post Kinds”.
  4. Under the “Taxonomy” heading choose Kinds.
  5. If you want to show tag counts for your kinds, then select the checkbox.
  6. If necessary, set the widget’s visibility to control on which pages, posts, etc. the widget will appear.
  7. Finally click save.

You’ll end up with something in your widget area that looks something like this (depending on which Kinds you have enabled and which options you chose):

The tag cloud for the Post Kinds plugin data

If you’re interested in changing or modifying the output or display of your tag cloud, you can do so with the help of the wp_tag_cloud function reference in the WordPress Codex.

IndieWeb Summit 2018 Recap

Last week was the 8th annual IndieWeb Summit held in Portland, Oregon. While IndieWeb Camps and Summits have traditionally been held on weekends during people’s free time, this one, held in the middle of the week, was a roaring success. With well over 50 people in attendance, it was almost certainly the largest gathering I’ve seen to date. I suspect that because people who flew in for the event had really committed, attendance on the second day was much higher than usual as well. It was great to see so many people hacking on their personal websites and tools to make their personal online experiences richer.

The year of the Indie Reader

Last year I wrote the post Feed Reader Revolution in response to a growing need I’ve seen in the social space for a new sort of functionality in feed readers. While there have been a few interesting attempts like Woodwind which have shown a proof-of-concept, little progress had been made until some initial work by Aaron Parecki and a session at last year’s IndieWeb Summit entitled Putting it all Together.

Over the past year I’ve been closely watching Aaron Parecki; Grant Richmond and Jonathan LaCour; Eddie Hinkle; and Kristof De Jaeger’s collective progress on the microsub specification as well as their respective projects Aperture/Monocle; Together; Indigenous/Indigenous for iOS; and Indigenous for Android. As a result, in early May I was overjoyed to suggest a keynote session on readers, and I was stupefied this week as many of them officially launched and opened to general registration as relatively solid beta web services.

I spent a few minutes in a session at the end of Tuesday and managed to log into Aperture and create an account (#16, though I suspect I may be one of the first to use it besides the initial group of five developers). I also managed to quickly and easily add a microsub endpoint to my website. Sadly I’ve still got some tweaks to make to my own installation before I can properly log into any of the reader app front ends. Based on several of the demos I’ve seen over the past months, the functionality involved is not only impressive, but a properly large step ahead of the basic user interface provided by the now-shuttered Woodwind.xyz service (though the code is still available for self-hosting).

Several people have committed to attempt a microsub server, including Jack Jamieson, who has announced one for WordPress after having recently built the Yarns reader for WordPress from scratch this past year. I suspect within the coming year we’ll see one or two additional servers as well as some additional reading front ends. In fact, Ryan Barrett spent the day on Wednesday hacking away at the NewsBlur API to make NewsBlur a front end for Aperture’s server functionality. I’m hoping others may do the same for other popular readers like Feedly or Inoreader to expand the plurality of offerings. Increased competition among new reader offerings can only improve the entire space.
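For a sense of how lightweight the client side of microsub is, here’s a hedged sketch in Python of the request a reader front end like Monocle or Together issues to fetch a channel’s timeline from a server like Aperture; the endpoint URL and token here are invented for illustration:

```python
from urllib.parse import urlencode

def timeline_request(endpoint, channel, token):
    """Build the (url, headers) pair for a microsub timeline fetch.
    The server replies with JSON items for the front end to render."""
    url = endpoint + "?" + urlencode({"action": "timeline", "channel": channel})
    headers = {"Authorization": "Bearer " + token}
    return url, headers

# A front end might then issue:
# url, headers = timeline_request("https://aperture.example/microsub/1",
#                                 "notifications", access_token)
```

The split is the whole point of the spec: the server handles feed fetching and storage, while any number of front ends compete purely on presentation.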

Even more reading related support

Just before the Summit, gRegor Morrill unveiled the beta version of his micropub client indiebookclub.biz, which allows one to log in with their own website and use it to post reading updates to their own website. For those whose sites don’t yet support micropub, the service saves the data for eventual export. His work on it continued through the Summit to improve an already impressive product. It’s the first micropub client of its kind amidst a growing field of websites (including WordPress and WithKnown, which both have plugins) that offer reading post support. Micro.blog has recently updated its code to allow users of the platform to post reads with indiebookclub.biz as well. As a result of this spurt of reading-related support there’s now a draft proposal to add read-of and read-status support as new Microformats. Perhaps reads will be included in future updates of the post-type-discovery algorithm as well?

Given the growth of reading post support and a new micropub read client, I suspect it won’t take long before some of the new microsub-related readers begin supporting read post micropub functionality as well.
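A read post like the ones indiebookclub.biz creates boils down to a tiny micropub request. The sketch below (Python, with a hypothetical helper name) follows the draft read-of/read-status proposal mentioned above, so the property names could still change:

```python
from urllib.parse import urlencode

def read_post_body(book_url, status="finished"):
    """Form-encoded micropub body for a read post. `status` follows the
    draft read-status values: to-read, reading, or finished."""
    if status not in {"to-read", "reading", "finished"}:
        raise ValueError("unknown read-status: " + status)
    return urlencode({"h": "entry", "read-of": book_url,
                      "read-status": status}).encode()

# A client would POST this body to the user's micropub endpoint with a
# Bearer token, and the user's own site creates the read post.
```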

IndieAuth Servers

In addition to David Shanske’s recent valiant update to the IndieAuth plugin for WordPress, Manton Reece managed to finish up coding work to unveil another implementation of IndieAuth at the Summit. His version is for the micro.blog platform, a significant addition to the community that will give several hundred additional users broader access to a wide assortment of functionality.
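For the curious, the heart of IndieAuth is small enough to sketch in a few lines. This hypothetical Python helper (all URLs invented) builds the authorization request a client redirects the user to; the authorization endpoint itself is discovered from a link on the user’s own homepage:

```python
from urllib.parse import urlencode

def authorization_url(auth_endpoint, me, client_id, redirect_uri, state):
    """Build an IndieAuth authorization request URL. After the user
    approves at their own endpoint, it redirects back with a code the
    client then verifies or exchanges for a token."""
    params = {
        "me": me,                    # the user's URL, i.e. their identity
        "client_id": client_id,      # the app's URL
        "redirect_uri": redirect_uri,
        "state": state,              # anti-CSRF value checked on return
        "response_type": "code",
    }
    return auth_endpoint + "?" + urlencode(params)
```

The elegant part is that one’s domain name is the identity, so any site that advertises an endpoint can log its owner into any IndieAuth-aware app.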

The Future

While work continues apace on a broad variety of fronts, I was happy to see that my proposal for a session on IndieAlgorithms was accepted (despite my leading another topic earlier in the day). It was well attended and sparked some interesting discussion about how individuals might exert greater control over what they’re presented with to consume. With the rise of indie feed readers this year, the ability to better control and filter one’s incoming content is going to take on greater importance in the very near future. With an increasing number of readers to choose from, more people will hopefully be able to free themselves from the vagaries of the black-box algorithms that drive content distribution and presentation in products like Facebook, Twitter, Instagram, and others. Based on the architecture of servers like Aperture, perhaps we might be able to modify parts of the microsub spec to allow more freedom and flexibility in what will assuredly be the next step in the evolution of the IndieWeb?

Diversity

While there are miles and miles to go before we sleep, I was happy to have seen a session on diversity pop up at the Summit. I hope we can all take the general topic to heart to be more inclusive and actively invite friends into our fold. Thanks to Jean for suggesting and guiding the conversation and everyone else for continuing it throughout the rest of the summit and beyond.

Other Highlights

Naturally, the above are just a few of the bigger highlights as I perceive them. I’m sure others will appear in the IndieNews feed or other blogposts about the summit. The IndieWeb is something subtly different to each person, so I hope everyone takes a moment to share (on your own sites naturally) what you got out of all the sessions and discussions. There was a tremendous amount of discussion, debate, and advancement of the state of the art of the continually growing IndieWeb. Fortunately almost all of it was captured in the IndieWeb chat, on Twitter, and on video available through either the IndieWeb wiki pages for the summit or directly from the IndieWeb YouTube channel.

I suspect David Shanske and I will have more to say in what is sure to be a recap episode in our next podcast.

Photos

Finally, below I’m including a bunch of photos I took over the course of my trip. I’m far from a professional photographer, but hopefully they’ll give a small representation of some of the fun we all had at camp.

Final Thanks

People

While I’m thinking about it, I wanted to take a moment to thank everyone who came to the summit. You all really made it a fantastic event!

I’d particularly like to thank Aaron Parecki, Tantek Çelik, gRegor Morrill, Marty McGuire, and David Shanske who did a lot of the organizing and volunteer work to help make the summit happen as well as to capture it so well for others to participate remotely or even view major portions of it after-the-fact. I would be remiss if I didn’t thank Martijn van der Ven for some herculean efforts on IRC/Chat in documenting things in real time as well as for some serious wiki gardening along the way. As always, there are a huge crew of others whose contributions large and small help to make up the rich fabric of the community and we wouldn’t be who we are without your help. Thank you all! (Or as I might say in chat: community++).

And finally, a special personal thanks to Greg McVerry for kindly letting me join him at the Hotel deLuxe for some late night discussions on the intersection of IndieWeb and Domain of One’s Own philosophies as they dovetail with the education sector.  With growing interest and a wealth of ideas in this area, I’m confident it’s going to be a rapidly growing one over the coming years.

Sponsors

I’d also like to take a moment to say thanks to all the sponsors who helped to make the event a success including Name.com, GoDaddy, Okta, Mozilla, DreamHost, and likely a few others who I’m missing at the moment.

I’d also like to thank the Eliot Center for letting us host the event at their fabulous facility.
