Using Inoreader as an IndieWeb feed reader

It may still be a while before I can make the leap I’d love to make to using Microsub-related technology to replace my daily feed reader habits. I know that several people are working diligently on a Microsub server for WordPress, and there are already a handful of reader interfaces available. I’m particularly interested in the fact that I can use a reader interface integrated with Micropub so that my reactions in the reader (likes, bookmarks, replies, etc.) are posted back to my own personal website, which will then send notifications (via Webmention) to the mentioned websites. Of course it’s going to take some time before I’m using it, and even more time after that for the setup to become common and easy for others to use. So until then, I and others will need some tools to use right now.

Toward this end I thought I’d double down on my use of Inoreader in my daily web consumption workflows. I wanted to make it easier to use my feed reader to post all of these types of posts to my website, which will still handle the notifications. In some sense, instead of relying on a feed reader supporting Micropub, I’ll use other (older) methods for making the relevant posts. As I see it, there are two potential possibilities using Inoreader:
(1) using a service like IFTTT (free) or Zapier (paid) to take the post intents and send them to my WordPress site, or
(2) using the custom posting interface in Inoreader in conjunction with post editor URL schemes and the Post Kinds plugin to create the posts. WordPress’ built-in Press This bookmarklet schemes could also be used to make these posts, but the Post Kinds plugin offers a lot more metadata and flexibility.

If This Then That (IFTTT)

Below is a brief outline of some of the IFTTT recipes I’ve used to take data from posts I interact with in Inoreader and post them to my own website.

The trigger interface in IFTTT for creating new applets using Inoreader functionality.

Likes

IFTTT has an explicit like functionality with a one-click like button. There is an IFTTT recipe which allows taking this datum and adding it directly as a WordPress post with lots of rich data. The “then that” portion of IFTTT using WordPress allows some reasonable functionality for porting over data.

Favorites

IFTTT also has explicit favorite functionality using a one-click starred-article button. There is an IFTTT recipe which allows adding this directly as a WordPress post.

Since the “starred” article isn’t defined specifically in Inoreader as a “favorite”, one could alternately use it to create “read” or “bookmark” posts on one’s WordPress website. I’m tempted to try this for read posts, as I probably wouldn’t often use it to create favorite posts on my own website. Ultimately one wants at least an easy-to-remember one-to-one mapping of pieces of functionality in Inoreader to one’s own website, so whatever I decide, I’ll likely stick with it.

Bookmarks

While there is no specific functionality for creating bookmarks in Inoreader (though starred articles could be used this way as previously mentioned), there is a “saved webpage” functionality that could be used here in addition to an IFTTT recipe to port over the data to WordPress.

Reads

While Inoreader has the common feed reader read/unread functionality, articles are often marked read tacitly (simply by scrolling past them) as a means of reducing friction within the application, so read status alone doesn’t signal a deliberate act of reading. Not really wanting to muddle the meaning of the “starred” article to do it, I’ve opted to add an explicit “read” tag to posts I’ve read.

IFTTT does have a “New tagged article” recipe that will allow me to take articles in Inoreader with my “read” tag and post them to my website. It’s pretty simple and easy.

Replies

For dealing with replies, there is an odd quirk within Inoreader. Confoundingly, the feed reader has two similar, yet very different, commenting functionalities. One is explicitly named “comment”, but sadly there isn’t a related IFTTT trigger nor an RSS feed to take advantage of the data one puts into the comment functionality. Fortunately there is a separate “broadcast” functionality, and there is an IFTTT recipe for “new broadcasted article” that will allow one to take the reply/comment and post it to one’s WordPress website.

Follows

Like many of the above, there is a specific IFTTT recipe that will allow one to add subscriptions directly to WordPress as posts, so that any new subscriptions (or follows) within the Inoreader interface can create follow posts! I doubt many people will use this recipe, but it’s awesome that it exists. Currently anything added to my blogroll (aka my Following page) gets ported over to Inoreader via OPML subscription, so I’m curious whether feeds added that way will create these follow posts, and if so, whether they get a good date/time stamp. I still have to do some experimenting to see exactly how this is going to work.

RSS feed-based functionality

In addition to the IFTTT recipe functionality described above, one could also use IFTTT’s RSS functionality to pipe the RSS feeds which Inoreader provides (especially via tags) into a WordPress website. I don’t personally use this sort of setup, but thought I’d at least mention it in passing so that anyone who’d like to create other post types on their website could do so.
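Since I don’t use this setup myself, here’s only a minimal sketch, using just the Python standard library, of pulling items out of such a tag feed; the sample feed contents are made up, and actually turning the items into WordPress posts (e.g. via the REST API) is left out:

```python
# Sketch: extracting (title, link) pairs from an RSS 2.0 feed, such as
# one of Inoreader's per-tag feeds. The sample feed below is fabricated.
import xml.etree.ElementTree as ET

def items_from_rss(xml_text):
    """Return (title, link) pairs for each <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

sample = """<?xml version="1.0"?>
<rss version="2.0"><channel><title>read</title>
<item><title>An article I read</title><link>https://example.com/a</link></item>
</channel></rss>"""

print(items_from_rss(sample))
```

From here each pair could be handed to whatever posting mechanism one prefers.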

Custom posting in Inoreader with Post Kinds Plugin

If using a third-party service like IFTTT isn’t your cup of tea, Inoreader also allows custom sharing options. (There are also many pre-built ones for Facebook, Twitter, etc., and they’re re-orderable as well.) I thus used WordPress’ post editor URL schemes to send the data I’d like to have from the original post to my own website. Inoreader actually has suggestions in its UI for how to effectuate this generically on WordPress. While this is nice, I’m a major user of the Post Kinds plugin, which allows me a lot more flexibility to post likes, bookmarks, favorites, reads, replies, etc. with the appropriate microformats and much richer metadata. Post Kinds has some additional URL structures which I’ve used in addition to the standard WordPress ones to take advantage of this. This has allowed me to create custom buttons for reads, bookmarks, replies, likes, and listens. With social sharing enabled, each article in Inoreader has a sharing menu in the bottom right corner with a configuration option which brings up the following interface:
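As a rough illustration, a custom sharing URL for creating a reply post might look something like the following, with Inoreader substituting the article’s URL into the placeholder. The `${url}` placeholder syntax, the example domain, and the exact `kind`/`kindurl` parameter names are assumptions to check against the Post Kinds documentation and Inoreader’s sharing dialog:

```
https://example.com/wp-admin/post-new.php?kind=reply&kindurl=${url}
```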

Custom sharing functionality in Inoreader. I’ve set up custom options to post reads, bookmarks, likes, replies, and listens to my personal website.

Once made, these custom button icons appear at the bottom of every post in Inoreader, so, for example, if I want to reply to a post I’ve just read, I can click on the reply button which will open a new browser window for a new post on my website. The Post Kinds plugin on my site automatically pulls in the URL of the original post, parses that page and–where available–pulls in the title, synopsis, post date/time, the author, author URL, author photo, and a featured photo as well as automatically setting the specific post kind and post format. A lot of this data helps to create a useful reply context on my website. I can then type in my reply to the post and add any other categories, tags, or data I’d like in my admin interface. Finally I publish the post which sends notifications to the original post I read (via Webmention).
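The Webmention notification step above can be sketched roughly as follows. This is a minimal illustration of endpoint discovery from a page’s HTML only; the full W3C spec also checks HTTP `Link` headers and has further rules omitted here, and the sample page and URLs are made up:

```python
# Sketch: discover a page's webmention endpoint from its HTML.
# The actual notification is then a form-encoded POST of `source` and
# `target` to that endpoint (e.g. with urllib.request), omitted here.
from html.parser import HTMLParser
from urllib.parse import urljoin

class EndpointFinder(HTMLParser):
    """Records the first <link>/<a> element with rel="webmention"."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        a = dict(attrs)
        rels = (a.get("rel") or "").split()
        if "webmention" in rels and "href" in a:
            self.endpoint = a["href"]

def discover_endpoint(html, base_url):
    """Return the absolute webmention endpoint advertised in html, or None."""
    finder = EndpointFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    return urljoin(base_url, finder.endpoint)

page = '<html><head><link rel="webmention" href="/wm-endpoint"></head></html>'
print(discover_endpoint(page, "https://example.com/post"))
```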

Screencapture of Inoreader’s interface highlighting some of their social features as well as the custom sharing interface I’ve added. The article shown here is one lamenting the lost infrastructure of feed readers and hopes for future infrastructure from Jon Udell entitled Where’s my Net dashboard?

Conclusion and future

With either of the above setups, there are a few quick and easy clicks to create my posts and I’m done. Could it be simpler? Yes, but it likely won’t be much simpler until I’ve got a fully functional Microsub server and reader up and working.

Of course, I also love Inoreader and its huge variety of features and great usability. While I’m patiently awaiting having my own WordPress Microsub server, I certainly wouldn’t mind it if Inoreader decided to add some IndieWeb functionality itself. Then perhaps I wouldn’t need to make the switch in the near future.

What would this look like? It could include the ability to log into Inoreader using my own website via the IndieAuth protocol. It could also add Micropub functionality to allow me to post all these things directly and explicitly to my website in an easier manner. And finally, if they really wanted to go even further, they could make themselves a Microsub server, enabling me to use any one of several Microsub clients to read content and post to my own website. And of course the benefit to Inoreader is that if they support these open internet specifications, then their application not only works with WordPress sites with the few appropriate plugins, but also with a huge variety of other content management systems that support these specs as well.

Whether or not Inoreader supports these protocols, there is a coming wave of new social feed readers that will begin to close many of the functional gaps that made RSS difficult. I know things will slowly, but eventually, get better, simpler, and easier to use. Soon posting to one’s website and doing two-way communication on the internet via truly social readers will be a reality, and one that’s likely to make it far easier to eschew the toxicity and problems of social sites like Facebook and Twitter.


Spoils from a cow party

A few months back, I got roped into joining in on a co-op purchase of a whole cow that was estimated to come in at around 600 pounds. It was grass fed, organically raised, and was to be humanely butchered, packaged, and frozen. I made an initial $200 deposit, and this morning I paid the $290.50 balance at what was billed as a “Cow Party”.

Thirteen of the “partners” got together at 11:30am to draw lots to form a line to take turns choosing individual cuts from the cow. Though it was just one entire cow, the butcher threw in some additional tongues, testicles, and other additional offal for us to select from as well.

Here’s what was included in my 18 turns:
1.2 lb New York steak, bone in
1.0 lb New York steak, bone in
1.2 lb New York steak, bone in
1.4 lb Ribeye, bone in
0.5 lb top sirloin steak
0.5 lb top sirloin steak
2.4 lb bottom round roast
2.5 lb beef tritip large
0.4 lb top sirloin steak
1.9 lb beef short ribs
1.1 lb stew beef
1.1 lb stew beef
1.1 lb stew beef
1.2 lb beef cheek
1.4 lb beef oxtails
0.9 lb beef oxtails
1.4 lb beef testicles
1.3 lb beef fat

Everyone also went home with an additional box of ground beef. Mine contained 16 packages of 1lb each of ground meat as well as 2.2lbs of beef ground with heart and 0.9lbs of beef ground with liver.

This comes out to 41.6 pounds of meat in all at a total cost of $490.50, or roughly $11.79 per pound.

Sadly I missed out on some nice shoulder cuts, some tongue, and I had tried to get some tripe instead of the testicles, but alas, there were apparently other menudo fans in our group.

My freezer is chock full of some serious meat for a while. Most of the cuts are fairly straightforward and I’ve already got a good idea of what I’m going to do with them. I will have to take a peek at what I ought to do for the Rocky Mountain Oysters. I’m leaning toward turning them into some delicious tacos, but I’ll take any suggestions from those who’ve done other variations before.

I do want to make an inventory of the price per pound for individual cuts versus typical markets to see how the pricing works out, as I suspect that some likely did better than others within the “lottery” system this set up. The tough part will be finding local markets that purvey this high a quality of meat for a reasonable comparison.

Deplatforming and making the web a better place

I’ve spent some time this morning thinking about the deplatforming of the abhorrent social media site Gab.ai by Google, Apple, Stripe, PayPal, and Medium following the Tree of Life shooting in Pennsylvania. I’ve created a deplatforming page on the IndieWeb wiki with some initial background and history. I’ve also gone back and tagged (with “deplatforming”) a few articles I’ve read or podcasts I’ve listened to recently that may have some interesting bearing on the topic.

The particular design question I’m personally looking at is roughly:

How can we reshape the web and social media in a way that allows individuals and organizations a platform for their own free speech and communication without accelerating or amplifying the voices of the abhorrent fringes of people espousing broadly anti-social values like virulent discrimination, racism, fascism, etc.?

In some sense, the advertising driven social media sites like Facebook, Twitter, et al. have given the masses the equivalent of not simply a louder voice within their communities, but potential megaphones to audiences previously far, far beyond their reach. When monetized against the tremendous value of billions of clicks, there is almost no reason for these corporate giants to filter or moderate socially abhorrent content.  Their unfiltered and unregulated algorithms compound the issue from a societal perspective. I look at it in some sense as the equivalent of the advent of machine guns and ultimately nuclear weapons in 20th century warfare and their extreme effects on modern society.

The flip side of the coin is also potentially to allow users the ability to better control and/or filter out what they’re presented on platforms and thus consuming, so solutions can relate to both the output as well as the input stages.

Comments and additions to the page (or even here below) particularly with respect to positive framing and potential solutions on how to best approach this design hurdle for human communication are more than welcome.


Deplatforming

Deplatforming or no platform is a form of banning in which a person or organization is denied the use of a platform (physical or increasingly virtual) on which to speak.

In addition to the banning of those with socially unacceptable viewpoints, there has been a long history of marginalized voices (particularly trans, LGBTQ, sex workers, etc.) being deplatformed in systematic ways.

The banning can be from any of a variety of spaces, ranging from physical meeting spaces and lectures, to journalistic coverage in newspapers or on television, to domain name registration, web hosting, and even specific social media platforms like Facebook or Twitter. Some have used these terms as narrowly as in relation to having their Twitter “verified” status removed.

“We need to puncture this myth that [deplatforming]’s only affecting far-right people. Trans rights activists, Black Lives Matter organizers, LGBTQI people have been demonetized or deranked. The reason we’re talking about far-right people is that they have coverage on Fox News and representatives in Congress holding hearings. They already have political power.” — Deplatforming Works: Alex Jones says getting banned by YouTube and Facebook will only make him stronger. The research says that’s not true. in Motherboard, 2018-08-10

Examples

Glenn Beck

Glenn Beck parted ways with Fox News in what some consider to have been a network deplatforming. He ultimately moved to his own platform consisting of his own website.

Reddit Communities

Reddit has previously banned several communities on its platform. Many of the individual users decamped to Voat, which like Gab could potentially face its own subsequent deplatforming.

Milo Yiannopoulos

Milo Yiannopoulos, the former Breitbart personality, was permanently banned from Twitter in 2016 for inciting targeted harassment campaigns against actress Leslie Jones. He resigned from Breitbart over comments he made about pedophilia on a podcast. These also resulted in the termination of a book deal with Simon & Schuster as well as the cancellation of multiple speaking engagements at universities.

The Daily Stormer

Neo-Nazi site The Daily Stormer was deplatformed by Cloudflare in the wake of 2017’s “Unite the Right” rally in Charlottesville. Following criticism, Matthew Prince, Cloudflare’s CEO, announced that he was ending the Daily Stormer’s relationship with Cloudflare, which provides services for protecting sites against distributed denial-of-service (DDoS) attacks and maintaining their stability.

Alex Jones/Infowars

Alex Jones and his Infowars were deplatformed by Apple, Spotify, YouTube, and Facebook in late summer 2018 for his network’s false claims about the Sandy Hook Elementary School shooting in Newtown.

Gab

Gab.ai was deplatformed from PayPal, Stripe, Medium, Apple, and Google as a result of providing a platform for alt-right and racist groups, including the shooter in the Tree of Life synagogue shooting in October 2018.

Gab.com is under attack. We have been systematically no-platformed by App Stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure that justice is served for the horrible atrocity committed in Pittsburgh. Gab will continue to fight for the fundamental human right to speak freely. As we transition to a new hosting provider Gab will be inaccessible for a period of time. We are working around the clock to get Gab.com back online. Thank you and remember to speak freely.

—from the Gab.ai homepage on 2018-10-29

History

Articles

Research

See Also

  • web hosting
  • why
  • shadow banning
  • NIPSA
  • demonetization – a practice (particularly leveled at YouTube) of preventing users and voices from monetizing their channels. This can have a chilling effect on people who rely on traffic for income to support their work (see also 1)

Following local Altadena and Pasadena News

I’ve been thinking more about local news lately, so I’ve taken some time to aggregate some of my local news sources. While I live in the Los Angeles area, it’s not like I’m eschewing the Los Angeles Times, but I wanted to go even more uber-local than this. Thus I’m looking more closely at my local Altadena and Pasadena news outlets. I’m a bit surprised to see just how many small outlets and options I’ve got! People say local news is dying or dead, so I thought I would only find two or three options–how wrong could I have been?

In addition to some straightforward journalistic news sources, I’ve also included some additional local-flavor news from town councils, the chamber of commerce, historical societies, etc., which have websites that produce feeds with occasional news items.

Going forward you can see these sources aggregated on my following page.

For those who are interested, I’ve created an OPML file which contains the RSS feeds of all these sources. Naturally most have other social media presences, but there’s usually no guarantee that if you followed them that way you’d actually see the news you wanted.
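For anyone curious how such an OPML file is consumed programmatically, here’s a minimal sketch, using only the Python standard library, of extracting feed URLs from an OPML document; the sample outlet and URLs below are made up:

```python
# Sketch: pull the xmlUrl of every feed out of an OPML subscription list.
import xml.etree.ElementTree as ET

def feed_urls(opml_text):
    """Return the xmlUrl attribute of every outline element that has one."""
    root = ET.fromstring(opml_text)
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

sample = """<?xml version="1.0"?>
<opml version="2.0"><body>
  <outline text="Pasadena news">
    <outline text="Example Outlet" type="rss" xmlUrl="https://example.com/feed"/>
  </outline>
</body></opml>"""

print(feed_urls(sample))
```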

If anyone is aware of other sources, I’m happy to add them to the list.

Here’s the initial list of sources:

An IndieWeb talk at WordCamp Riverside in November 2018

I’ve submitted a talk for WordCamp Riverside 2018; it has been accepted.

My talk will help to kick off the day at 10am on Saturday morning in the “John Hughes High” room. The details for the camp and a link to purchase tickets can be found below.

WordCamp Riverside 2018

hosted at SolarMax, 3080 12th St., Riverside, CA 92507
Tickets are available now

Given that “Looking back to go forward” is the theme of the camp this year, I think I may have chosen the perfect topic. To some extent I’m going to look at how the nascent web has recently continued evolving from where it left off around 2006 before everyone abandoned it to let corporate silo services like Facebook and Twitter become responsible for how we use the web. We’ll talk about how WordPress can be leveraged to do a better job than “traditional” social media with much greater flexibility.

Here’s the outline:

The web is my social network: How I use WordPress to create the social platform I want (and you can too!)

Synopsis: Growing toxicity on Twitter, Facebook’s Cambridge Analytica scandal, algorithmic feeds, and a myriad of other problems have opened our eyes to the ever-growing costs of social media. Walled gardens have trapped us with the promise of “free” while addicting us to their products at the cost of our happiness, sense of self, sanity, and privacy. Can we take back our fractured online identities, data, and privacy to regain what we’ve lost?

I’ll talk about how I’ve used IndieWeb philosophies and related technologies in conjunction with WordPress as a replacement for my social presence while still allowing easy interaction with friends, family, and colleagues online. I’ll show how everyone can easily use simple web standards to make WordPress a user-controlled, first-class social platform that works across domains and even other CMSes.

Let’s democratize social media using WordPress and the open web, the last social network you’ll ever need to join.

Intended Audience: The material is introductory in nature and targeted at beginner and intermediate WordPressers, but will provide a crash course on a variety of bleeding-edge W3C specs and tools for developers and designers who want to delve into them at a deeper level. Applications for the concepts can be valuable to bloggers, content creators, businesses, and those who are looking to better own their online content and identities without allowing corporate interests out-sized influence over their online presence.

I look forward to seeing everyone there!

Extending a User Interface Idea for Social Reading Online

This morning I was reading an article online and I bookmarked it as “read” using the Reading.am browser extension which I use as part of my workflow of capturing all the things I’ve been reading on the internet. (You can find a feed of these posts here if you’d like to cyber-stalk most of my reading–I don’t post 100% of it publicly.)

I mention it because I was specifically intrigued by a small piece of excellent user interface and social graph data that Reading.am unearths for me. I’m including a quick screen capture to better illustrate the point. While the UI allows me to click yes/no (i.e. did I like it or not) or even share it to other networks, the thing I found most interesting was that it lists the other people using the service who have read the article as well. In this case it told me that my friend Jeremy Cherfas had read the article.1

Reading.am user interface indicating who else on the service has read an article.

In addition to having the immediate feedback that he’d read it, which is useful and thrilling in itself, it gives me the chance to search to see if he’s written any thoughts about it himself, and it also gives me the chance to tag him in a post about my own thoughts to start a direct conversation around a topic which I now know we’re both interested in at least reading about.2

The tougher follow up is: how could we create a decentralized method of doing this sort of workflow in a more IndieWeb way? It would be nice if my read posts on my site (and those of others) could be overlain on websites via a bookmarklet or other means as a social layer to create engaged discussion. Better would have been the ability to quickly surface his commentary, if any, on the piece as well–functionality which I think Reading.am also does, though I rarely ever see it. In some sense I would have come across Jeremy’s read post in his feed later this weekend, but it doesn’t provide the immediacy that this method did. I’ll also admit that I prefer having found out about his reading it only after I’d read it myself, but having his and others’ recommendations on a piece (by their explicit read posts) is a useful and worthwhile piece of data, particularly for pieces I might have otherwise passed over.

In some sense, some of this functionality isn’t too different from that provided by Hypothes.is, though that is hidden away within another browser extension layer and requires not only direct examination, but scanning for those whose identities I might recognize because Hypothes.is doesn’t have a specific following/follower social model to make my friends and colleagues a part of my social graph in that instance. The nice part of Hypothes.is’ browser extension is that it does add a small visual indicator to show that others have in fact read/annotated a particular site using the service.

A UI example of Hypothes.is functionality within the Chrome browser. The yellow highlighted browser extension bug indicates that others have annotated a document. Clicking the image will take one to the annotations in situ.

I’ve also previously documented on the IndieWeb wiki how WordPress.com (and WordPress.org with JetPack functionality) facepiles likes on content (typically underneath the content itself). This method doesn’t take things as far as the Reading.am case because it only shows a small fraction of the data, is much less useful, and is far less likely to unearth those in your social graph to make it useful to you, the reader.

WordPress.com facepiles likes on content which could surface some of this social reading data.

I seem to recall that Facebook has some similar functionality that is dependent upon how (and if) the publisher embeds Facebook into their site. I don’t think I’ve seen this sort of interface built into another service this way and certainly not front and center the way that Reading.am does it.

The closest thing I can think of to this type of functionality in the analog world was in my childhood, when the checkout card in a library book showed the names of prior patrons above yours as you signed your own name to borrow it, though this also had the large-world problem that WordPress likes have, in that one typically wouldn’t have known many of the names of the prior patrons. I suspect that the Robert Bork privacy incident, along with the evolution of library databases and bar codes, has caused this older system to disappear.

This general idea might make an interesting topic to explore at an upcoming IndieWebCamp, if not before. The question is: how to add in the social graph aspect of reading to uncover this data? I’m also curious how it might or might not be worked into a feed reader or into Microsub-related technologies. Microsub clients or related browser extensions might make a great place to add this functionality, as they would have the data about whom you’re already following (aka your social graph) as well as access to their read/like/favorite posts. I know that some users have reported consuming feeds of friends’ reads, likes, favorites, and bookmarks as potential recommendations of things they might be interested in reading, so perhaps this could be an extension of that as well?


[1] I’ve certainly seen this functionality before, but most often the other readers are people I don’t know or know that well because the service isn’t huge and I’m not using it to follow a large number of other people.
[2] I knew he was generally interested already as I happen to be following this particular site at his prior recommendation, but the idea still illustrates the broader point.

Some ideas about tags, categories, and metadata for online commonplace books and search

Earlier this morning I was reading The Difference Between Good and Bad Tags and the discussion of topics versus objects got me thinking about semantics on my website in general.

People often ask why WordPress has both a Category and a Tag functionality, and to some extent it would seem to be just for this thing–differentiating between topics and objects–or at least it’s how I have used it and perceived others doing so as well. (Incidentally from a functionality perspective categories in the WordPress taxonomy also have a hierarchy while tags do not.) I find that I don’t always do a great job at differentiating between them nor do I do so cleanly every time. Typically it’s more apparent when I go searching for something and have a difficult time in finding it as a result. Usually the problem is getting back too many results instead of a smaller desired subset. In some sense I also look at categories as things which might be more interesting for others to subscribe to or follow via RSS from my site, though I also have RSS feeds for tags as well as for post types/kinds as well.

I also find that I have a subtle differentiation using singular versus plural tags which I think I’m generally using to differentiate between the idea of “mine” versus “others”. Thus the (singular) tag for “commonplace book” should be a reference to my particular commonplace book versus the (plural) tag “commonplace books” which I use to reference either the generic idea or the specific commonplace books of others. Sadly I don’t think I apply this “rule” consistently either, but hope to do so in the future.

I’ve also been playing around with some more technical tags like math.NT (standing for number theory), following the lead of arXiv.org. While I would generally have used a tag “number theory”, I’ve been toying around with the idea of using the math.XX format for more technical related research on my site and the more human readable “number theory” for the more generic popular press related material. I still have some more playing around with the idea to see what shakes out. I’ve noticed in passing that Terence Tao uses these same designations on his site, but he does them at the category level rather than the tag level.

Now that I’m several years into such a system, I should probably spend some time going back and broadening out the topic categories (I arbitrarily attempt to keep the list small–in part for public display/vanity reasons, but it’s relatively easy to limit what shows to the public in my category list view.) Then I ought to do a bit of clean up within the tags themselves which have gotten unwieldy and often have spelling mistakes which cause searches to potentially fail. I also find that some of my auto-tagging processes by importing tags from the original sources’ pages could be cleaned up as well, though those are generally stored in a different location on my website, so it’s not as big a deal to me.

Naturally I find myself also thinking about the ontogeny/phylogeny problems of how I do these things versus how others at large do them, so feel free to chime in with your ideas, especially if you take tags/categories for your commonplace book/website seriously. I’d like to ultimately circle back around on this with regard to the more generic tagging done from a web-standards perspective within the IndieWeb and Microformats communities. I notice almost immediately that the “tag” and “category” pages on the IndieWeb wiki redirect to the same page, yet there are various microformats, including u-tag-of and u-category, which are related but have slightly different meanings at first blush. (There is in fact an example on the IndieWeb “tag” page which includes both of these classes, neither of which seems to be documented at the Microformats site.) I should also dig around to see what Kevin Marks or the crew at Technorati must surely have written a decade or more ago on the topic.
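As a rough illustration of the difference (the markup below is my own sketch, not a canonical example from either wiki): `u-category` marks a tag on the post itself, while `u-tag-of` appears on a standalone tag post pointing at the page being tagged.

```html
<!-- A post tagging itself with "commonplace books" via u-category -->
<article class="h-entry">
  <p class="p-name">Some notes on tagging</p>
  <a class="u-category" href="/tag/commonplace-books">commonplace books</a>
</article>

<!-- A standalone tag post applying a tag to someone else's page via u-tag-of -->
<article class="h-entry">
  <a class="u-tag-of" href="https://example.com/their-post">their post</a>
  <a class="u-category" href="/tag/indieweb">IndieWeb</a>
</article>
```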


cc: Greg McVerry, Aaron Davis, Ian O’Byrne, Kathleen Fitzpatrick, Jeremy Cherfas

Refback from IndieWeb Chat

It took me a moment to realize what it was exactly, since I hadn’t yet added a field to indicate it, but since the IndieWeb chat doesn’t send Webmentions by itself, I’m glad I support refbacks so I can be aware of comments on my posts. The avatar didn’t come through quite like it should, but it’s nice to be able to treat refbacks like any other type of mention.
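For anyone unfamiliar with how refbacks work under the hood: the receiving site notices an unfamiliar Referer header in its logs, fetches that page, and verifies that it really links to the post in question. Here’s a minimal sketch of that core verification step with a made-up sample page (fetching the referring page over the network is omitted):

```python
# Sketch: verify a refback by checking that the referring page's HTML
# actually contains a link to the target post.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> element in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def verify_refback(referrer_html, target_url):
    """True if the referring page contains a link to target_url."""
    collector = LinkCollector()
    collector.feed(referrer_html)
    return target_url in collector.links

page = '<p>Discussed at <a href="https://example.com/my-post">this post</a></p>'
print(verify_refback(page, "https://example.com/my-post"))
```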

Thanks David Shanske for the Refbacks plugin. Thanks Tantek for what I think is my first incoming “mention” from chat.

The chat has some reasonable microformats markup, so I suppose the parser could do a more solid job, but this is a pretty great start. Sadly, Refback isn’t as real-time as Webmention, but it’s better than nothing.

My first mention (aka refback) from the IndieWeb chat. Click on the photo to see the UI display on my site.

I suppose we could all be posting our chats on our own sites and syndicating them into places like IRC to own both directions of the conversation, but I haven’t yet gotten around to that other half, at least for WordPress. (I do recall getting syndication to IRC working for WithKnown a while back via a plugin.)

Gems And Astonishments of Mathematics: Past and Present—Lecture One

Last night was the first lecture of Dr. Miller’s Gems And Astonishments of Mathematics: Past and Present class at UCLA Extension. There are a good 15 or so people in the class, so there’s still room (and time) to register if you’re interested. While Dr. Miller typically lectures on one broad topic for a quarter (or sometimes two) in which the treatment continually builds heavy complexity over time, this class will cover 1-2 much smaller particular mathematical problems each week. Thus week 11 won’t rely on knowing all the material from the prior weeks, which may make things easier for some who are overly busy. If you have the time on Tuesday nights and are interested in math or love solving problems, this is an excellent class to consider. If you’re unsure, stop by one of the first lectures on Tuesday nights from 7-10 to check them out before registering.

Lecture notes

For those who may have missed last night’s first lecture, I’m linking to a Livescribe PDF document which includes the written notes as well as the accompanying audio from the lecture. If you view it in Acrobat Reader version X (or higher), you should be able to access the audio portion of the lecture and experience it in real time almost as if you had been present in person. (Instructions for using Livescribe PDF documents.)

We’ve covered the following topics:

  • Class Introduction
  • Erdős Discrepancy Problem
    • n-cubes
    • Hilbert’s Cube Lemma (1892)
    • Schur (1916)
    • Van der Waerden (1927)
  • Sylvester’s Line Problem (partial coverage to be finished in the next lecture)
    • Ramsey Theory
    • Erdős (1943)
    • Gallai (1944)
    • Steinberg’s alternate (1944)
  • De Bruijn and Erdős (1948)
    • Motzkin (1951)
    • Dirac (1951)
    • Kelly & Moser (1958)
    • Tao-Green Proof
  • Homework 1 (homeworks are generally not graded)

Over the coming days and months, I’ll likely bookmark some related papers and research on these and other topics in the class using the class identifier MATHX451.44 as a tag in addition to topic specific tags.

Course Description

Mathematics has evolved over the centuries not only by building on the work of past generations, but also through unforeseen discoveries or conjectures that continue to tantalize, bewilder, and engage academics and the public alike. This course, the first in a two-quarter sequence, is a survey of about two dozen problems—some dating back 400 years, but all readily stated and understood—that either remain unsolved or have been settled in fairly recent times. Each of them, aside from presenting its own intrigue, has led to the development of novel mathematical approaches to problem solving. Topics to be discussed include (Google away!): Conway’s Look and Say Sequences, Kepler’s Conjecture, Szilassi’s Polyhedron, the ABC Conjecture, Benford’s Law, Hadamard’s Conjecture, Parrondo’s Paradox, and the Collatz Conjecture. The course should appeal to devotees of mathematical reasoning and those wishing to keep abreast of recent and continuing mathematical developments.

Suggested Prerequisites

Some exposure to advanced mathematical methods, particularly those pertaining to number theory and matrix theory. Most in the class are taking the course for “fun” and the enjoyment of learning, so there is a huge breadth of mathematical abilities represented–don’t skip the course because you feel you’ll get lost.

Register now

I’ve written some general thoughts, hints, and tips on these courses in the past.

Renovated Classrooms

I’d complained to the UCLA administration before about how dirty the windows were in the Math Sciences Building, but they went even further than I expected in fixing the problem. Not only did they clean the windows, they put in new flooring, brand new modern chairs, wood paneling on the walls, new projection equipment, and new whiteboards! I particularly love the new swivel chairs, and it’s nice to have such a lovely new environment in which to study math.

The newly renovated classroom space in UCLA’s Math Sciences Building

Category Theory for Winter 2019

As I mentioned the other day, Dr. Miller has also announced (and reiterated last night) that he’ll be teaching a course on the topic of Category Theory for the Winter quarter coming up. Thus if you’re interested in abstract mathematics or areas of computer programming that use it, start getting ready!

The Sixth “R” of Open Educational Resources

The 5 R’s

I’ve seen the five R’s used many times in reference to the OER (Open Educational Resources) space. They include the ability to allow others to Retain, Reuse, Revise, Remix, and/or Redistribute content with the appropriate use of licenses. These are all incredibly powerful building blocks, but I feel like one particularly important one is missing–the ability to allow easy accretion of knowledge over time.

Version Control

Some in the educational community may not be aware of the more technical communities that use version control in their daily work. The concept is relatively simple, and there are a multitude of platforms and software tools to effectuate it, including Git, GitHub, GitLab, BitBucket, SVN, etc. In the old days of file and document maintenance, one might save different versions of the same general file to one’s hard drive under increasingly complex names–Syllabus.doc, Syllabus_revised.doc, Syllabus_revisedagain.doc, Syllabus_Final.doc, Syllabus_Final_Final.doc–and then try to puzzle out, from the names or the date and timestamps, which one was the current version of the file.

For the better part of a decade now, version control software has allowed people to maintain a single version of a particular document along with a timestamped list of changes kept internally, so users can make new updates or roll back to older versions of their work. While the programs themselves are internally complicated, the user interfaces are typically easy to use, and in less than a day one can master most of their functionality. Most importantly, these version control systems allow many people to work on the same file or resource at a time! This means that 10 or more people can be working on a textbook, for example, at the same time. They create a fork or clone of the particular project in their personal workspace, where they work on it and periodically save their changes. Then they can push their changes back to the original, or master, where they can be merged in to make a better overall project. If there are conflicts between changes, these can be settled relatively easily without much loss of time. (For those looking for additional details, I’ve previously written Git and Version Control for Novelists, Screenwriters, Academics, and the General Public, which contains a variety of detail and resources.) Version control should be a basic tool in every educator’s digital literacy toolbox.
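For educators who’ve never touched Git, the underlying idea can be sketched in a few lines of Python–a toy, in-memory illustration of “timestamped snapshots plus rollback,” not how real version control systems store their data:

```python
from datetime import datetime, timezone

class VersionedDocument:
    """A toy version store: every save keeps a timestamped snapshot."""

    def __init__(self, text=""):
        self.history = []  # list of (timestamp, text) snapshots, oldest first
        self.save(text)

    def save(self, text):
        """Record a new revision rather than overwriting the old one."""
        self.history.append((datetime.now(timezone.utc), text))

    @property
    def current(self):
        """The most recent revision's text."""
        return self.history[-1][1]

    def rollback(self, steps=1):
        """Discard the last `steps` revisions and return the restored text."""
        if steps >= len(self.history):
            raise ValueError("cannot roll back past the first revision")
        del self.history[-steps:]
        return self.current

doc = VersionedDocument("Syllabus, first draft")
doc.save("Syllabus, revised")
doc.save("Syllabus, revised again")
print(doc.current)     # prints the latest revision
print(doc.rollback())  # drops the last save and prints the prior revision
```

Real systems like Git add the pieces this toy lacks–branching, merging, and conflict resolution across many contributors–but the mental model of an append-only history you can step back through is the same.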

For the OER community, version control can add an additional level of power and capability to their particular resources. While some resources may be highly customized or single use resources, many of them, including documents like textbooks can benefit from the work of many hands in an accretive manner. If these resources are maintained in version controllable repositories then individuals can use the original 5 R’s to create their particular content.

But what if a teacher were to add several new and useful chapters to an open textbook? While it may be directly useful to their specific class, perhaps it’s also incredibly useful to the broader range of teachers and students who might use the original source in the future? If the teacher who forks the original source has a means of pushing their similarly licensed content back to the original in an easy manner, then not only will their specific class benefit from the change(s), but all future classes that might use the original source will have the benefit as well!

If you’re not sold on the value of version control, I’ll mention briefly that Microsoft spent $7.5 billion over the summer to acquire GitHub, one of the most popular version control and collaboration tools on the market. Given Microsoft’s push into the open space over the past several years, this certainly bodes well for both openness and version control for years to come.

Examples

A Math Text

As a simple example, let’s say that one professor writes the bulk of a mathematics text, but twenty colleagues all contribute handfuls of particular examples or exercises over time. Instead of individually hosting those exercises on their own sites or within their individual LMSes, where they’re unlikely to be easy for other adopters of the text to find, why not submit the changes back to the original to allow more options and flexibility to future teachers? Massive banks of problems will allow more flexibility for both teachers and students. Even if the additional problems aren’t maintained in the original text source, they’ll be easily accessible as adjunct materials for future adopters.

Wikipedia

One of the most powerful examples of the value of accretion in this manner is Wikipedia. While it’s somewhat different in form from the version control systems mentioned above, Wikipedia (and most wikis, for that matter) has built-in history views that allow users to see and track the trail of updates and changes over time. The Wikipedia in use today is vastly larger and more valuable than it was on its first birthday because it not only allows ongoing edits that improve it over time, but logs those improvements and makes them viewable in a version-controlled manner.

Google Documents

This is another example of an extensible OER platform that allows simple accretion. With the correct settings on a document, one can host an original and allow it to be available to others who can save it to their own Google Drive or other spaces. Leaving the ability for guests to suggest changes or to edit a document allows it to potentially become better over time without decreasing the value of the original 5 Rs.

Webmentions for Update Notifications

As many open educational resources are hosted online for easy retention, reuse, revision, remixing, and/or redistribution, keeping them updated with changes can be a difficult proposition. It may not always be the case that resources are maintained on a single platform like GitHub, or that users of these resources will know how to use these platforms or their functionality. As a potential “fix,” I can easily see leveraging the W3C-recommended Webmention specification as a means of keeping a tally of changes to resources online.

Let’s say Robin keeps a copy of her OER textbook on her WordPress website, where students and other educators can easily download and use it. More often than not, those using it are quite likely to host changed versions of it online as well. If their CMS supports the Webmention spec, as WordPress does via a simple plugin, then by providing a simple URL link crediting the original source–which they’re very likely to do anyway, as required by the Creative Commons license–their site will send a notification of the copy’s existence to the original. The original can then display the webmentions as traditional comments and thus provide links to the chain of branches of copies, which both the original creator and future users can follow to find individual changes. If nothing else, the use of Webmention will provide some direct feedback to the original author(s) to indicate their materials are being used. Commonly used education-facing platforms like WordPress, Drupal, WithKnown, Grav, and many others support the Webmention spec either natively or with very simple plugins.
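Under the hood, the flow is simple per the W3C Webmention spec: the sender fetches the target URL, discovers its advertised endpoint, and POSTs two form-encoded parameters, source and target. A minimal illustrative sketch (all URLs invented; discovery here only checks HTML link/anchor elements, while the full spec also allows an HTTP Link header):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlencode

class EndpointFinder(HTMLParser):
    """Find the first <link> or <a> element with rel="webmention"."""

    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint is not None or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").split()
        if "webmention" in rels and "href" in attrs:
            self.endpoint = attrs["href"]

def discover_endpoint(html, base_url):
    """Return the target page's Webmention endpoint as an absolute URL, or None."""
    finder = EndpointFinder()
    finder.feed(html)
    if finder.endpoint is None:
        return None
    return urljoin(base_url, finder.endpoint)

# Example: the original textbook page advertises its endpoint in its <head>.
page = '<html><head><link rel="webmention" href="/webmention" /></head></html>'
endpoint = discover_endpoint(page, "https://example.com/oer/textbook")
payload = urlencode({
    "source": "https://teacher.example.net/my-revised-copy",  # the page linking to the original
    "target": "https://example.com/oer/textbook",             # the original resource
})
print(endpoint)  # https://example.com/webmention
# An HTTP client would now POST `payload` to `endpoint` with
# Content-Type: application/x-www-form-urlencoded.
```

In practice a WordPress plugin does all of this automatically whenever a post containing the crediting link is published.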

Editorial Oversight

One of the issues some may see with pushing updates back to an original is potential resource bloat or lack of editorial oversight. This is already a common question on open source version control repositories, so there is a long and broad history of how these things are maintained or managed in cases where there is community disagreement, or where an original source’s maintainer dies, disappears, loses interest, or simply no longer maintains the original. In the end, as a community of educators, we owe it to ourselves and future colleagues to make an attempt at better maintaining, archiving, and allowing our work to accrete value over time.

The 6th R: Request Update

In summation, I’d like to request that we all start talking about the 6 R’s: the current five along with the addition of the ability to Request an update (or maybe pull Request, Recompile, or Report, to keep it in the R family?). OER is an incredibly powerful concept already, but it could be even more so with the ability to push new updates, or at least notifications of them, back to the original. Having this ability will make it far easier to spread and grow the value of the OER concept as well as to disrupt the education spaces OER evolved to improve.

Featured photo by Amador Loureiro on Unsplash

Our Daily Bread — A short 30 day podcast history of wheat and bread in very short episodes

Drop what you’re doing and immediately go out to subscribe to Our Daily Bread: A history of wheat and bread in very short episodes!


The illustrious and inimitable Jeremy Cherfas is producing a whole new form of beauty by talking about wheat and bread in a podcast for thirty days.

It’s bundled up as part of his longer-running Eat This Podcast series, which I’ve been savoring for years.

Now that you’re subscribed and your life will certainly be immeasurably better, a few thoughts about how awesome this all is…

Last December I excitedly ran across the all-too-well-funded podcast Modernist Breadcrumbs. While interesting and vaguely entertaining, it was an attempt to be a paean to bread while subtly masking the fact that it was an extended commercial for the book series Modernist Bread by Nathan Myhrvold and Francisco Migoya which had been released the month prior.

I trudged through the entire series (often listening at 1.5-2x speed) to pick up the worthwhile tidbits, but mostly being disappointed. As I finished listening to the series, I commented:

Too often I found myself wishing that Jeremy Cherfas had been picked up to give the subject a proper 10+ episode treatment. I suspect he’d have done a more interesting in-depth bunch of interviews and managed to weave a more coherent story out of the whole. Alas, twas never thus.

A bit later Jeremy took the time to respond to my comment:

I’ve no idea how the series actually came about, or what anyone aside from Chris really thought about it. It would be nice to see any kind of listener engagement, but it’s hard to find anything. There are three tweets over the entire series that use the show’s official tags.

Still, what’s done is done, and I doubt anyone would want to support another series all about bread. Or would they … ?

I’ll admit I did spend a few long and desperate weeks salivating with hope over that ominously hanging “Or would they…?” Ultimately, I let it pass, distracted by listening to Jeremy’s regular Eat This Podcast episodes. Then this past week I was bowled over to discover what has obviously been fermenting since.

I’d love to take credit for “planting the seed,” as it were, for this new endeavour, but I suspect that with the thousands upon thousands of adoring fans Mssr. Cherfas’ podcast has, he’s heard dozens of similar requests over the years. Even more likely, it’s his very own love of bread that spawned the urge (he does, after all, have a bread blog named Fornacalia!), but I’ll quietly bask as if I had my very own personal suggestion box and a first-class production staff at my beck and call to make me custom podcast content about food, science, and culture.

It’s always amazing to me how scintillating Jeremy Cherfas’ work manages to be. What is not to love about his editorial eye, his interview skills, his writing, and his production abilities? I’m ever astounded that his work is a simple one-man show and not the product of a 20-person production team.

I’m waiting for the day that the Food Network, The Cooking Channel, HGTV, or a network of their stripe (or perhaps NPR or PBS) discovers his supreme talent and steals him away from us to better fund and extend the reach of the culinary talent and story-telling he’s been churning out flawlessly for years now. (I’m selfishly hoping one of them snaps him up before some other smart, well-funded corporation steals him away from us for his spectacular communication abilities to dominate all his free time away from these food-related endeavors.)

Of course, if you’re a bit paranoid like me, perhaps you’d find his fantastic work is a worthwhile cause to donate to? Supporting his work means there’s more for everyone.

Now, to spend a moment writing up a few award nominations… perhaps the Beard first?


Think the unthinkable: My Version for the Future of Digital Teaching and Learning for EDU522

I’m still evolving what my version of the future of digital teaching and learning looks like, but I am certainly enamored of the idea of mixing in many ideas of the open internet and IndieWeb ways of approaching it all. Small, open, relatively standardized, and remixable pieces can hopefully help lower barriers to teachers and learners everywhere.

The ability to interact directly with a course website and the materials in a course using my own webspace/digital commonplace book via Webmention seems like a very powerful tool. I’m able to own/archive many or most of the course materials for later use and reflection. I’m also able to own all of my own work for later review or potential interaction with fellow classmates or the teacher. Having an easier ability to search my site for related materials to draw upon for use in synthesizing and creating new content, also owned on my own site, is particularly powerful.

Certainly there are some drawbacks and potential inequalities in a web-based approach, particularly for those who don’t have the immediate resources required to access materials, host their own site, own their own data, or even interact digitally. William Gibson has famously said, “The future is already here — it’s just not very evenly distributed.” Hopefully breaking down some of the barriers to accessibility in education for all will help the distribution.

There are also questions relating to how open things should really be, and how private (or not). Ideally teachers provide a large swath of openness, particularly when it comes to putting their materials into the commons for others to reuse or remix. Meanwhile, allowing students to be a bit more closed if they choose–to keep materials just for their own uses, to limit access to their own work and thoughts, or to limit the audience of their work (e.g., to teachers and fellow classmates)–is a good idea. Recent examples within the social media sphere related to context collapse have provided us with valuable lessons about how long things should last, who should own them, and how public they should be in the digital sphere. Students shouldn’t be penalized in the future for ideas they “tried on” while learning. Having the freedom and safety to make mistakes in a smaller arena can be a useful tool within teaching–those mistakes shouldn’t cost them again by being public at a later date. Some within the IndieWeb have already started experimenting with private webmentions and other useful tools like limiting audiences, which may help these ideas along despite their not yet existing in a simple implementation for the masses.

Naturally the open web can be a huge place, so having some control and direction is always nice. I’ve always thought students should be given a bit more control over where they’re going and what they want out of a given course as well as the ability to choose their own course materials to some extent. Still having some semblance of outline/syllabus and course guidelines can help direct what that learning will actually be.

Some of what I see in EDU522 is the beginning of the openness and communication I’ve always wanted to see in education and pedagogy. Hopefully it will stand as an example for others who come after us.

Written with Module One: Who Am I? in mind.

IndieWeb technology for online pedagogy

An ongoing case study

Very slick! Greg McVerry, a professor, can post all of the readings, assignments, etc. for his EDU522 online course on his own website, and I can indicate that I’ve read the pieces, watched the videos, or post my responses to assignments and other classwork (as well as to fellow classmates’ work and questions) on my own website while sending notifications via Webmention of all of the above to the original posts on their sites.

When I’m done with the course I’ll have my own archive of everything I did for the entire course (as well as copies on the Internet Archive, since I ping it as I go). His class website and my responses there could be used for the purposes of grading.

I can subscribe to his feed of posts for the class (or an aggregated one he’s made–sometimes known as a planet) and use the feed reader of choice to consume the content (and that of my peers’) at my own pace to work my way through the course.

This is a lot closer to what I think online pedagogy or even the use of a Domain of One’s Own in an educational setting could and should be. I hope other educators might follow suit based on our examples. As an added bonus, if you’d like to try it out, Greg’s three week course is, in fact, an open course for using IndieWeb and DoOO technologies for teaching. It’s just started, so I hope more will join us.

He’s focusing primarily on using WordPress as the platform of choice in the course, but one could just as easily use other Webmention enabled CMSes like WithKnown, Grav, Perch, Drupal, et al. to participate.

An IndieWeb Magazine on Flipboard

This morning I set up an IndieWeb Magazine on Flipboard. While it is “yet another silo”, it’s one that I can easily and automatically syndicate content from my site (and others) into. I’ve already seeded it with some recent posts for those who’d like to start reading.

Until more tools and platforms like micro.blog exist to make it easy for other Generation 2+ people to join the IndieWeb, I thought it made at least some sense to have some additional outreach locations to let them know about what the community is doing in a silo that they may be using.

While I’ll syndicate articles of a general and how-to nature there, I’m likely to stay away from posting anything too developer-centric.

If you’d like to contribute to the magazine there are methods for syndicating content into it via POSSE, which I’d recommend if you’re able to do so. Otherwise they have some useful bookmarklets, browser extensions, and other manual methods that you can use to add articles to the magazine. Click this link to join as a contributor. For additional information see also Flipboard Tools.

View my Flipboard Magazine.

📅 Virtual Homebrew Website Club Meetup on July 25, 2018

Are you building your own website? Indie reader? Personal publishing web app? Or some other digital magic-cloud proxy? If so, come on by and join a gathering of people with likeminded interests. Bring your friends who want to start a personal web site. Exchange information, swap ideas, talk shop, help work on a project…

Everyone of every level is welcome to participate! Don’t have a domain yet? Come along and someone can help you get started and provide resources for creating the site you’ve always wanted.

This virtual HWC meeting is for site builders who either can’t make a regular in-person meeting or don’t yet have critical mass to host one in their area. It will be hosted on Google Hangouts.

Homebrew Website Club Meetup – Virtual Americas

Location: Google Hangouts

  • 6:30 – 7:30 pm (Pacific): (Optional) Quiet writing hour
    Use this time to work on your project, ask for help, chat, or do some writing before the meeting.
  • 7:30 – 9:00 pm (Pacific): Meetup

More Details

Join a community of like-minded people building and improving their personal websites. Invite friends that want a personal site.

  • Work with others to help motivate yourself to create the site you’ve always wanted to have.
  • Ask questions about things you may be stuck on–don’t let stumbling blocks get in the way of having the site you’d like to have.
  • Finish that website feature or blog post you’ve been working on
  • Burn down that old website and build something from scratch
  • Share what you’ve gotten working
  • Demos of recent breakthroughs

Skill levels: Beginner, Intermediate, Advanced

Any questions? Need help? Need more information? Ask in chat: http://indiewebcamp.com/irc/today#bottom

RSVP

Add your optional RSVP in the comments below; by adding your indie RSVP via webmention to this post; or by RSVPing to one of the syndicated posts below:
Indieweb.org event: https://indieweb.org/events/2018-07-25-homebrew-website-club#Virtual_Americas
Twitter “event”: https://twitter.com/ChrisAldrich/status/1020460581038391296