Not a day goes by that I don’t run across a fantastic blog built or hosted on WordPress that looks gorgeous–they do an excellent job of making this pretty easy to accomplish.
but…
Invariably the blog’s author has a generic avatar (blech!) instead of a nice, warm and humanizing photo of their lovely face.
Or, perhaps, as a user, you’ve always wondered how some people qualified to have their photo included with their comment while you were left as an anonymous looking “mystery person” or a randomized identicon, monster, or even an 8-bit pixelated blob? The secret the others know will be revealed momentarily.
Which would you prefer?
Identicon: a face only the internet could love. Chris: a face only a mother could love.
An example of a fantastic blog covering the publishing space: after 11,476 articles, the author can't get his photo to show up.
Somehow, knowing how to replace that dreadful randomized block with an actual photo is too hard or too complicated. Why? In part, it’s because WordPress separated out this functionality as a decentralized service called Gravatar, which stands for Globally Recognized Avatar. In some sense this is an awesome idea because then people everywhere (and not just on WordPress) can use the Gravatar service to change their photo across thousands of websites at once. Unfortunately it’s not always clear that one needs to add their name, email address, and photo to Gravatar in order for the avatars to be populated properly on WordPress related sites.
(Suggestion for WordPress: Maybe the UI within the user account section could include a line about Gravatars?)
So instead of trying to write out the details for the third time this week, I thought I’d write it once here with a bit more detail and then point people to it for the future.
Another quick example
Can you guess which user is the blog’s author in the screencapture?
The correct answer is Anand Sarwate, the second commenter in the list. While Anand’s avatar seems almost custom made for a blog on randomness and information theory, it would be more inviting if he used a photo instead.
How to fix the default avatar problem
What is Gravatar?
Your Gravatar is an image that follows you from site to site appearing beside your name when you do things like comment or post on a blog. Avatars help identify your posts on blogs and web forums, so why not on any site?
Need some additional motivation? Watch this short video:
[wpvideo HNyK67JS]
Step 1: Get a Gravatar Account
If you’ve already got a WordPress.com account, this step is easy. Because the same corporate parent built both WordPress and Gravatar, if you have an account on one, you automattically have an account on the other which uses the same login information. You just need to log into Gravatar.com with your WordPress username and password.
If you don’t have a WordPress.com account or even a blog, but just want your photo to show up when you comment on WordPress and other Gravatar enabled blogs, then just sign up for an account at Gravatar.com. When you comment on a blog, it’ll ask for your email address and it will use that to pull in the photo to which it’s linked.
Step 2: Add an email address
Log into your Gravatar account. Choose an email address you want to modify: you’ll have at least the default you signed up with or you can add additional email addresses.
Step 3: Add a photo to go with that email address
Upload as many photos as you’d like into the account. Then for each of the email addresses you’ve got, associate each one with at least one of your photos.
Example: In the commenters’ avatars shown above, Anand was almost there. He already had a Gravatar account, he just hadn’t added any photos.
Step 4: Fill out the rest of your social profile
Optionally, you can add additional social details like a short bio, your other social media presences, and even one or more websites or blogs that you own.
Step 5: Repeat
You can add as many emails and photos as you’d like. By linking different photos to different email addresses, you’ll be able to change your photo identity based on the email “key” you plug into sites later.
If you get tired of one photo, just upload another and make it the default photo for the email addresses you want it to change for. All sites using Gravatar will update your avatar for use in the future.
Step 6: Use your email address on your WordPress account
WordPress screenshot of admin panel for user information.
In the field for the email, input (one of) the email(s) you used in Gravatar that’s linked to a photo.
Don’t worry, the system won’t show your email and it will remain private–WordPress and Gravatar simply use it as a common “key” to serve up the right photo and metadata from Gravatar to the WordPress site.
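(For the technically curious: that "key" is simply a hash of your email address. Below is a minimal sketch, assuming Gravatar's documented MD5-hash URL scheme, of how any site can turn an email address into an avatar URL; the address shown is a placeholder.)

// Sketch: build a Gravatar image URL from an email address (Node-style JavaScript).
// Gravatar hashes the trimmed, lowercased email and uses that hash as the key.
const crypto = require('crypto');

function gravatarUrl(email, size = 80) {
  const hash = crypto
    .createHash('md5')
    .update(email.trim().toLowerCase())
    .digest('hex');
  // d=identicon asks Gravatar to generate an identicon when no photo is linked.
  return `https://www.gravatar.com/avatar/${hash}?s=${size}&d=identicon`;
}

console.log(gravatarUrl('you@example.com'));
// -> https://www.gravatar.com/avatar/<32-character hash>?s=80&d=identicon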
Once you’ve clicked save, your new avatar should show up in the list of users. More importantly it’ll now show up in all of the WordPress elements (like most author bio blocks and in comments) that appear on your site.
Administrator Caveats
WordPress themes need to be Gravatar enabled to be able to use this functionality, but in practice most of them are, particularly in their comments sections. If yours isn't, you can usually add it with some simple code.
In the WordPress admin interface, go to Settings >> Discussion and, under the heading "Gravatar Hovercards", enable "View people's profiles when you mouse over their Gravatars" to let visitors see more information about you and the commenters on your blog (presuming the comment section of your theme is Gravatar enabled).
Some WordPress users have several user accounts that they use to administer their site: perhaps a secure administrator account used only for updates and upgrades, a personal (author/editor-level) account under their own name for authoring posts, and another (author/editor-level) account for posting admin notices or commenting as a generic moderator. In these cases, make sure that each account's email address is associated with a Gravatar account and has the desired photo linked to it. (One Gravatar account with multiple emails/photos will usually suffice, though they could be different accounts.)
Example: In Nate’s case above, we showed that his photo didn’t show in the author bio box, and it doesn’t show up in some comments, but it does show up in other comments on his blog. This is because he uses at least two different user accounts: one for authoring posts and another for commenting. The user account he uses for some commenting has a linked Gravatar account with email and photo and the other does not.
One account doesn’t have a Gravatar with a linked email and photo.
Want more information on how you can better own and manage your online identity? Visit IndieWeb.org: “A people-focused alternative to the ‘corporate web’.”
TL;DR
To beautify your web presence a bit: if you notice that your photo doesn't show up in the author block or comments in your theme, create an account on WordPress's sister site Gravatar.com (your WordPress.com username and password will work there). Uploading your preferred photo to Gravatar and linking it to an email address will automatically populate your photo both on your own site and on other WordPress sites (in comments) across the web. To make it work on your site, just go to your user profile in your WordPress install and use the same email address there as in your Gravatar account; the decentralized system will port your picture across automatically. If needed, you can use multiple photos and multiple linked email addresses in your Gravatar account to vary your photos.
For several years now, I've been meaning to do something more interesting with the notes, highlights, and marginalia from the various books I read. In particular, I've been meaning to do it for the non-fiction I read for research, and even more so for e-books, which tend to have more easily extractable notes given their electronic nature. This fits into the way I use this site as a commonplace book as well as the IndieWeb philosophy of owning all of one's own data.[1]
Over the past month or so, I’ve been experimenting with some fiction to see what works and what doesn’t in terms of a workflow for status updates around reading books, writing book reviews, and then extracting and depositing notes, highlights, and marginalia online. I’ve now got a relatively quick and painless workflow for exporting the book related data from my Amazon Kindle and importing it into the site with some modest markup and CSS for display. I’m sure the workflow will continue to evolve (and further automate) somewhat over the coming months, but I’m reasonably happy with where things stand.
The fact that the Amazon Kindle allows for relatively easy highlighting and annotation in e-books is excellent, but having the ability to sync to a laptop and do a one-click export of all of that data is incredibly helpful. Adding some simple CSS to the pre-formatted output gives me a reasonable base upon which to build for future writing/thinking about the material. In experimenting, I'm also coming to realize that simply owning the data isn't enough; now I'm driven to help make that data more directly useful to me and potentially to others.
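For anyone wanting to try something similar, the export side of the workflow can be surprisingly small. Here's a rough sketch of the idea in JavaScript, assuming the common "My Clippings.txt" layout (title line, metadata line, blank line, highlight text, with entries separated by "==========" dividers); my own workflow uses the Kindle desktop app's one-click export instead, so treat the field positions as assumptions.

// Sketch: parse a Kindle "My Clippings.txt" file into simple highlight objects.
const fs = require('fs');

function parseClippings(path) {
  const raw = fs.readFileSync(path, 'utf8');
  return raw
    .split('==========')               // each clipping ends with a divider line
    .map(block => block.trim())
    .filter(Boolean)
    .map(block => {
      const lines = block.split(/\r?\n/);
      return {
        title: lines[0].trim(),        // e.g. "Maps of Time (David Christian)"
        meta: (lines[1] || '').trim(), // e.g. "- Your Highlight on page 12 | Location 170-172 | Added on ..."
        text: lines.slice(3).join('\n').trim(),
      };
    });
}

// Usage: dump each highlight as an HTML blockquote ready to paste into a post.
for (const h of parseClippings('My Clippings.txt')) {
  console.log(`<blockquote class="kindle-highlight">${h.text}<cite>${h.title} (${h.meta})</cite></blockquote>`);
}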
As part of my experimenting, I’ve just uploaded some notes, highlights, and annotations for David Christian’s excellent text Maps of Time: An Introduction to Big History[2] which I read back in 2011/12. While I’ve read several of the references which I marked up in that text, I’ll have to continue evolving a workflow for doing all the related follow up (and further thinking and writing) on the reading I’ve done in the past.
I'm still reminded of Rick Kurtzman's sage advice to me when I was a young pisher at CAA in 1999: "If you read a script and don't tell anyone about it, you shouldn't have wasted the time having read it in the first place." His point was that if you don't try to pass along the knowledge you found by reading, you may as well give up. Even if the thing was terrible, at least say that much. In a digitally connected era, we no longer need to rely on nearly illegible scrawl in the margins to pollinate the world at a snail's pace.[4] Take those notes, marginalia, highlights, and metadata and release them into the world. The fact that this dovetails perfectly with Cesar Hidalgo's thesis in Why Information Grows: The Evolution of Order, from Atoms to Economies,[3] furthers my belief in having a better process for what I'm attempting here.
Hopefully in the coming months, I’ll be able to add similar data to several other books I’ve read and reviewed here on the site.
If anyone has any thoughts, tips, tricks for creating/automating this type of workflow/presentation, I’d love to hear them in the comments!
There is a relatively new candidate recommendation from the W3C for a game-changing social web specification called Webmention, which essentially makes it possible to do Twitter-like (or Medium-style) @mentions across the internet from site to site (as opposed to simply within a siloed site/walled garden like Twitter).
Webmention would allow me to write a comment to someone else's post on my own Tumblr site, for example, and then, because the URL of the post I'm replying to is included in my post (serving as the @mention), the other site (which could be on WordPress, Drupal, Tumblr, or anything really) that also supports Webmention could receive my comment and display it in its comment section.
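The mechanics behind that exchange are refreshingly simple: fetch the page being replied to, discover the Webmention endpoint it advertises, and POST it the source and target URLs. Here's a minimal sketch of the sending side in JavaScript (simplified from the spec: a complete sender also checks the HTTP Link header, and the URLs below are placeholders):

// Sketch: send a Webmention from my reply (source) to the post I'm replying to (target).
async function sendWebmention(source, target) {
  // 1. Discover the target's Webmention endpoint (simplified: HTML <link>/<a> elements only).
  const html = await (await fetch(target)).text();
  const match = html.match(/<(?:link|a)[^>]+rel=["'][^"']*webmention[^"']*["'][^>]*href=["']([^"']+)["']/i);
  if (!match) throw new Error('No Webmention endpoint advertised by target');
  const endpoint = new URL(match[1], target).toString();

  // 2. POST the source and target URLs as a form-encoded body.
  const response = await fetch(endpoint, {
    method: 'POST',
    body: new URLSearchParams({ source, target }),
  });
  return response.status; // 200/201/202 indicate the mention was accepted or queued
}

sendWebmention('https://my-site.example/reply-to-you', 'https://your-site.example/original-post')
  .then(status => console.log('Webmention sent, status', status));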
Given the tremendous number of sites (and multi-platform sites) on which Disqus operates, it would be an excellent candidate to support the Webmention spec and enable a huge amount of inter-site activity on the internet. First, it could include the snippet of code that allows the site on which a comment is originally written to send Webmentions; second, it could include the snippet of code that allows a site to receive them. The current Disqus infrastructure could also serve to reduce spam and display those comments in a pretty way. Naturally Disqus could continue to serve the same social functionality it has in the past.
Aggregating the conversation across the Internet into one place
Making things even more useful, there's currently a free third-party service called Brid.gy which uses the open APIs of Twitter, Facebook, Instagram, Google+, and Flickr to bootstrap them into sending these Webmentions or inter-site @mentions. What does this mean? After signing up at Brid.gy, I could potentially create a post on my Disqus-enabled Tumblr (or WordPress, or other site), share that post with its URL to Facebook, and any comments or likes made on the Facebook post would be sent as Webmentions to the comments section on my Tumblr site as if they'd been made there natively. (Disqus could add the metadata to indicate the permalink and location where the comment originated.) This means I could receive comments on my blog/site from Twitter, Facebook, Instagram, G+, etc. without a huge amount of overhead, and even better, instead of being spread out across multiple different places, the conversation around my original piece of content would be gathered together with the original!
Comments could be displayed inline naturally, and likes could be implemented as a facepile in the UI either above or below the typical comment section. By enabling the sending and receiving of Webmentions, Disqus could further corner the market on comments. Even easier for Disqus, a lot of the code has already been written and is open source.
Web 3.0?
I believe that Webmention, when implemented, is going to cause a major sea change in the way people use the web. Dare I say Web 3.0?!
Over the years I almost feel like I've tried to max out the number of web services I could sign up for. I was always on the lookout for that new killer app or social service, so I've tried almost all of them at one point or another. As best I can remember, I've had at least 179, and likely there are very many more that I'm simply forgetting. Research indicates it is difficult enough to keep track of 150 people, much less that many people spread across that many websites.
As an exercise, I’ve made an attempt to list all of the social media and user accounts I’ve had on the web since the early/mid-2000s. They’re listed below at the bottom of this post and broken up somewhat by usage area and subject for ease of use. I’ll maintain an official list of them here.
This partial list may give many others the opportunity to see how fragmented their own identities can be on the web. Who are you, and to which communities do you belong, when you live in multiple different places? I feel the list also shows the immense value inherent in the IndieWeb philosophy of owning one's own domain and data. The value of the IndieWeb is even more apparent when I think of all the defunct, abandoned, shut down, or bought-out web services I've used, which I've done my best to list at the bottom.
When I think of all the hours of content that I and others have created and shared on some of these defunct sites, for which we'll never recover the data, I almost want to sob. Instead, I've promised only to cry, "Never again!" People interested in more of the vast volumes of data lost are invited to look at this list of site-deaths, which is itself far from comprehensive.
No more digital sharecropping
Over time, I’ll make an attempt, where possible, to own the data from each of the services listed below and port it here to my own domain. More importantly, I refuse to do any more digital sharecropping. I’m not creating new posts, status updates, photos, or other content that doesn’t live on my own site first. Sure I’ll take advantage of the network effects of popular services like Twitter, Facebook, and Instagram to engage my family, friends, and community who choose to live in those places, but it will only happen by syndicating data that I already own to those services after-the-fact.
What about the interactive parts? The comments and interactions on those social services?
Through the magic of new web standards like Webmention (essentially internet-wide @mention functionality similar to that on Twitter, Medium, and even Facebook) and a fantastic service called Brid.gy, all the likes and comments on my syndicated material from Twitter, Facebook, Google+, Instagram, and others come back directly to my own website as comments on the original posts, and I get direct notifications of them. Those with websites that support Webmention natively can write their comments to my posts directly on their own site and rely on the protocol to automatically notify me of their response.
Isn’t this beginning to sound to you like the way the internet should work?
One URL to rule them all
When I think back on setting up these hundreds of digital services, I nearly wince at all the time and effort I’ve spent inputting my name, my photo, or even just including URL links to my Facebook and Twitter accounts.
Now I have one and only one URL that I can care about and pay attention to: my own!
Join me for IndieWebCamp Los Angeles
I’ve written in bits about my involvement with the IndieWeb in the past, but I’ve actually had incoming calls over the past several weeks from people interested in setting up their own websites. Many have asked: what is it exactly? how can they do something similar? is it hard?
My answer is that it isn’t nearly as hard as you might have thought. If you can manage to sign up and maintain your Facebook account, you can put together all the moving parts to have your own IndieWeb enabled website.
“But, Chris, I’m still a little hesitant…”
Okay, how about I (and many others) offer to help you out? I'm going to be hosting IndieWebCamp Los Angeles over the weekend of November 5th and 6th in Santa Monica. I'm inviting you all to attend with the hope that by the time the weekend is over, you'll have not only a significant start, but also the tools, resources, and confidence to continue making improvements over time.
IndieWebCamp Los Angeles
Pivotal, 1333 2nd Street, Suite 200, Santa Monica, CA 90401, United States
It may take me a week or so to finish putting some general thoughts and additional resources together based on the two day conference so that I might give a more thorough accounting of my opinions as well as next steps. Until then, I hope that the details and mini-archive of content below may help others who attended, or provide a resource for those who couldn’t make the conference.
Overall, it was an incredibly well programmed and run conference, so kudos to all those involved who kept things moving along. I'm now certainly much more aware of the gaping memory hole the internet is facing despite the heroic efforts of a small handful of people and institutions attempting to improve the situation. I'll try to go into more detail later about a handful of specific topics and next steps, as well as a listing of resources I came across which may prove to be useful tools for those in both the archiving/preserving and IndieWeb communities.
Archive of materials for Day 2
Audio Files
Below are the recorded audio files embedded in .m4a format (using a Livescribe Pulse Pen) for several sessions held throughout the day. To my knowledge, none of the breakout sessions were recorded except for the one which appears below.
Summarizing archival collections using storytelling techniques
Presentation: Summarizing archival collections using storytelling techniques by Michael Nelson, Ph.D., Old Dominion University
Saving the first draft of history
Special guest speaker: Saving the first draft of history: The unlikely rescue of the AP’s Vietnam War files by Peter Arnett, winner of the Pulitzer Prize for journalism
Kiss your app goodbye: the fragility of data journalism
Panel: Kiss your app goodbye: the fragility of data journalism
Featuring Meredith Broussard, New York University; Regina Lee Roberts, Stanford University; Ben Welsh, The Los Angeles Times; moderator Martin Klein, Ph.D., Los Alamos National Laboratory
The future of the past: modernizing The New York Times archive
Panel: The future of the past: modernizing The New York Times archive
Featuring The New York Times Technology Team: Evan Sandhaus, Jane Cotler and Sophia Van Valkenburg; moderated by Edward McCain, RJI and MU Libraries
Lightning Rounds: Six Presenters
Lightning rounds (in two parts)
Six + one presenters: Jefferson Bailey, Terry Britt, Katherine Boss (and team), Cynthia Joyce, Mark Graham, Jennifer Younger and Kalev Leetaru
1. Jefferson Bailey, Internet Archive: "Supporting Data-Driven Research using News-Related Web Archives"
2. Terry Britt, University of Missouri: "News archives as cornerstones of collective memory"
3. Katherine Boss, Meredith Broussard and Eva Revear, New York University: "Challenges facing preservation of born-digital news applications"
4. Cynthia Joyce, University of Mississippi: "Keyword 'Katrina': Re-collecting the unsearchable past"
5. Mark Graham, Internet Archive/The Wayback Machine: "Archiving news at the Internet Archive"
6. Jennifer Younger, Catholic Research Resources Alliance: "Digital Preservation, Aggregated, Collaborative, Catholic"
7. Kalev Leetaru, senior fellow, The George Washington University and founder of the GDELT Project: "A Look Inside The World's Largest Initiative To Understand And Archive The World's News"
Technology and Community
Presentation: Technology and community: Why we need partners, collaborators, and friends by Kate Zwaard, Library of Congress
Breakout: Working with CMS
Working with CMS, led by Eric Weig, University of Kentucky
Alignment and reciprocity
Alignment & reciprocity by Katherine Skinner, Ph.D., executive director, the Educopia Institute
Closing remarks
Closing remarks by Edward McCain, RJI and MU Libraries and Todd Grappone, associate university librarian, UCLA
Live Tweet Archive
Reminder: In many cases my tweets don't reflect direct quotes of the attributed speaker, but are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many original words of the participant as possible. Typically, for speed, there wasn't much editing of these notes. Below I've changed the attribution of one or two tweets to reflect the proper person(s). For convenience, I've also added a few hyperlinks to useful resources after the fact that I didn't have time to include in the original tweets. I've attached .m4a audio files of most of the audio for the day (apologies for shaky quality as it's unedited) which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it. Presumably they will release the video on their website for a more immersive experience.
Peter Arnett:
Condoms were required issue in Vietnam–we used them to waterproof film containers in the field.
Do not stay close to the head of a column, medics, or radiomen. #warreportingadvice
I told the AP I would undertake the task of destroying all the reporters’ files from the war.
Instead the AP files moved around with me.
Eventually the 10 trunks of material went back to the AP when they hired a brilliant archivist.
“The negatives can outweigh the positives when you’re in trouble.”
Today I spent the majority of the day attending the first of a two-day conference at UCLA's Charles Young Research Library entitled "Dodging the Memory Hole: Saving Online News." While I knew mostly what I was getting into, it hadn't really occurred to me how much of what is on the web is not backed up or archived in any meaningful way. It's human nature to neglect backing up one's own data, but huge swaths of really important data with newsworthy and historic value are being heavily neglected. Fortunately it's an interesting enough problem to draw the 100 or so scholars, researchers, technologists, and journalists who showed up for the start of a group being convened through the Reynolds Journalism Institute and several sponsors of the event.
What particularly strikes me is how many of the philosophies of the IndieWeb movement and tools developed by it are applicable to some of the problems that online news faces. I suspect that if more journalists were practicing members of the IndieWeb and used their sites not only for collecting and storing the underlying data upon which they base their stories, but to publish them as well, then some of the (future) archival process may be easier to accomplish. I’ve got so many disparate thoughts running around my mind after the first day that it’ll take a bit of time to process before I write out some more detailed thoughts.
Twitter List for the Conference
As a reminder to those attending, I’ve accumulated a list of everyone who’s tweeted with the hashtag #DtMH2016, so that attendees can more easily follow each other as well as communicate online following our few days together in Los Angeles. Twitter also allows subscribing to entire lists too if that’s something in which people have interest.
Archiving the day
It seems only fitting that an attendee of a conference about saving and archiving digital news would make a reasonable attempt to archive some of his experience, right?! Toward that end, below is an archive of my tweetstorm during the day, marked up with microformats and including hovercards for the speakers with appropriate available metadata. For those interested, I used a fantastic web app called Noter Live to capture, tweet, and more easily archive the stream.
Note that in many cases my tweets don’t reflect direct quotes of the attributed speaker, but are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many original words of the participant as possible. Typically, for speed, there wasn’t much editing of these notes. I’m also attaching .m4a audio files of most of the audio for the day (apologies for shaky quality as it’s unedited) which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it. Presumably they will release the video on their website for a more immersive experience.
If you prefer to read the stream of notes in the original Twitter format, so that you can like/retweet/comment on individual pieces, this link should give you the entire stream. Naturally, comments are also welcome below.
Audio Files
Below are the audio files for several sessions held throughout the day.
Greetings and Keynote
Greetings: Edward McCain, digital curator of journalism, Donald W. Reynolds Journalism Institute (RJI) and University of Missouri Libraries and Ginny Steel, university librarian, UCLA
Keynote: Digital salvage operations — what’s worth saving? given by Hjalmar Gislason, vice president of data, Qlik
Why save online news? and NewsScape
Panel: “Why save online news?” featuring Chris Freeland, Washington University; Matt Weber, Ph.D., Rutgers, The State University of New Jersey; Laura Wrubel, The George Washington University; moderator Ana Krahmer, Ph.D., University of North Texas
Presentation: “NewsScape: preserving TV news” given by Tim Groeling, Ph.D., UCLA Communication Studies Department
Born-digital news preservation in perspective
Speaker: Clifford Lynch, Ph.D., executive director, Coalition for Networked Information on “Born-digital news preservation in perspective”
While attending the upcoming conference Dodging the Memory Hole 2016: Saving Online News later this week, I’ll make an attempt to live Tweet as much as possible. (If you’re following me on Twitter on Thursday and Friday and find me too noisy, try using QuietTime.xyz to mute me on Twitter temporarily.) I’ll be using Kevin Marks‘ excellent Noter Live web app to both send out the tweets as well as to store and archive them here on this site thereafter (kind of like my own version of Storify.)
In getting ramped up to live Tweet it, it helps significantly to have a pre-existing list of attendees (and remote participants) talking about #DtMH2016 on Twitter, so I started creating a Twitter list by hand. I realized that it would be nice to have a little bot to catch others as the week progresses. Ever lazy, I turned to IFTTT.com to see if something already existed, and sure enough there’s a Twitter search with a trigger that will allow one to add people who mention a particular hashtag to a Twitter list automatically.
Feel free to follow or subscribe to the list as necessary. Hopefully this will make attending the conference more fruitful for those there live as well as remote.
Not on the list? Just tweet a (non-private) message with the conference hashtag: #DTMH2016 and you should be added to the list shortly.
Lazy like me? Click the bird to tweet: “I’m attending #DtMH2016 @rji | Dodging the Memory Hole 2016: Saving Online News http://ctt.ec/5RKt2+”
IFTTT Recipe for Creating Twitter Lists of Conference Attendees
For those interested in creating their own Twitter lists for future conferences (and honestly the hosts of all conferences should do this as they set up their conference hashtag and announce the conference), below is a link to the ifttt.com recipe I created for this, but which can be modified for use by others.
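For those who would rather script this themselves than rely on IFTTT, the same idea is only a few lines against Twitter's API. Here's a rough sketch assuming Twitter's v1.1 search and lists endpoints via the twit Node library; the credentials, list slug, and screen name are placeholders.

// Sketch: add anyone tweeting the conference hashtag to a Twitter list.
const Twit = require('twit');

const T = new Twit({
  consumer_key: '...', consumer_secret: '...',
  access_token: '...', access_token_secret: '...',
});

async function addHashtagUsersToList(hashtag, ownerScreenName, listSlug) {
  // Find recent tweets mentioning the hashtag.
  const { data } = await T.get('search/tweets', { q: hashtag, count: 100 });
  for (const tweet of data.statuses) {
    // Add each author to the list; re-adding an existing member is effectively a no-op.
    await T.post('lists/members/create', {
      slug: listSlug,
      owner_screen_name: ownerScreenName,
      screen_name: tweet.user.screen_name,
    });
  }
}

addHashtagUsersToList('#DtMH2016', 'your_screen_name', 'dtmh2016');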
Naturally, it would also be nice if, as people registered for conferences, they were asked for their Twitter handles and websites so that the information could be used to create such online lists to help create longer lasting relationships both during the event and afterwards as well. (Naturally providing these details should be optional so that people who wish to maintain their privacy could do so.)
In the past few weeks, I've seen dozens of news outlets publish multi-paragraph excerpts of speeches from Donald Trump and have been appalled that I was unable to read them in any coherent way. I could not honestly follow or discern any coherent thought or argument in the majority of them. I was a bit shocked because in listening to him, he often sounds like he has some kind of point, though he seems to be spouting variations on one of ten one-liners he's been using for over a year now. There's apparently a flaw in our primal reptilian brains that seems to be tricking us into thinking that there's some sort of substance in his speech when there honestly is none. I'm going to have to spend some time reading more on linguistics and cognitive neuroscience. Maybe Steven Pinker knows of an answer?
The situation got worse this week as I turned to news sources for fact-checking of the recent presidential debate. While it’s nice to have web-based annotation tools like Genius[1] and Hypothes.is[2] to mark up these debates, it becomes another thing altogether to understand the meaning of what’s being said in order to actually attempt to annotate it. I’ve included some links so that readers can attempt the exercise for themselves.
Recent transcripts (some with highlights/annotations):
It’s been a while since Americans were broadly exposed to actual doubletalk. For the most part our national experience with it has been a passing curiosity highlighted by comedians.
dou·ble-talk | ˈdəblˌtôk | n. (North American)
1. a deliberately unintelligible form of speech in which inappropriate, invented or nonsense syllables are combined with actual words. This type of speech is commonly used to give the appearance of knowledge and thereby confuse, amuse, or entertain the speaker's audience.
2. another term for doublespeak
see also n. doubletalk [3]
Since the days of vaudeville (and likely before), comedians have used doubletalk to great effect on stage, in film, and on television. Some comedians who have historically used the technique as part of their acts include Al Kelly, Cliff Nazarro, Danny Kaye, Gary Owens, Irwin Corey, Jackie Gleason, Sid Caesar, Stanley Unwin, and Reggie Watts. I’m including some short video clips below as examples.
A well-known, if foreshortened, form of it was used by Dana Carvey in his Saturday Night Live performances caricaturing George H.W. Bush, relying on a few standard catchphrases with pablum in between: "Not gonna do it…", "Wouldn't be prudent at this juncture", and "Thousand Points of Light…". These snippets, in combination with some creative hand gestures (pointing, lacing fingers together) and a voice melding Mr. Rogers and John Wayne, were the simple constructs that largely transformed a diminutive comedian convincingly into a president.
Doubletalk also has a more “educated” sibling known as technobabble. Engineers are sure to recall a famous (and still very humorous) example of both doubletalk and technobabble in the famed description of the Turboencabulator.[4] (See also, the short videos below.)
Doubletalk comedy examples
Al Kelly on Ernie Kovaks
Sid Caesar
Technobabble examples
Turboencabulator
Rockwell Turbo Encabulator Version 2
Politicobabble
And of course doubletalk and technobabble have closely related cousins named doublespeak and politicobabble. These are far more dangerous than the others because they cross the line from comedy into seriousness and are used by people who make decisions affecting hundreds of thousands to millions, if not billions, of people on the planet. I'm sure an archeo-linguist might be able to discern where exactly politicobabble emerged and managed to evolve into a non-comedic form of speech which people manage to take far more seriously than its close ancestors. One surely suspects some heavy influence from George Orwell's corpus of work:
The term "doublespeak" probably has its roots in George Orwell's book Nineteen Eighty-Four.[5] Although the term is not used in the book, it is a close relative of one of the book's central concepts, "doublethink". Another variant, "doubletalk", also referring to deliberately ambiguous speech, did exist at the time Orwell wrote his book, but the usage of "doublespeak" as well as of "doubletalk" in the sense emphasizing ambiguity clearly postdates the publication of Nineteen Eighty-Four. Parallels have also been drawn between doublespeak and Orwell's classic essay Politics and the English Language,[6] which discusses the distortion of language for political purposes.
While politicobabble is nothing new, I did find a very elucidating passage from the 1992 U.S. Presidential Election cycle which seems to be a major part of the Trump campaign playbook:
Repetition of a meaningless mantra is supposed to empty the mind, clearing the way for meditation on more profound matters. This campaign has achieved the first part. I’m not sure about the second.
Candidates are now told to pick a theme, and keep repeating it until polls show it's not working, at which point the theme vanishes and another takes its place.
The mantra-style repetition of the theme of the week, however, leaves the impression that Teen Talk Barbie has acquired some life-size Campaign Talk Ken dolls. Pull the string and you get: ‘Congress is tough,’ ‘worst economic performance since the Depression,’ or ‘a giant sucking sound south of the border.’
A number of words and phrases, once used to express meaningful concepts, are becoming as useful as ‘ommm’ in the political discourse. Still, these words and phrases have meanings, just not the ones the dictionary originally intended.
In the continuation of the article, Jacobs goes on to give a variety of examples of the term as well as a “translation” guide for some of the common politicobabble words from that particular election. I’ll leave it to the capable hands of others (perhaps in the comments, below?) to come up with the translation guide for our current political climate.
The interesting evolutionary change I'll note for the current election cycle is that Trump hasn't delved deeply enough into any of his themes to significantly offend anyone. This has allowed him to stay with the dozen or so themes he started out using, and he therefore hasn't needed to change them as in campaigns of old.
Filling in the Blanks
These forms of pseudo-speech are all meant to fool us into thinking that something of substance is being discussed and that a conversation is happening, when in fact nothing is really being communicated at all. Most of the intended meaning and reaction to such speech seems to stem from the demeanor of the speaker as well as, in part, from the reaction of the surrounding interlocutors and audience. In reading Donald Trump transcripts, an entirely different meaning (or lack thereof) is more quickly apparent because the surrounding elements which prop up the narrative have been completely stripped away. In a transcript version, gone is the hypnotizing element of the crowd which is vehemently sure that the emperor is truly wearing clothes.
In many of these transcripts, in fact, I find so little is being said that the listener is actually being forced to piece together the larger story in their head. Being forced to fill in the blanks in this way leaves too much of the communication up to the listener who isn’t necessarily engaged at a high level. Without more detail or context to understand what is being communicated, the listener is far more likely to fill in the blanks to fit a story that doesn’t create any cognitive dissonance for themselves — in part because Trump is usually smiling and welcoming towards his adoring audiences.
One will surely recall that Trump even wanted Secretary Clinton to be happy during the debate when he said, “Now, in all fairness to Secretary Clinton — yes, is that OK? Good. I want you to be very happy. It’s very important to me.” (This question also doubles as an example of a standard psychological sales tactic of attempting to get the purchaser to start by saying ‘yes’ as a means to keep them saying yes while moving them towards making a purchase.)
His method of communicating by leaving large holes in his meaning reminds me of the way our brain smooths out information as indicated in this old internet meme[9]:
I cdn’uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Scuh a cdonition is arpppoiatrely cllaed typoglycemia.
I'm also reminded of the biases and heuristics research carried out in part (and the remainder cited) by Daniel Kahneman in his book Thinking, Fast and Slow[10] in which he discusses the mechanics of how system 1 and system 2 work in our brains. Is Trump taking advantage of the deficits of language processing in our brains, something akin to system 1 biases, to win large blocks of votes? Is he creating a virtual real-time Choose-Your-Own-Adventure to subvert the laziness of the electorate? Kahneman would suggest that the combination of what Trump does say and what he doesn't leaves it up to every individual listener to create their own story. Their system 1 is going to default to the easiest and most palatable one available to them: a happy story that fits their own worldview and is likely to encourage them to support Trump.
Ten Word Answers
As an information theorist, I know all too well that there must be a ‘linguistic Shannon limit’ to the amount of semantic meaning one can compress into a single word. [11] One is ultimately forced to attempt to form sentences to convey more meaning. But usually the less politicians say, the less trouble they can get into — a lesson hard won through generations of political fighting.
I'm reminded of a scene from The West Wing television series. Season 4, episode 6, "Game On," which aired on October 30, 2002 on NBC, has a poignant moment (video clip below) which is germane to our subject: [12]
Moderator: Governor Ritchie, many economists have stated that the tax cut, which is the centrepiece of your economic agenda, could actually harm the economy. Is now really the time to cut taxes?
Governor Ritchie, R-FL: You bet it is. We need to cut taxes for one reason – the American people know how to spend their money better than the federal government does.
Moderator: Mr. President, your rebuttal.
President Bartlet: There it is…
That’s the 10 word answer my staff’s been looking for for 2 weeks. There it is.
10 word answers can kill you in political campaigns — they’re the tip of the sword.
Here’s my question: What are the next 10 words of your answer?
“Your taxes are too high?” So are mine…
Give me the next 10 words: How are we going to do it?
Give me 10 after that — I’ll drop out of the race right now.
Every once in a while — every once in a while, there’s a day with an absolute right and an absolute wrong, but those days almost always include body counts. Other than that there aren’t very many un-nuanced moments in leading a country that’s way too big for 10 words.
I’m the President of the United States, not the president of the people who agree with me. And by the way, if the left has a problem with that, they should vote for somebody else.
As someone who studies information theory and complexity theory and even delves into sub-topics like complexity and economics, I can agree wholeheartedly with the sentiment. Though again, here I can also see the massive gaps between system 1 and 2 that force us to want to simplify things down to such a base level that we don’t have to do the work to puzzle them out.
(And yes, that is Jennifer Aniston's father playing the moderator.)
One can't help but wonder why Mr. Trump never seems to have gone past the first ten words. Is it because he isn't capable? Isn't interested? Or does he instinctively know better? It would seem that he's been doing business by using the uncertainty inherent in his speech for decades, always operating on what he meant (or thought he wanted to mean) rather than on what the other party heard and thought they understood. If it ain't broke, don't fix it.
Idiocracy or Something Worse?
In our increasingly specialized world, people eventually have to give in and quit doing some tasks that everyone used to do for themselves. Yesterday I saw a lifeworn woman in her 70s pushing a wheeled wire basket with a 5-gallon container of water from the store to her home. As she shuffled along, I contemplated Thracian people from the fourth century BCE doing the same thing, except they likely carried amphorae, possibly with a yoke, and without the benefit of the $10 manufactured custom shopping cart. Twenty thousand years before that, people were still carrying their own water, but possibly without even the benefit of earthenware containers. Things in human history have changed very slowly for the most part, but as we continually sub-specialize further and further, we need to remember that we can't give up one of the primary functions that makes us human: the ability to think deeply and analytically for ourselves.
I suspect that far too many people are too wrapped up in their own lives and problems to listen to more than the ten word answers our politicians are advertising to us. We need to remember to ask for the next ten words and the ten after that.
Otherwise there are two extreme possible outcomes:
We’re either at the beginning of what Mike Judge would term Idiocracy. [13]
Here, one is tempted to quote George Santayana’s famous line (from The Life of Reason, 1905), “Those who cannot remember the past are condemned to repeat it.” However, I far prefer the following as more apropos to our present national situation:
When the situation was manageable it was neglected, and now that it is thoroughly out of hand we apply too late the remedies which then might have effected a cure. There is nothing new in the story. It is as old as the sibylline books. It falls into that long, dismal catalogue of the fruitlessness of experience and the confirmed unteachability of mankind. Want of foresight, unwillingness to act when action would be simple and effective, lack of clear thinking, confusion of counsel until the emergency comes, until self-preservation strikes its jarring gong–these are the features which constitute the endless repetition of history.
Sir Winston Leonard Spencer-Churchill (—), a British statesman, historian, writer and artist,
in House of Commons, 2 May 1935, after the Stresa Conference, in which Britain, France and Italy agreed—futilely—to maintain the independence of Austria.
If Cliff Nazarro comes back to run for president, I hope no one falls for his joke just because he wasn't laughing as he acted it out. If his instructions for fixing the wagon (America) are any indication, the voters who are listening and making the repairs will be in severe pain.
In an effort to provide easier commuting access for a broader cross-section of Homebrew members, we met last night at Yahoo's primary offices at 11995 W. Bluff Creek Drive, Playa Vista, CA 90094. We hope to alternate meetings of the Homebrew Website Club between the East and West sides of Los Angeles as we go forward. If anyone has additional potential meeting locations, we're always open to suggestions as well as assistance.
We had our largest RSVP list to date, though some had last minute issues pop up and one sadly had trouble finding the location (likely due to a Google map glitch).
Angelo and Chris met before the quiet writing hour to discuss some general planning for future meetings as well as the upcoming IndieWebCamp in LA in November. Details and help for arrangements for out of town attendees should be posted shortly.
We sketched out a way to help Srikanth IndieWeb-ify not only his own site, but potentially Katie Couric's Yahoo!-based news site as well, along with the pros/cons of workflows for journalists in general. We also considered some potential pathways for bolting webmentions onto websites (like Tumblr/WordPress) which utilize Disqus for their commenting system. We worked through the details of webmentions and a bit of micropub for his benefit.
Srikanth discussed some of the history and philosophy behind why Tumblr didn't have a more "traditional" native commenting system. The point was generally to socially discourage negativity, spamming, and abuse by forcing people to post their comments front and center on their own site (and not just in the "comments" of the receiving site), thereby making any negativity redound to their own reputation rather than just to the receiving page of the target. Most social media sites hide (or make hard to search/find) the abusive behavior of their users, while allowing them to appear better/nicer on their easier-to-find public-facing personas.
Before closing out the meeting officially, we stopped by the front lobby where two wonderful and personable security guards (one a budding photographer) not only helped us with a group photo, but managed to help us escape the parking lot!
I think it’s agreed we all had a great time and look forward to more progress on projects, more good discussion, and more interested folks at the next meeting. Srikanth was so amazed at some of the concepts, it’s possible that all of Yahoo! may be IndieWeb-ified by the end of the week. 🙂
Ever with grand aspirations to do as good a job as the illustrious Kevin Marks, we tried some livetweeting with Noter Live. Alas, the discussion quickly became so consuming that the effort was abandoned in favor of both passion and fun. Hopefully some of the salient points were captured above in better form anyway.
I only use @drupal when I want to make money. (Replying to why his personal site was on @wordpress.) #
(This CMS comment may have been the biggest laugh of the night, though the tone captured here (and the lack of context), doesn’t do the comment any justice at all.)
For those who missed the first class of Introduction to Complex Analysis on 09/20/16, I'm attaching a link to the downloadable version of the notes in Livescribe's Pencast .pdf format. This is a special .pdf file, but it's a bit larger in size because it has an embedded audio file that is playable with a recent version of Adobe Reader (X or above) installed. (This means that to get the most out of the file you have to download it and open it in Reader X to get the audio portion. You can view the written portion in most clients; you'll just be missing out on all the real fun and value of the full file.) [Editor's note: Don't we all wish Dr. Tao were recording his lectures this way.]
With these notes, you should be able to toggle the settings in the file to read and listen to the notes almost as if you were attending the class live. I’ve done my best to write everything exactly as it was written on the board and only occasionally added small bits of additional text.
If you haven’t registered yet, you can watch the notes as if you were actually in the class and still join us next Tuesday night without missing a beat. There are over 25 people in the class not counting several I know who had to miss the first session.
Hope to see you then!
Viewing and Playing a Pencast PDF
Pencast PDF is a new format of notes and audio that can play in Adobe Reader X or above.
You can open a Pencast PDF as you would other PDF files in Adobe Reader X. The main difference is that a Pencast PDF can contain ink that has associated audio—called “active ink”. Click active ink to play its audio. This is just like playing a Pencast from Livescribe Online or in Livescribe Desktop. When you first view a notebook page, active ink appears in green type. When you click active ink, it turns gray and the audio starts playing. As audio playback continues, the gray ink turns green in synchronization with the audio. Non-active ink (ink without audio) is black and does not change appearance.
Audio Control Bar
Pencast PDFs have an audio control bar for playing, pausing, and stopping audio playback. The control bar also has jump controls, bookmarks (stars), and an audio timeline control.
Active Ink View Button
There is also an active ink view button. Click this button to toggle the “unwritten” color of active ink from gray to invisible. In the default (gray) setting, the gray words turn green as the audio plays. In the invisible setting, green words seem to write themselves on blank paper as the audio plays.
I’ve run across some of his work before, but I ran into some new material by Hector Zenil that will likely interest those following information theory, complexity, and computer science here. I hadn’t previously noticed that he refers to himself on his website as an “information theoretic biologist” — everyone should have that as a title, shouldn’t they? As a result, I’ve also added him to the growing list of ITBio Researchers.
If you’re not following him everywhere (?) yet, start with some of the sites below (or let me know if I’ve missed anything).
A common practice in the estimation of the complexity of objects, in particular of graphs, is to rely on graph- and information-theoretic measures. Here, using integer sequences with properties such as Borel normality, we explain how these measures are not independent of the way in which a single object, such as a graph, can be described. From descriptions that can reconstruct the same graph and are therefore essentially translations of the same description, we will see that not only is it necessary to pre-select a feature of interest where there is one when applying a computable measure such as Shannon Entropy, and to make an arbitrary selection where there is not, but that more general properties, such as the causal likeliness of a graph as a measure (opposed to randomness), can be largely misrepresented by computable measures such as Entropy and Entropy rate. We introduce recursive and non-recursive (uncomputable) graphs and graph constructions based on integer sequences, whose different lossless descriptions have disparate Entropy values, thereby enabling the study and exploration of a measure's range of applications and demonstrating the weaknesses of computable measures of complexity.
Subjects: Information Theory (cs.IT); Computational Complexity (cs.CC); Combinatorics (math.CO)
Cite as: arXiv:1608.05972 [cs.IT] (or arXiv:1608.05972v4 [cs.IT])
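To make the abstract's central point a little more concrete, here is a toy illustration of my own (not taken from the paper): two lossless descriptions of the same five-node star graph, an adjacency-matrix upper triangle and an edge list, give noticeably different Shannon entropy values even though either one reconstructs the identical graph.

// Toy illustration: Shannon entropy (bits per symbol) of two lossless descriptions
// of the same 5-node star graph (node 0 connected to nodes 1 through 4).
function shannonEntropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  return Object.values(counts).reduce((h, c) => {
    const p = c / s.length;
    return h - p * Math.log2(p);
  }, 0);
}

// Description A: the upper triangle of the adjacency matrix, row by row.
const adjacency = '1111' + '000' + '00' + '0';
// Description B: the same graph written as an edge list.
const edgeList = '01,02,03,04';

console.log(shannonEntropy(adjacency).toFixed(3)); // ≈ 0.971
console.log(shannonEntropy(edgeList).toFixed(3));  // ≈ 2.300 (same graph, different value)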
YouTube
Yesterday he also posted two new introductory videos to his YouTube channel. There's nothing overly technical here, but they're nice short productions that introduce some of his work. (I wish more scientists communicated like this.) I'm hoping he'll post them to his blog and write a bit more there in the future as well.
Cross-boundary Behavioural Reprogrammability Reveals Evidence of Pervasive Turing Universality by Jürgen Riedel, Hector Zenil
Preprint available at http://arxiv.org/abs/1510.01671
Ed.: 9/7/16: Updated videos with links to relevant literature
If you view a single photo permalink page, the following bookmarklet will extract the permalink (trimmed), photo jpg URL, and photo caption and copy them into a text note, suitable for posting as a photo that’s auto-linked:
javascript:n=document.images.length-1;s=document.images[n].src;s=s.split('?');s=s[0];u=document.location.toString().substring(0,39);prompt('Choose "Copy ⌘C" to copy photo post:',s+' '+u+'\n'+document.images[n].alt.toString().replace(RegExp(/\.\n(\.\n)+/),'\n'))
Any questions, let me know! –Tantek
If you want an easy drag-and-drop version, just drag the button below into your browser’s bookmark bar.
Editor's note: Though we'll try to keep the code in this bookmarklet updated, the most recent version can be found on the IndieWeb wiki through the link above.
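For anyone curious what the one-liner actually does before installing it, here is the same logic unpacked into a readable, commented form (functionally equivalent to the bookmarklet above, including its 39-character trim of the permalink):

// The bookmarklet above, expanded for readability.
(function () {
  // Grab the last image on the page, which on a photo permalink page is the photo itself.
  var n = document.images.length - 1;
  var img = document.images[n];

  // Strip any query string from the photo's URL.
  var src = img.src.split('?')[0];

  // Trim the page's permalink down to its first 39 characters.
  var permalink = document.location.toString().substring(0, 39);

  // Use the image's alt text as the caption, collapsing runs of ".\n" lines.
  var caption = img.alt.toString().replace(RegExp(/\.\n(\.\n)+/), '\n');

  // Offer the assembled text note for copying.
  prompt('Choose "Copy ⌘C" to copy photo post:', src + ' ' + permalink + '\n' + caption);
})();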
There is a multi-lingual low-budget movie shooting across the street from me in which Trump is terrorizing some Spanish speaking gardeners!
It doesn’t appear to be a comedy and Trump is grumbling as if he’s a Zombie!
There were some more-than-steamy scenes (shot behind the neighbors’ bushes) which are NSFW, so they won’t appear here.
I won't spoil the ending, but the last shot I saw involved the cinematographer lying on the ground shooting up at a gardener with a shovel standing over him menacingly.
We met at Charlie’s Coffee House, 266 Monterey Road, South Pasadena, CA, where we stayed until closing at 8:00. Deciding that we hadn’t had enough, we moved the party (South Pasadena rolls up their sidewalks early) over to the local Starbucks, 454 Fair Oaks Ave, South Pasadena, CA where we stayed until they closed at 11:00pm.
Quiet Writing Hour
Angelo manned the fort alone with aplomb while building intently. If I’m not mistaken, he did use my h-card to track down my phone number to see what was holding me up, so as they say in IRC: h-card++!
Needing no introductions this week, Angelo launched us off with a relatively thorough demo of his Canopy platform which he’s built from the ground up in python! Starting from an empty folder on a host with a domain name, he downloaded and installed his code directly from Github and spun up a completely new version of his site in under 2 minutes. In under 20 minutes of some simple additional downloads and configuration of a few files, he also had locations, events, people and about modules up and running. Despite the currently facile appearance of his website, there’s really a lot of untapped power in what he’s built so far. It’s all available on Github for those interested in playing around; I’m sure he’d appreciate pull requests.
Along the way, I briefly demoed some of the functionality of Kevin Marks’ deceptively powerful Noterlive web app for not only live tweeting, but also owning those tweets on one’s own site in a simple way after the fact (while also automatically including proper markup and microformats)! I also ran through some of the overall functionality of my Known install with a large number of additional plugins to compare and contrast UX/UI with respect to Canopy.
We also discussed a bit of Angelo’s recent Indieweb Graph network crawling project, and I took the opportunity to fix a bit of the representative h-card on my site. (Angelo, does a new crawl appear properly on lahacker.net now?)
Before leaving Charlie's we did manage to remember to take a group photo this time around. Not having spent enough time chatting over the past few weeks, we decamped to a local Starbucks and continued our conversation along with some additional brief demos and discussion of other itches for future building.
We also spent a few minutes discussing the upcoming IndieWebCamp LA logistics for November as well as outreach to the broader Los Angeles area dev communities. If you’re interested in attending, please RSVP. If you’d like to volunteer or help sponsor the camp, please don’t hesitate to contact either of us. I’m personally hoping to attend DrupalCamp LA this weekend while wearing a stylish IndieWebCamp t-shirt that’s already on its way to me.
IndieWebCamp T-shirt
Next Meeting
In keeping with the schedule of the broader Homebrew movement, we're already committed to our next meeting on September 7. It's tentatively at the same location unless a more suitable one comes along prior to then. Details will be posted to the wiki in the next few days.
Thanks for coming everyone! We’ll see you next time.
Live Tweets Archive
Though not as great as the notes that Kevin Marks manages to put together, we did manage to make good use of noterlive for a few supplementary thoughts:
This morning while breezing through my Woodwind feed reader, I ran across a post by Rick Mendes with the hashtags #readlater and #readinglist which put me down a temporary rabbit hole of thought about reading-related post types on the internet.
I’m obviously a huge fan of reading and have accounts on GoodReads, Amazon, Pocket, Instapaper, Readability, and literally dozens of other services that support or assist the reading endeavor. (My affliction got so bad I started my own publishing company last year.)
READ LATER is an indication on (or relating to) a website that one wants to save the URL to come back and read the content at a future time.
I started a page on the IndieWeb wiki to define read later where I began writing some philosophical thoughts. I decided it would be better to post them on my own site instead and simply link back to them. As a member of the Indieweb my general goal over time is to preferentially quit using these web silos (many of which are listed on the referenced page) and, instead, post my reading related work and progress here on my own site. Naturally, the question becomes, how does one do this in a simple and usable manner with pretty and reasonable UX/UI for both myself and others?
Current Use
Currently I primarily use a Pocket bookmarklet to save things (mostly newspaper articles, magazine pieces, and blog posts) for reading later, and/or the like/favorite functionality in Twitter in combination with an IFTTT recipe to save the URL in the tweet to my Pocket account. I then regularly visit Pocket to speed-read through the articles. While Pocket allows downloading (some) of one's data in this regard, I'm exploring options to bring ownership of this workflow into my own site.
For more academic-leaning content (read: journal articles), I tend to rely on an alternate Mendeley-based workflow, which also starts with an easy-to-use bookmarklet.
I’ve also experimented with bookmarking a journal article and using hypothes.is to import my highlights from that article, though that workflow has a way to go to meet my personal needs in a robust way while still allowing me to own all of my own data. The benefit is that fixing it can help more than just myself while still fitting into a larger personal workflow.
Brainstorming
A Broader Reading (Parent) Post-type
Philosophically, a read later post-type could be considered similar to a (possibly) unshared or private bookmark with possible additional metadata, like progress, date read, notes, and annotations to be added after the fact, which then technically makes it a read post type.
A potential workflow viewed over time might be: read later >> bookmark >> notes/annotations/marginalia >> read >> review. This kind of continuum of workflow might be able to support a slightly more complex overall UI for a more simplified reading post-type in which these others are all sub-types. One could then make a single UI for a reading post type with fields and details for all of the sub-cases. Being updatable, the single post could carry all the details of one’s progress.
Indieweb encourages simplicity (DRY) and having the fewest post-types possible, which I generally agree with, but perhaps there’s a better way of thinking of these several types. Concatenating them into one reading type with various data fields (and the ability of them to be public/private) could allow all of the subcategories to be included or not on one larger and more comprehensive post-type.
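As a concrete strawman of what such a combined post might carry, here is a sketch of the kind of data a single, updatable "read" post could accumulate over time; the property names are my own placeholders rather than any agreed-upon vocabulary.

// Sketch: one evolving "read" post whose optional fields cover the whole continuum.
// Property names are illustrative placeholders, not an existing standard.
const readPost = {
  'read-of': 'https://example.com/some-article',    // what is being read (akin to bookmark-of)
  visibility: { bookmark: 'public', notes: 'private', review: 'public' },
  'read-later': '2016-08-22T09:15:00-07:00',        // when it was queued
  bookmarked: '2016-08-22T09:15:00-07:00',
  progress: 0.6,                                    // fraction (or page/location) read so far
  notes: [
    { at: 'p. 12', text: 'Compare with Hidalgo on information growth.' },
  ],
  read: '2016-08-29T21:40:00-07:00',                // when finished
  review: { rating: 4, text: 'Well worth the time; full review to follow.' },
};

// Leaving a field out (or marking it private) simply hides that sub-type,
// so the same structure degrades gracefully to a plain bookmark.
console.log(JSON.stringify(readPost, null, 2));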
Examples
Not including one subsection (or making it private) would simply prevent it from showing; thus one could have a traditional bookmark post by leaving off the read later, read, and review sub-types and/or data.
As another example, I could include the data for read later, bookmark, and read, but leave off data about what I highlighted and/or sub-sections of notes I prefer to remain private.
A Primary Post with Webmention Updates
Alternately, one could create a primary post (potentially a bookmark) for the thing one is reading, and then use further additional posts with webmentions on each (to the original) thereby adding details to the original post about the ongoing progress. In some sense, this isn’t too far from the functionality provided by GoodReads with individual updates on progress with brief notes and their page that lists the overall view of progress. Each individual post could be made public/private to allow different viewerships, though private webmentions may be a hairier issue. I know some are also experimenting with pushing updates to posts via micropub and other methods, which could be appealing as well.
This may be cumbersome over time, but could potentially be made to look something like the GoodReads UI below, which seems very intuitive. (Note that it’s missing any review text as I’m currently writing it, and it’s not public yet.)
Overview of reading progress
Other Thoughts
Ideally, I'd like to better distinguish between something that has merely been bookmarked and something that has been read (with dates for both the bookmarking and the reading), as well as to add notes and highlights relating to the article. Something potentially akin to Devon Zuegel's "Notes" tab (built on a custom script for Evernote and Tumblr) seems somewhat promising as a cross between a simple reading list (or linkblog) and a commonplace book for academic work, but doesn't necessarily leave room for longer book reviews.
I’ll also need to consider the publishing workflow, in some sense as it relates to the reverse chronological posting of updates on typical blogs. Perhaps a hybrid approach of the two methods mentioned would work best?
Kindle Notes and Highlights are now showing up as a beta feature in GoodReads
Comments
I’ll keep thinking about the architecture for what I’d ultimately like to have, but I’m always open to hearing what other (heavy) readers have to say about the subject and the usability of such a UI.
Please feel free to comment below, or write something on your own site (which includes the URL of this post) and submit your URL in the field provided below to create a webmention in which your post will appear as a comment.