I’ve been wringing my hands for a week or more since people started posting their IndieWeb Commitments for 2017, and even more so once I realized that it wasn’t a commitment to ship something within the year 2017 but to ship something by New Year’s Day 2017. (I suppose I’ll take my prior, more ambitious thoughts and turn them into an IndieWeb resolution (or two) for 2018.)
But since I tweak something or other in small increments every week or two anyway, it shouldn’t be too hard.
2017 Commitment
Since it’s been an itch for a while, and because I’ve been slowly owning more and more of my web-centric activity here, to the point that my longer article-length content is being swamped by status updates and other smaller “digital exhaust,” I commit to:
Fix my site’s subscription/mail functionality so that I can better control what current subscribers get and allow for more options for future subscribers.
I’d like to allow long time subscribers to keep receiving the longer form thought out content they’ve been getting without overwhelming them with the other material (status updates, photos, reading updates, etc.) which is potentially more interesting to only me or a much smaller subsection of my readers. I’ve been doing some of this manually for a while, but it’s time to fix it. For example, I’d like to allow people to subscribe only to longer-form articles or to status updates/notes, or to all of the above.
Stretch Goal
As a stretch goal, I’d started setting up a monthly email newsletter for even less frequent updates back in the summer, and it’s long overdue to not only finish it off, but to turn it on and ship one. Why? Because we all know that people love end of the year recap stories…
Ah, Vine. I loved the idea of a platform for sharing tiny video moments. It was truly a platform for some really amazing things. Personally, I didn't make ver...
There are some additional methods and discussion here.
As part of my evolving IndieWeb experience of owning all of my own internet-based social data, last year I wanted a “quick and dirty” method for owning and displaying all of my Twitter activity before embarking on a much more comprehensive method of owning all of my past tweets. I expected even a quick method to be far harder than the ten-minute operation it turned out to be.
Back in early October, I had also replied to a great post by Jay Rosen when he redesigned his own blog PressThink. I saw a brief response from him on Twitter at the time, but didn’t get a notification from him about his slightly longer reply, which I just saw over the weekend:
So, for his benefit as well as others who are interested in the ability to do something like this quickly and easily, I thought I’d write up a short outline of what I’d originally done so that without spending all the time I did, others can do the same or something similar depending on their needs.
Near the bottom of the page you should see a “Your Twitter archive” section
See the Request your archive button? Click it.
After a (hopefully) short wait, a link to your archive should show up in the email inbox associated with the account. Download it.
Congratulations, you now own all of your tweets to date!
You can open the index.html file in the downloaded folder to view all of your tweets locally on your own computer with your browser.
Display your Twitter archive
The best part is that now that you’ve got all your tweets downloaded, you can almost immediately serve them from your own server without any real modification.
Simply create a folder named twitter on your server (make it accessible by using the same permissions as other equivalent files) and upload all the files from your download into it. You’re done. It’s really that simple!
In my case I created a subfolder within my WordPress installation, named it “twitter”, and uploaded the files. Once this is done, you should be able to go to the URL http://example.com/twitter and view them.
Alternatively, one could set up a subdomain (e.g., http://twitter.example.com) and serve them from there as well. You can change the URL by changing the name of the folder. As an alternate example, Kevin Marks uses the following: http://www.kevinmarks.com/tweets/.
When you’re done, don’t forget to set up a link from your website (perhaps in the main menu?) so that others can benefit from your public archive. Mine is tucked in under the “Blog” heading in my main menu.
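For instance, the menu entry itself can be a perfectly ordinary link; a minimal sketch, assuming the twitter folder name used above:

<!-- a plain anchor anywhere your theme allows menu or sidebar links -->
<a href="/twitter/">My Twitter archive</a>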
Caveats
Unfortunately, while you’ve now got a great little archive with some reasonable UI and even some very powerful search capabilities, most of the links in the archive direct back to the originals on Twitter and don’t provide direct permalinks within the archive. It’s also a static archive, so you’ve periodically got to re-download and upload it to keep your archive current. I currently update mine only on a quarterly basis, at least until I build a more comprehensive setup.
Current Setup
At the moment, I’m directly owning all of my Twitter activity on my social stream site, which is powered by Known, using the POSSE philosophy (Post on your Own Site, Syndicate Elsewhere). There I compose and publish all of my tweets and retweets (and even some likes) directly and then syndicate them to Twitter in real time. I’ve also built and documented a workflow for tweeting more quickly from my cell phone in combination with either the Twitter mobile app or their mobile site. (Longer posts here on BoffoSocko are also automatically syndicated to Twitter, originally with Jetpack and currently with Social Network Auto-Poster, which provides a lot more customization, so I also own all of that content directly too.)
You’ll notice that on both sites, when content has been syndicated, there’s a section at the bottom of the original posts that indicates to which services the content was syndicated along with permalinks to those posts. I’m using David Shanske’s excellent Syndication Links plugin to do this.
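For those curious about the underlying markup, such a section boils down to a set of links carrying the u-syndication microformat. A simplified sketch (the exact markup varies by plugin version, and the URLs here are placeholders):

<!-- syndication links within a post; each href points at the syndicated copy on a silo -->
<p>Also on:
<a class="u-syndication" rel="syndication" href="https://twitter.com/username/status/1234567890">Twitter</a>,
<a class="u-syndication" rel="syndication" href="https://www.facebook.com/username/posts/1234567890">Facebook</a>
</p>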
Ultimately, I’d like to polish the workflow a bit and post all of my shorter Twitter-like status updates from BoffoSocko.com, but I still have some work to do to better differentiate content so that my shorter form content doesn’t muddy up or distract from the people who prefer to follow my longer-form content. Based on his comment, I also suspect that this is the same semantic issue/problem that Jay Rosen has. I’d also like to provide separate feeds/subscription options so that people can more easily consume as much or as little content from my site as they’d like.
Next steps
For those who are interested in more comprehensive solutions for owning and displaying their tweets, I’ve looked into a few WordPress-based possibilities and like the following two, which could also potentially be modified for custom display:
Ozh’ Tweet Archiver (separately available on GitHub), with scripts (.csv, JSON) for importing past the 3,200-tweet limit imposed by the Twitter API. It also has a custom “Twitter” theme available, and there are blog posts with additional support and instructions. [1][2]
Both of these not only allow you to own and display your tweets, but they also automatically import new Tweets using the current API. Keep in mind that they use the PESOS philosophy (Post Elsewhere, Syndicate to your Own Site) which is less robust than POSSE, mentioned above.
I’ll note that a tremendous number of Twitter-related plugins in the WordPress plugin repository predate some of the major changes in Twitter’s API over the last year or two, and thus no longer work and are no longer supported, so keep this in mind if you explore other solutions.
Those with more coding ability, or those working on other CMS platforms, may appreciate the larger collection of thoughts and notes on the Twitter wiki page created by the IndieWeb community. [3]
Thoughts?
Do you own your own Tweets (either before or after-the-fact)? How did you do it? Feel free to tell others about your methods in the comments, or better yet, write them on your own site and send this post a webmention (see details below).
The IndieWeb movement is coding, collecting, and disseminating UI, UX, methods, and open source code at IndieWeb.org to help all netizens better control their online identities, communicate, and connect with others. We warmly invite you to join us.
After more than five years of operation, the Readability article bookmarking/read-it-later service will be shutting down after September 30…
I really wish I’d heard about this before September! And certainly before today… I know I used it fairly frequently in the early days of the service. I do remember that they had some nice functionality for sending articles to the Amazon Kindle too. I’m not sure how much data I may have lost in this particular shutdown, but I do wish I’d had a chance to back it up.
I am glad that bookmarks are one of the post types that I’m now saving by posting on my own site first though. For more of my thoughts on these post types, take a look at:
As of October 30, 2016, I’ve slowly but surely begun posting what I’m actively reading online to my blog.
I’ve refined the process a bit in the last couple of weeks, and am becoming relatively happy with the overall output. For those interested, below is the general process/workflow I’m using:
As I read a website, I use a browser extension (there’s also a bookmarklet available) linked to my Reading.am account to indicate that I’m currently reading a particular article.
I have an IFTTT.com applet that scrapes the RSS feed of my Reading.am account for new entries (in near real time) and creates a new draft post on my WordPress blog. I did have to change my IFTTT.com settings not to use their custom URL shortener, both to make things easier and to prevent potential future link rot.
Shortly after I’m done reading, I receive a notification that the draft post has been created, reminding me to (optionally) add my comments/thoughts. If necessary, I make any additional modifications or add tags to the post.
I publish the post; and
Optionally, I send POSSE copies to other silos like Facebook, Twitter, or Google+ to engage with other parts of my network.
Status updates of this type also include an oEmbed with a synopsis of the content if the bookmarked site supports it; otherwise, a blockquoted synopsis pulled from the site’s metadata is included.
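For context, a site that supports oEmbed advertises a discovery endpoint in its page head, which can then be queried for the embeddable synopsis. A minimal sketch with placeholder URLs:

<!-- oEmbed discovery link in the bookmarked page's head -->
<link rel="alternate" type="application/json+oembed"
href="https://example.com/oembed?url=https%3A%2F%2Fexample.com%2Farticle&format=json"
title="Article oEmbed profile" />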
Other near-term improvements may include custom coding something against the available Reading.am hooks to integrate directly with the WordPress Post Kinds plugin, using the URL post pattern http://www.yoursite.com/wp-admin/post-new.php?kind=read&kindurl=@url to shorten the workflow even further; Post Kinds automatically wraps the post data in the appropriate microformats. I also want to add a tidbit so that when I make my post, I ping the Internet Archive with the URL of the article I read so that it will be archived for future reference (hat tip to Jeremy Keith for giving me the idea at IndieWebCamp LA a few weeks ago).
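As a sketch of how that URL pattern might be wired into a bookmarklet (hypothetical; replace www.yoursite.com with your own domain):

<!-- drag a link like this to the bookmarks bar; clicking it on any article opens
a new "read" post on your own site pre-filled with that article's URL -->
<a href="javascript:window.open('http://www.yoursite.com/wp-admin/post-new.php?kind=read&kindurl='+encodeURIComponent(location.href));">Read on my site</a>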
I had originally played around with using the Post Kinds bookmarklet method directly, but this got in the way of the immediacy of reading the particular article for me. Using a PESOS method allows me to read and process the article a bit first before writing commentary or other details. I may also integrate a Hypothes.is-based workflow into this process, in which I use the Hypothes.is browser extension to highlight and annotate the article and then use the Hypothes.is Aggregator plugin to embed those thoughts into the post via shortcodes. The following post serves as a rough example of this, though the CSS for it could stand a bit of work: Chris Aldrich is reading WordPress Without Shame.
I was a bit surprised that Reading.am didn’t already natively support a WordPress pathway, though it has a custom setup for Tumblr as well as half a dozen other silos. Perhaps they’ll support WordPress in the future?
After having spent the weekend at IndieWebCamp Los Angeles, it somehow seems appropriate to have a “Voted post type” for the election today†. To do it I’m proposing the following microformats, an example of which can be found in the mark up of the post above. This post type is somewhat similar to both a note/status update and an RSVP post type with a soupçon of checkin.
Basic markup
<div class="h-entry">
<span class="p-voted">I voted</span>
in the <a href="http://example.com/election" class="u-voted-in">November 8th, 2016 Election</a>
</div>
Possible Voted values: I voted, I didn’t vote, I was disenfranchised, I was intimidated, I was apathetic, I pathetically didn’t bother to register
Send a Webmention to the election post of your municipality’s Registrar/Clerk/Records office as you would for a reply to any post.
You should include author information in your Voted post so the registrar knows who voted (and then send another Webmention so the voting page gets the update).
Here’s another example with explicit author name and icon, in case your site or blog does not already provide that on the page.
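(A minimal sketch following the usual p-author h-card pattern; the name and URLs are placeholders.)

<div class="h-entry">
<a class="p-author h-card" href="http://example.com"><img class="u-photo" src="http://example.com/photo.jpg" alt="" /> Jane Voter</a>:
<span class="p-voted">I voted</span>
in the <a href="http://example.com/election" class="u-voted-in">November 8th, 2016 Election</a>
</div>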
While I was updating Indieweb/site-deaths, I was reminded to download my TwitPic archive. It sold to Twitter almost two years ago this week and has been largely inactive since.
It includes some of the earliest photos I ever took and posted online via mobile phone. Looking at the quality, it’s interesting to see how far we’ve come. It’s also obvious why photo filters became so popular.
02/07/2009 UCLA is spanking Notre Dame on an early rainy morning basketball game.
05/07/2009 Beverly Hills Hotel
05/09/2009
05/17/2009 A gray sunset in Laguna Beach
01/20/2010 Breakfast with a Hopkins Student at One World Cafe
12/16/2009 Veggie Cafe with Rama Kunapuli
12/12/2009 A visit out to Jason Calacanis’ office
11/26/2009 Visit to In-N-Out Glendale
05/22/2009 Excellent breakfast at Athenaeum & tour of interesting microscopy lab at Caltech; only photo I got was this??
07/19/2008 Back down to the first floor @WholeFoods
07/17/2008 Dinner at Hugo’s
07/19/2008 Meat! (at Whole Foods Market Pasadena)
07/19/2008 The 2nd floor of @WholeFoods in Pasadena is bigger than most average grocery stores. Is that a restaurant over there?
05/16/2009 Hello Laguna Beach…
06/26/2009 Massive media zoo at UCLA for passing of Michael Jackson! 32 SAT trucks. LAPD army camped out.
11/13/2009 (The house on Dartmouth Drive that I bought.)
09/20/2008 UCLA vs Arizona at the Rose Bowl
09/13/2008 Steven Chan session at #drupalcampla
05/15/2009 (Art gallery in downtown LA)
01/06/2012
05/30/2009 Jason’s pre-wedding jitters finally tamed with a project: Papier Mache!
05/30/2009 Breakfast with @viperwriter on the day of his wedding.
11/10/2010 (Mike Miller’s UCLA math class)
03/13/2010 (A Johns Hopkins University Event in Los Angeles)
07/09/2009 In keeping with the zeitgeist, I’m enjoying birthday cupcakes for dessert!
06/28/2009 Nice chat with Bradley Whitford. Gave me autograph for Sonia:”Santos 2012?! – BW” Such a good guy.
06/26/2009 Michael Jackson media circus dying down. Only 9 satellite trucks at 12:42am PST outside UCLA Med
06/26/2009 Hanging out with Nobel Prize winner Martin Chalfie in front of my poster on GFP.
06/26/2009 Massive media zoo at UCLA for passing of Michael Jackson! 32 SAT trucks. LAPD army camped out.
06/06/2009 Afternoon snack with John Astin
06/06/2009 Breakfast at One World Cafe
06/05/2009 Having lunch with PM Forni at Gertrudes.
08/03/2008 Out for an evening constitutional in the cooling San Marino Breeze
05/31/2009 The first sunrise on the beginning of Jason & Molly’s marriage. Good luck kids!
05/31/2009 Rosy-fingered dawn touches the sky over Delaware at 5:30am.
05/01/2009 Can you tell how I know this BMW is driven by an IT professional? Taken on 110 S into downtown Los Angeles this morning.
10/06/2010 (A peek into the social media class at UCLA)
04/02/2009 Absorbing genius from Dr. Sol Golomb (as he teaches combinatorics)
02/27/2009 Hanging out with Communications Gods Andrew Viterbi, Sol Golomb, and Robert Gray
I recently started to own all my photos being posted to Flickr using POSSE. I just owned my first “like” coming back via Brid.gy! Thanks Brid.gy and IndieWeb!
Not a day goes by that I don’t run across a fantastic blog built or hosted on WordPress that looks gorgeous–WordPress does an excellent job of making this pretty easy to accomplish.
but…
Invariably the blog’s author has a generic avatar (blech!) instead of a nice, warm and humanizing photo of their lovely face.
Or, perhaps, as a user, you’ve always wondered how some people qualified to have their photo included with their comment while you were left as an anonymous looking “mystery person” or a randomized identicon, monster, or even an 8-bit pixelated blob? The secret the others know will be revealed momentarily.
Which would you prefer?
Somehow, knowing how to replace that dreadful randomized block with an actual photo seems too hard or too complicated. Why? In part, it’s because WordPress separated this functionality out into a decentralized service called Gravatar, which stands for Globally Recognized Avatar. In some sense this is an awesome idea, because people everywhere (and not just on WordPress) can use the Gravatar service to change their photo across thousands of websites at once. Unfortunately, it’s not always clear that one needs to add their name, email address, and photo to Gravatar in order for avatars to be populated properly on WordPress-related sites.
(Suggestion for WordPress: Maybe the UI within the user account section could include a line about Gravatars?)
So instead of trying to write out the details for the third time this week, I thought I’d write it once here with a bit more detail and then point people to it for the future.
Another quick example
Can you guess which user is the blog’s author in the screencapture?
The correct answer is Anand Sarwate, the second commenter in the list. While Anand’s avatar seems almost custom made for a blog on randomness and information theory, it would be more inviting if he used a photo instead.
How to fix the default avatar problem
What is Gravatar?
Your Gravatar is an image that follows you from site to site appearing beside your name when you do things like comment or post on a blog. Avatars help identify your posts on blogs and web forums, so why not on any site?
Need some additional motivation? Watch this short video:
[wpvideo HNyK67JS]
Step 1: Get a Gravatar Account
If you’ve already got a WordPress.com account, this step is easy. Because the same corporate parent built both WordPress and Gravatar, if you have an account on one, you automattically have an account on the other which uses the same login information. You just need to log into Gravatar.com with your WordPress username and password.
If you don’t have a WordPress.com account or even a blog, but just want your photo to show up when you comment on WordPress and other Gravatar enabled blogs, then just sign up for an account at Gravatar.com. When you comment on a blog, it’ll ask for your email address and it will use that to pull in the photo to which it’s linked.
Step 2: Add an email address
Log into your Gravatar account. Choose an email address you want to modify: you’ll have at least the default you signed up with or you can add additional email addresses.
Step 3: Add a photo to go with that email address
Upload as many photos as you’d like into the account. Then for each of the email addresses you’ve got, associate each one with at least one of your photos.
Example: In the commenters’ avatars shown above, Anand was almost there. He already had a Gravatar account, he just hadn’t added any photos.
Step 4: Fill out the rest of your social profile
Optionally, you can add additional social details like a short bio, your other social media presences, and even one or more websites or blogs that you own.
Step 5: Repeat
You can add as many emails and photos as you’d like. By linking different photos to different email addresses, you’ll be able to change your photo identity based on the email “key” you plug into sites later.
If you get tired of one photo, just upload another and make it the default photo for the email addresses you want it to change for. All sites using Gravatar will update your avatar for use in the future.
Step 6: Use your email address on your WordPress account
In the field for the email, input (one of) the email(s) you used in Gravatar that’s linked to a photo.
Don’t worry, the system won’t show your email and it will remain private–WordPress and Gravatar simply use it as a common “key” to serve up the right photo and metadata from Gravatar to the WordPress site.
Once you’ve clicked save, your new avatar should show up in the list of users. More importantly it’ll now show up in all of the WordPress elements (like most author bio blocks and in comments) that appear on your site.
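Behind the scenes, a Gravatar-enabled site builds the avatar’s image URL by MD5-hashing your trimmed, lowercased email address; a sketch of the resulting markup (the hash below is a placeholder):

<!-- the hash is the MD5 of the email "key"; s= sets the pixel size and
d= the fallback style (e.g. identicon) shown when no photo is linked -->
<img src="https://www.gravatar.com/avatar/205e460b479e2e5b48aec07710c08d50?s=80&d=identicon" alt="avatar" width="80" height="80" />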
Administrator Caveats
WordPress themes need to be Gravatar enabled to use this functionality, but in practice most of them are, particularly in their comments sections. If yours isn’t, you can usually add it with some simple code; WordPress core provides a get_avatar() function for use in theme templates.
In the WordPress admin interface one can go to Settings>>Discussion and enable View people's profiles when you mouse over their Gravatars under the heading “Gravatar Hovercards” to enable people to see more information about you and the commenters on your blog (presuming the comment section of your theme is Gravatar enabled.)
Some WordPress users have several user accounts that they use to administer their site: perhaps a secure administrator account used only for updates and upgrades, a personal (author/editor-level) account under their own name for authoring posts, and another (author/editor-level) account for posting admin notices or commenting as a generic moderator. In these cases, you need to make sure that each of these accounts has an email address with an associated Gravatar account and the desired photo linked to it. (One Gravatar account with multiple emails/photos will usually suffice, though they could be different.)
Example: In Nate’s case above, we showed that his photo didn’t show in the author bio box, and it doesn’t show up in some comments, but it does show up in other comments on his blog. This is because he uses at least two different user accounts: one for authoring posts and another for commenting. The user account he uses for some commenting has a linked Gravatar account with email and photo and the other does not.
More tips?
Want more information on how you can better own and manage your online identity? Visit IndieWeb.org: “A people-focused alternative to the ‘corporate web’.”
TL;DR
To help beautify your web presence a bit: if you notice that your photo doesn’t show up in the author block or comments in your theme, you can (create and) use your WordPress.com username/password to log into their sister site Gravatar.com. Uploading your preferred photo to Gravatar and linking it to an email address will automatically populate your photo both on your own site and on other WordPress sites (in comments) across the web. To make it work on your site, just go to your user profile in your WordPress install and use the same email address there as in your Gravatar account, and the decentralized system will port your picture across automatically. If desired, you can use multiple photos and multiple linked email addresses in your Gravatar account to vary your photos.
For several years now, I’ve been meaning to do something more interesting with the notes, highlights, and marginalia from the various books I read. In particular, I’ve specifically been meaning to do it for the non-fiction I read for research, and even more so for e-books, which tend to have slightly more extractable notes given their electronic nature. This fits into the way in which I use this site as a commonplace book as well as the IndieWeb philosophy of owning all of one’s own data.[1]
Over the past month or so, I’ve been experimenting with some fiction to see what works and what doesn’t in terms of a workflow for status updates around reading books, writing book reviews, and then extracting and depositing notes, highlights, and marginalia online. I’ve now got a relatively quick and painless workflow for exporting the book related data from my Amazon Kindle and importing it into the site with some modest markup and CSS for display. I’m sure the workflow will continue to evolve (and further automate) somewhat over the coming months, but I’m reasonably happy with where things stand.
The fact that the Amazon Kindle allows for relatively easy highlighting and annotation in e-books is excellent, but having the ability to sync to a laptop and do a one-click export of all of that data is incredibly helpful. Adding some simple CSS to the pre-formatted output gives me a reasonable base upon which to build for future writing/thinking about the material. In experimenting, I’m also coming to realize that simply owning the data isn’t enough; now I’m driven to help make that data more directly useful to me and potentially to others.
As part of my experimenting, I’ve just uploaded some notes, highlights, and annotations for David Christian’s excellent text Maps of Time: An Introduction to Big History[2] which I read back in 2011/12. While I’ve read several of the references which I marked up in that text, I’ll have to continue evolving a workflow for doing all the related follow up (and further thinking and writing) on the reading I’ve done in the past.
I’m still reminded of Rick Kurtzman’s sage advice to me when I was a young pisher at CAA in 1999: “If you read a script and don’t tell anyone about it, you shouldn’t have wasted the time having read it in the first place.” His point was that if you don’t try to pass along the knowledge you found by reading, you may as well give up. Even if the thing was terrible, at least say that as a minimum. In a digitally connected era, we no longer need to rely on nearly illegible scrawl in the margins to pollinate the world at a snail’s pace.[4] Take those notes, marginalia, highlights, and metadata and release them into the world. The fact that this dovetails perfectly with Cesar Hidalgo’s thesis in Why Information Grows: The Evolution of Order, from Atoms to Economies[3] furthers my belief in having a better process for what I’m attempting here.
Hopefully in the coming months, I’ll be able to add similar data to several other books I’ve read and reviewed here on the site.
If anyone has any thoughts, tips, tricks for creating/automating this type of workflow/presentation, I’d love to hear them in the comments!
There is a relatively new Candidate Recommendation from the W3C for a game-changing social web specification called Webmention, which essentially makes it possible to do Twitter-style @mentions (or Medium-style mentions) across the internet from site to site (as opposed to simply within a siloed site/walled garden like Twitter).
Webmentions would allow me to write a comment on my own Tumblr site in reply to someone else’s post, for example. The URL of the post I’m replying to, included in my post, serves as the @mention; the other site (which could be on WordPress, Drupal, Tumblr, or anything really), if it also supports Webmention, could then receive my comment and display it in its comment section.
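Under the hood, the plumbing for this is fairly simple; a minimal sketch with placeholder URLs (each site chooses where its endpoint lives):

<!-- the receiving site advertises its Webmention endpoint in its head -->
<link rel="webmention" href="https://example.com/webmention-endpoint" />

<!-- the reply post marks the original's URL as u-in-reply-to; the sender then POSTs
source (the reply's URL) and target (the original's URL) to that endpoint -->
<div class="h-entry">
In reply to <a class="u-in-reply-to" href="https://example.com/original-post">this post</a>:
<p class="e-content">Here are my thoughts…</p>
</div>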
Given the tremendous number of sites (and multi-platform sites) on which Disqus operates, it would be an excellent candidate to support the Webmention spec and enable a huge amount of inter-site activity on the internet. First, it could include the snippet of code that allows the site on which a comment is originally written to send Webmentions; second, it could include the snippet of code that allows sites to receive them. The current Disqus infrastructure could also serve to reduce spam and display those comments in a pretty way. Naturally, Disqus could continue to serve the same social functionality it has in the past.
Aggregating the conversation across the Internet into one place
Making things even more useful, there’s currently a free third-party service called Brid.gy which uses the open APIs of Twitter, Facebook, Instagram, Google+, and Flickr to bootstrap them to send these Webmentions, or inter-site @mentions. What does this mean? After signing up at Brid.gy, I could potentially create a post on my Disqus-enabled Tumblr (or WordPress, or otherwise powered site), share that post’s URL to Facebook, and any comments or likes made on the Facebook copy would be sent as Webmentions to the comments section of my Tumblr site as if they’d been made there natively. (Disqus could add the metadata to indicate the permalink and location where the comment originated.) This means I could receive comments on my blog/site from Twitter, Facebook, Instagram, G+, etc. without a huge amount of overhead, and even better, instead of being spread out across multiple different places, the conversation around my original piece of content would be conglomerated with the original!
Comments could be displayed inline naturally, and likes could be implemented as a facepile UI either above or below the typical comment section. By enabling the sending and receiving of Webmentions, Disqus could further corner the market on comments. Even easier for Disqus, a lot of the code has already been written and is open source.
Web 3.0?
I believe that Webmention, when widely implemented, is going to cause a major sea change in the way people use the web. Dare I say Web 3.0?!
Over the years I almost feel like I’ve tried to max out the number of web services I could sign up for. I was always on the lookout for that new killer app or social service, so I’ve tried almost all of them at one point or another. As best I can remember, I’ve had at least 179, and likely there are very many more that I’m simply forgetting. Research indicates it is difficult enough to keep track of 150 people, much less that many people through that many websites.
As an exercise, I’ve made an attempt to list all of the social media and user accounts I’ve had on the web since the early/mid-2000s. They’re listed below at the bottom of this post and broken up somewhat by usage area and subject for ease of use. I’ll maintain an official list of them here.
This partial list may give many others the opportunity to see how fragmented their own identities can be on the web. Who are you, and to which communities do you belong, when you live in multiple different places online? I feel the list also shows the immense value inherent in the IndieWeb philosophy of owning one’s own domain and data. The value of the IndieWeb is even more apparent when I think of all the defunct, abandoned, shut down, or bought out web services I’ve used, which I’ve done my best to list at the bottom.
When I think of all the hours of content that I and others have created and shared on some of these defunct sites, content we’ll never recover, I almost want to sob. Instead, I’ve promised only to cry, “Never again!” People interested in the vast volumes of data lost are invited to look at this list of site-deaths, which is itself far from comprehensive.
No more digital sharecropping
Over time, I’ll make an attempt, where possible, to own the data from each of the services listed below and port it here to my own domain. More importantly, I refuse to do any more digital sharecropping: I’m not creating new posts, status updates, photos, or other content that doesn’t live on my own site first. Sure, I’ll take advantage of the network effects of popular services like Twitter, Facebook, and Instagram to engage my family, friends, and community who choose to live in those places, but it will only happen by syndicating data that I already own to those services after the fact.
What about the interactive parts? The comments and interactions on those social services?
Through the magic of a new web standard called Webmention, essentially an internet-wide @mention functionality similar to that on Twitter, Medium, and even Facebook, and a fantastic service called Brid.gy, all the likes and comments on my syndicated copies on Twitter, Facebook, Google+, Instagram, and elsewhere come back directly to my own website as comments on the original posts, and I get direct notifications of them. Those with websites that support Webmention natively can write their comments to my posts directly on their own site and rely on it to automatically notify me of their response.
Isn’t this beginning to sound to you like the way the internet should work?
One URL to rule them all
When I think back on setting up these hundreds of digital services, I nearly wince at all the time and effort I’ve spent inputting my name, my photo, or even just including URL links to my Facebook and Twitter accounts.
Now I have one and only one URL that I can care about and pay attention to: my own!
Join me for IndieWebCamp Los Angeles
I’ve written bits about my involvement with the IndieWeb in the past, and I’ve actually had incoming calls over the past several weeks from people interested in setting up their own websites. Many have asked: What is it exactly? How can I do something similar? Is it hard?
My answer is that it isn’t nearly as hard as you might have thought. If you can manage to sign up for and maintain your Facebook account, you can put together all the moving parts to have your own IndieWeb-enabled website.
“But, Chris, I’m still a little hesitant…”
Okay, how about I (and many others) offer to help you out? I’m going to be hosting IndieWebCamp Los Angeles over the weekend of November 5th and 6th in Santa Monica. I’m inviting you all to attend with the hope that by the time the weekend is over, you’ll not only have a significant start, but you’ll also have the tools, resources, and confidence to continue building improvements over time.
IndieWebCamp Los Angeles
Pivotal, 1333 2nd Street, Suite 200, Santa Monica, CA 90401, United States
It may take me a week or so to finish putting some general thoughts and additional resources together based on the two day conference so that I might give a more thorough accounting of my opinions as well as next steps. Until then, I hope that the details and mini-archive of content below may help others who attended, or provide a resource for those who couldn’t make the conference.
Overall, it was an incredibly well-programmed and well-run conference, so kudos to all those involved who kept things moving along. I’m now certainly much more aware of the gaping memory hole the internet is facing despite the heroic efforts of a small handful of people and institutions attempting to improve the situation. I’ll try to go into more detail later about a handful of specific topics and next steps, as well as a listing of resources I came across which may prove to be useful tools for those in both the archiving/preservation and IndieWeb communities.
Archive of materials for Day 2
Audio Files
Below are the recorded audio files, embedded in .m4a format (recorded using a Livescribe Pulse pen), for several sessions held throughout the day. To my knowledge, none of the breakout sessions were recorded except for the one which appears below.
Summarizing archival collections using storytelling techniques
Presentation: Summarizing archival collections using storytelling techniques by Michael Nelson, Ph.D., Old Dominion University
Saving the first draft of history
Special guest speaker: Saving the first draft of history: The unlikely rescue of the AP’s Vietnam War files by Peter Arnett, winner of the Pulitzer Prize for journalism
Kiss your app goodbye: the fragility of data journalism
Panel: Kiss your app goodbye: the fragility of data journalism
Featuring Meredith Broussard, New York University; Regina Lee Roberts, Stanford University; Ben Welsh, The Los Angeles Times; moderator Martin Klein, Ph.D., Los Alamos National Laboratory
The future of the past: modernizing The New York Times archive
Panel: The future of the past: modernizing The New York Times archive
Featuring The New York Times Technology Team: Evan Sandhaus, Jane Cotler and Sophia Van Valkenburg; moderated by Edward McCain, RJI and MU Libraries
Lightning Rounds: Six Presenters
Lightning rounds (in two parts)
Six + one presenters: Jefferson Bailey, Terry Britt, Katherine Boss (and team), Cynthia Joyce, Mark Graham, Jennifer Younger and Kalev Leetaru
1. Jefferson Bailey, Internet Archive: “Supporting Data-Driven Research using News-Related Web Archives”
2. Terry Britt, University of Missouri: “News archives as cornerstones of collective memory”
3. Katherine Boss, Meredith Broussard, and Eva Revear, New York University: “Challenges facing preservation of born-digital news applications”
4. Cynthia Joyce, University of Mississippi: “Keyword ‘Katrina’: Re-collecting the unsearchable past”
5. Mark Graham, Internet Archive/The Wayback Machine: “Archiving news at the Internet Archive”
6. Jennifer Younger, Catholic Research Resources Alliance: “Digital Preservation, Aggregated, Collaborative, Catholic”
7. Kalev Leetaru, senior fellow, The George Washington University and founder of the GDELT Project: “A Look Inside The World’s Largest Initiative To Understand And Archive The World’s News”
Technology and Community
Presentation: Technology and community: Why we need partners, collaborators, and friends by Kate Zwaard, Library of Congress
Breakout: Working with CMS
Working with CMS, led by Eric Weig, University of Kentucky
Alignment and reciprocity
Alignment & reciprocity by Katherine Skinner, Ph.D., executive director, the Educopia Institute
Closing remarks
Closing remarks by Edward McCain, RJI and MU Libraries and Todd Grappone, associate university librarian, UCLA
Live Tweet Archive
Reminder: in many cases my tweets don’t reflect direct quotes of the attributed speaker; they are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many of the participant’s original words as possible. Typically, for speed, there wasn’t much editing of these notes. Below, I’ve changed the attribution of one or two tweets to reflect the proper person(s). For convenience, I’ve also added a few hyperlinks to useful resources after the fact that I didn’t have time to include in the original tweets. I’ve attached .m4a audio files of most of the audio for the day (apologies for the shaky quality, as it’s unedited) which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it; presumably they will release the video on their website for a more immersive experience.
Peter Arnett:
Condoms were required issue in Vietnam–we used them to waterproof film containers in the field.
Do not stay close to the head of a column, medics, or radiomen. #warreportingadvice
I told the AP I would undertake the task of destroying all the reporters’ files from the war.
Instead the AP files moved around with me.
Eventually the 10 trunks of material went back to the AP when they hired a brilliant archivist.
“The negatives can outweigh the positives when you’re in trouble.”
Today I spent the majority of the day attending the first of a two-day conference at UCLA’s Charles Young Research Library entitled “Dodging the Memory Hole: Saving Online News.” While I knew mostly what I was getting into, it hadn’t really occurred to me how much of what is on the web is not backed up or archived in any meaningful way. It’s human nature to neglect backing up one’s data, but huge swaths of really important data with newsworthy and historic value are being heavily neglected. Fortunately, it’s an interesting enough problem to have drawn the 100 or so scholars, researchers, technologists, and journalists who showed up for the start of an interesting group being convened through the Reynolds Journalism Institute and several sponsors of the event.
What particularly strikes me is how many of the philosophies of the IndieWeb movement and tools developed by it are applicable to some of the problems that online news faces. I suspect that if more journalists were practicing members of the IndieWeb and used their sites not only for collecting and storing the underlying data upon which they base their stories, but to publish them as well, then some of the (future) archival process may be easier to accomplish. I’ve got so many disparate thoughts running around my mind after the first day that it’ll take a bit of time to process before I write out some more detailed thoughts.
Twitter List for the Conference
As a reminder to those attending, I’ve accumulated a list of everyone who’s tweeted with the hashtag #DtMH2016 so that attendees can more easily follow each other as well as communicate online following our few days together in Los Angeles. Twitter also allows subscribing to entire lists, if that’s something in which people have interest.
Archiving the day
It seems only fitting that an attendee of a conference about saving and archiving digital news would make a reasonable attempt to archive some of his experience, right?! Toward that end, below is an archive of my tweetstorm during the day, marked up with microformats and including hovercards for the speakers with appropriate available metadata. For those interested, I used a fantastic web app called Noter Live to capture, tweet, and more easily archive the stream.
Note that in many cases my tweets don’t reflect direct quotes of the attributed speaker; they are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many of the participant’s original words as possible. Typically, for speed, there wasn’t much editing of these notes. I’m also attaching .m4a audio files of most of the audio for the day (apologies for the shaky quality, as it’s unedited) which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it; presumably they will release the video on their website for a more immersive experience.
If you prefer to read the stream of notes in the original Twitter format, so that you can like/retweet/comment on individual pieces, this link should give you the entire stream. Naturally, comments are also welcome below.
Audio Files
Below are the audio files for several sessions held throughout the day.
Greetings and Keynote
Greetings: Edward McCain, digital curator of journalism, Donald W. Reynolds Journalism Institute (RJI) and University of Missouri Libraries and Ginny Steel, university librarian, UCLA
Keynote: Digital salvage operations — what’s worth saving? given by Hjalmar Gislason, vice president of data, Qlik
Why save online news? and NewsScape
Panel: “Why save online news?” featuring Chris Freeland, Washington University; Matt Weber, Ph.D., Rutgers, The State University of New Jersey; Laura Wrubel, The George Washington University; moderator Ana Krahmer, Ph.D., University of North Texas
Presentation: “NewsScape: preserving TV news” given by Tim Groeling, Ph.D., UCLA Communication Studies Department
Born-digital news preservation in perspective
Speaker: Clifford Lynch, Ph.D., executive director, Coalition for Networked Information on “Born-digital news preservation in perspective”