Category: IndieWeb
Give your web presence a more personal identity
but…
Invariably the blog’s author has a generic avatar (blech!) instead of a nice, warm and humanizing photo of their lovely face.
Or, perhaps, as a user, you’ve always wondered how some people qualified to have their photo included with their comment while you were left as an anonymous looking “mystery person” or a randomized identicon, monster, or even an 8-bit pixelated blob? The secret the others know will be revealed momentarily.
Which would you prefer?



Somehow, figuring out how to replace that dreadful randomized block with an actual photo seems too hard or too complicated. Why? In part, it’s because WordPress hands this functionality off to a separate service called Gravatar, which stands for Globally Recognized Avatar. In some sense this is an awesome idea, because people everywhere (and not just on WordPress) can use the Gravatar service to change their photo across thousands of websites at once. Unfortunately, it’s not always clear that one needs to add their name, email address, and photo to Gravatar in order for avatars to be populated properly on WordPress-related sites.
(Suggestion for WordPress: Maybe the UI within the user account section could include a line about Gravatars?)
So instead of trying to write out the details for the third time this week, I thought I’d write it once here with a bit more detail and then point people to it for the future.
Another quick example
Can you guess which user is the blog’s author in the screen capture?
The correct answer is Anand Sarwate, the second commenter in the list. While Anand’s avatar seems almost custom made for a blog on randomness and information theory, it would be more inviting if he used a photo instead.
How to fix the default avatar problem
What is Gravatar?
Your Gravatar is an image that follows you from site to site appearing beside your name when you do things like comment or post on a blog. Avatars help identify your posts on blogs and web forums, so why not on any site?
Need some additional motivation? Watch this short video:
[wpvideo HNyK67JS]
Step 1: Get a Gravatar Account
If you’ve already got a WordPress.com account, this step is easy. Because the same corporate parent built both WordPress and Gravatar, if you have an account on one, you automattically have an account on the other which uses the same login information. You just need to log into Gravatar.com with your WordPress username and password.
If you don’t have a WordPress.com account or even a blog, but just want your photo to show up when you comment on WordPress and other Gravatar enabled blogs, then just sign up for an account at Gravatar.com. When you comment on a blog, it’ll ask for your email address and it will use that to pull in the photo to which it’s linked.
Step 2: Add an email address
Log into your Gravatar account. Choose an email address you want to modify: you’ll have at least the default you signed up with or you can add additional email addresses.
Step 3: Add a photo to go with that email address
Upload as many photos as you’d like into the account. Then for each of the email addresses you’ve got, associate each one with at least one of your photos.
Example: In the commenters’ avatars shown above, Anand was almost there. He already had a Gravatar account, he just hadn’t added any photos.
Step 4: Fill out the rest of your social profile
Optionally, you can add additional social details like a short bio, your other social media presences, and even one or more websites or blogs that you own.
Step 5: Repeat
You can add as many emails and photos as you’d like. By linking different photos to different email addresses, you’ll be able to change your photo identity based on the email “key” you plug into sites later.
If you get tired of one photo, just upload another and make it the default photo for the email addresses you want it to change for. All sites using Gravatar will update your avatar for use in the future.
Step 6: Use your email address on your WordPress account
Now, go back to the user profile section on your blog, which is usually located at http://www.YOURSITE.com/wp-admin/users.php.

In the field for the email, input (one of) the email(s) you used in Gravatar that’s linked to a photo.
Don’t worry: the system won’t display your email address, and it will remain private. WordPress and Gravatar simply use it as a common “key” to serve up the right photo and metadata from Gravatar to the WordPress site.
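For the curious, that “key” is just a hash: Gravatar serves the image from a URL built on an MD5 hash of the lowercased, trimmed email address, which is why the address itself never needs to be shown. A minimal sketch in PHP (the email address below is a placeholder):

```php
<?php
// Build a Gravatar image URL from an email address.
// The address itself is never exposed; only its MD5 hash appears in the URL.
$email = 'someone@example.com'; // placeholder
$hash  = md5( strtolower( trim( $email ) ) );

// s = size in pixels, d = fallback style if no photo is linked (e.g. identicon)
$url = 'https://www.gravatar.com/avatar/' . $hash . '?s=96&d=identicon';

echo $url;
```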
Once you’ve clicked save, your new avatar should show up in the list of users. More importantly it’ll now show up in all of the WordPress elements (like most author bio blocks and in comments) that appear on your site.
Administrator Caveats
WordPress themes need to be Gravatar-enabled to use this functionality, but in practice most of them are, particularly in their comments sections. If yours isn’t, you can usually add it with some simple code.
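As a rough sketch (the exact template files, markup, and sizes vary from theme to theme), WordPress’s built-in get_avatar() function is usually all a template needs; note that “Show Avatars” must also be checked under Settings>>Discussion:

```php
<?php
// In a theme's comments.php (or a wp_list_comments() callback):
// print the commenter's Gravatar at 48px next to their name.
echo get_avatar( $comment, 48 );

// In an author bio block (e.g. single.php, inside the Loop):
// the author's email is the "key" WordPress hands to Gravatar.
echo get_avatar( get_the_author_meta( 'user_email' ), 96 );
```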
In the WordPress admin interface, one can go to Settings>>Discussion and, under the heading “Gravatar Hovercards,” enable “View people’s profiles when you mouse over their Gravatars.” This lets people see more information about you and the commenters on your blog (presuming the comment section of your theme is Gravatar enabled).
Some WordPress users have several user accounts that they use to administer their site. One might have a secure administrator account used only for updates and upgrades, a personal (author/editor-level) account under their own name for authoring posts, and another (author/editor-level) account for making admin notice posts or commenting as a generic moderator. In these cases, you need to make sure that each of these accounts uses an email address with an associated Gravatar account and the desired photo linked to it. (One Gravatar account with multiple emails/photos will usually suffice, though they could be separate accounts.)
Example: In Nate’s case above, we showed that his photo didn’t show in the author bio box, and it doesn’t show up in some comments, but it does show up in other comments on his blog. This is because he uses at least two different user accounts: one for authoring posts and another for commenting. The user account he uses for some commenting has a linked Gravatar account with email and photo and the other does not.

More tips?
Want more information on how you can better own and manage your online identity? Visit IndieWeb.org: “A people-focused alternative to the ‘corporate web’.”
TL;DR
To help beautify your web presence a bit: if you notice that your photo doesn’t show up in the author block or comments in your theme, you can (create and) use your WordPress.com username/password to log into their sister site Gravatar.com. Uploading your preferred photo to Gravatar and linking it to an email address will automatically populate your photo on your own site and, in comments, on other WordPress sites across the web. To make it work on your site, just go to the user profile in your WordPress install and use the same email address there as in your Gravatar account; the system will port your picture across automatically. If necessary, you can use multiple photos and multiple linked email addresses in your Gravatar account to vary your photos.
Notes, Highlights, and Marginalia: From E-books to Online
Over the past month or so, I’ve been experimenting with some fiction to see what works and what doesn’t in terms of a workflow for status updates around reading books, writing book reviews, and then extracting and depositing notes, highlights, and marginalia online. I’ve now got a relatively quick and painless workflow for exporting the book related data from my Amazon Kindle and importing it into the site with some modest markup and CSS for display. I’m sure the workflow will continue to evolve (and further automate) somewhat over the coming months, but I’m reasonably happy with where things stand.
The fact that the Amazon Kindle allows for relatively easy highlighting and annotation in e-books is excellent, but having the ability to sync to a laptop and do a one click export of all of that data, is incredibly helpful. Adding some simple CSS to the pre-formatted output gives me a reasonable base upon which to build for future writing/thinking about the material. In experimenting, I’m also coming to realize that simply owning the data isn’t enough, but now I’m driven to help make that data more directly useful to me and potentially to others.
As part of my experimenting, I’ve just uploaded some notes, highlights, and annotations for David Christian’s excellent text Maps of Time: An Introduction to Big History[2] which I read back in 2011/12. While I’ve read several of the references which I marked up in that text, I’ll have to continue evolving a workflow for doing all the related follow up (and further thinking and writing) on the reading I’ve done in the past.
I’m still reminded of Rick Kurtzman’s sage advice to me when I was a young pisher at CAA in 1999: “If you read a script and don’t tell anyone about it, you shouldn’t have wasted the time having read it in the first place.”
His point was that if you don’t try to pass along the knowledge you found by reading, you may as well give up. Even if the thing was terrible, at least say that as a minimum. In a digitally connected era, we no longer need to rely on nearly illegible scrawl in the margins to pollinate the world at a snail’s pace.[4] Take those notes, marginalia, highlights, and metadata and release them into the world. The fact that this dovetails perfectly with Cesar Hidalgo’s thesis in Why Information Grows: The Evolution of Order, from Atoms to Economies,[3] furthers my belief in having a better process for what I’m attempting here.
Hopefully in the coming months, I’ll be able to add similar data to several other books I’ve read and reviewed here on the site.
If anyone has any thoughts, tips, tricks for creating/automating this type of workflow/presentation, I’d love to hear them in the comments!
Footnotes
A Case for Why Disqus Should Implement Webmentions
Internet-wide @Mentions
There is a relatively new candidate recommendation from the W3C for a game-changing social web specification called Webmention, which essentially makes it possible to do Twitter-like (or Medium-style) @mentions across the internet from site to site (as opposed to simply within a siloed site/walled garden like Twitter).
Webmention would allow me to write a comment to someone else’s post on my own Tumblr site, for example. The URL of the post I’m replying to, included in my own post, serves as the @mention; the other site (which could be on WordPress, Drupal, Tumblr, or anything really), if it also supports Webmention, could then receive my comment and display it in its comment section.
Given the tremendous number of sites (and multi-platform sites) on which Disqus operates, it would be an excellent candidate to support the Webmention spec and enable a huge amount of inter-site activity on the internet. First, it could include the snippet of code that allows the site on which a comment is originally written to send Webmentions; second, it could include the code that allows a site to receive them. The current Disqus infrastructure could also serve to reduce spam and display those comments in a pretty way. Naturally, Disqus could continue to serve the same social functionality it has in the past.
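For a sense of how lightweight the plumbing is: a receiver advertises its Webmention endpoint (via an HTTP Link header or a link/a element with rel="webmention"), and the sender simply POSTs two URLs to that endpoint. A minimal sketch in PHP, assuming the endpoint has already been discovered (all URLs here are placeholders):

```php
<?php
// Minimal Webmention send, per the W3C spec: notify the target's endpoint
// that $source (my reply) links to $target (the post being replied to).
$endpoint = 'https://example.com/webmention-endpoint'; // discovered beforehand
$source   = 'https://my-site.example/replies/123';     // my reply, which links to $target
$target   = 'https://example.com/original-post';       // the post I'm replying to

$ch = curl_init( $endpoint );
curl_setopt( $ch, CURLOPT_POST, true );
curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( array(
    'source' => $source,
    'target' => $target,
) ) );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
curl_exec( $ch );
$status = curl_getinfo( $ch, CURLINFO_HTTP_CODE ); // a 2xx response means the mention was accepted or queued
curl_close( $ch );
```

The receiver then fetches the source, verifies that it actually links to the target, and decides how to display it, which is exactly where Disqus’s existing moderation and anti-spam tooling would slot in.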
Aggregating the conversation across the Internet into one place
Making things even more useful, there’s currently a third party free service called Brid.gy which uses open APIs of Twitter, Facebook, Instagram, Google+, and Flickr to bootstrap them to send these Webmentions or inter-site @mentions. What does this mean? After signing up at Bridgy, it means I could potentially create a post on my Disqus-enabled Tumblr (WordPress, or other powered site), share that post with its URL to Facebook, and any comments or likes made on the Facebook post will be sent as Webmentions to the comments section on my Tumblr site as if they’d been made there natively. (Disqus could add the metadata to indicate the permalink and location of where the comment originated.) This means I can receive comments on my blog/site from Twitter, Facebook, Instagram, G+, etc. without a huge amount of overhead, and even better, instead of being spread out in multiple different places, the conversation around my original piece of content could be conglomerated with the original!
Comments could be displayed inline naturally, and likes could be implemented as a UI facepile either above or below the typical comment section. By enabling the sending/receiving of Webmentions, Disqus could further corner the market on comments. Even easier for Disqus, a lot of the code has already been written and is open source.
Web 3.0?
I believe that Webmention, when implemented, is going to cause a major sea-change in the way people use the web. Dare I say Web 3.0?!
How many social media related accounts can one person have on the web?!
As an exercise, I’ve made an attempt to list all of the social media and user accounts I’ve had on the web since the early/mid-2000s. They’re listed below at the bottom of this post and broken up somewhat by usage area and subject for ease of use. I’ll maintain an official list of them here.
This partial list may give many others the opportunity to see how fragmented their own identities can be on the web. Who are you, and to which communities do you belong, when you live in multiple different places? I feel the list also shows the immense value inherent in the IndieWeb philosophy of owning one’s own domain and data. The value of the IndieWeb is even more apparent when I think of all the defunct, abandoned, shut down, or bought out web services I’ve used, which I’ve done my best to list at the bottom.
When I think of all the hours of content that I and others have created and shared on some of these defunct sites, for which we’ll never recover the data, I almost want to sob. Instead, I’ve promised only to cry, “Never again!” People interested in more of the vast volumes of data lost are invited to look at this list of site-deaths, which is itself far from comprehensive.
No more digital sharecropping
Over time, I’ll make an attempt, where possible, to own the data from each of the services listed below and port it here to my own domain. More importantly, I refuse to do any more digital sharecropping. I’m not creating new posts, status updates, photos, or other content that doesn’t live on my own site first. Sure I’ll take advantage of the network effects of popular services like Twitter, Facebook, and Instagram to engage my family, friends, and community who choose to live in those places, but it will only happen by syndicating data that I already own to those services after-the-fact.
What about the interactive parts? The comments and interactions on those social services?
Through the magic of new web standards like Webmention (essentially an internet-wide @mention functionality similar to that on Twitter, Medium, and even Facebook) and a fantastic service called Brid.gy, the likes and comments on my syndicated material from Twitter, Facebook, Google+, Instagram, and others come back directly to my own website as comments on the original posts, and I get direct notifications of them. Those with websites that support Webmention natively can write their comments to my posts directly on their own site and rely on it to automatically notify me of their response.
Isn’t this beginning to sound to you like the way the internet should work?
One URL to rule them all
When I think back on setting up these hundreds of digital services, I nearly wince at all the time and effort I’ve spent inputting my name, my photo, or even just including URL links to my Facebook and Twitter accounts.
Now I have one and only one URL that I can care about and pay attention to: my own!
Join me for IndieWebCamp Los Angeles
I’ve written in bits and pieces about my involvement with the IndieWeb in the past, but I’ve actually had incoming calls over the past several weeks from people interested in setting up their own websites. Many have asked: What is it exactly? How can they do something similar? Is it hard?
My answer is that it isn’t nearly as hard as you might have thought. If you can manage to sign up and maintain your Facebook account, you can put together all the moving parts to have your own IndieWeb enabled website.
“But, Chris, I’m still a little hesitant…”
Okay, how about I (and many others) offer to help you out? I’m going to be hosting IndieWebCamp Los Angeles over the weekend of November 5th and 6th in Santa Monica. I’m inviting you all to attend, with the hope that by the time the weekend is over, you’ll not only have a significant start, but also the tools, resources, and confidence to continue making improvements over time.
IndieWebCamp Los Angeles
Pivotal
1333 2nd Street, Suite 200
Santa Monica, CA 90401
United States
When
- Saturday:
- Sunday:
R.S.V.P.
We’ve set up a variety of places for people to easily R.S.V.P. for the two-day event; choose the one that’s most convenient for you:
* Eventbrite: https://www.eventbrite.com/e/indiewebcamp-la-2016-tickets-24335345674
* Lanyrd: http://lanyrd.com/2016/indiewebcamp-la
* Facebook: https://www.facebook.com/events/1701240643421269
* Meetup: https://www.meetup.com/IndieWeb-Homebrew-Website-Club-Los-Angeles/events/233698594/
If you’ve already got an IndieWeb enabled website and are able to R.S.V.P. by using your own site, try one of the following two R.S.V.P. locations:
* Indie Event: http://veganstraightedge.com/events/2016/04/01/indiewebcamp-la-2016
* IndieWeb Wiki: https://indieweb.org/2016/LA/Guest_List
I hope to see you there!
Now for that unwieldy list of sites I’ve spent untold hours setting up and maintaining…
Editor’s note:
A regularly updated version of this list is maintained here.
Primary Internet Presences
Chris Aldrich | BoffoSocko
Chris Aldrich Social Stream
Content from the above two sites is syndicated (primarily, but neither exclusively nor evenly) to the following silo-based profiles:
Facebook
Twitter
Google+
Tumblr
LinkedIn
Medium
GoodReads
Foursquare
YouTube
Reddit
Flickr
WordPress.com
Contributor to
WithKnown (Dormant)
IndieWeb.org (Wiki)
Little Free Library #8424 Blog
Mendeley ITBio References
Chris Aldrich Radio3 (Link Blog)
Category Theory Summer Study Group
JHU AEME
Johns Hopkins Twitter Feed (Previous)
JHU Facebook Fan Page (Previous)
Identity
Gravatar
Keybase
About.Me
DandyID
Vizify
Other Social Profiles
Yelp
Findery
Periscope
Pinterest
Storify
MeetUp
500px
Skitch
KickStarter
Patreon
TwitPic
StumbleUpon
del.icio.us
MySpace
Klout
Academia / Research Related
Mendeley
Academia.edu
Research Gate
IEEE Information Theory Society (ITSOC)
Quora
ORCID
Hypothes.is
Genius (fka Rap Genius, aka News Genius, etc)
Diigo
FigShare – Research Data
Zotero
Worldcat
OdySci – Engineering Research
CiteULike
Open Study
StackExchange
Math-Stackexchange
MathOverflow
TeX-StackExchange
Theoretical Physics-StackExchange
Linguistics-StackExchange
Digital Signal Processing-StackExchange
Cooking-StackExchange
Physics Forums
Sciencescape
MOOC Related
Reading Related
GoodReads
Pocket
Flipboard
Book Crossing
Digg
Readlist
MobileRead
Read Fold
ReadingPack
SlideShare
Wordnik
Milq
Disqus (Comments)
Intense Debate (Comments)
Wattpad
BookVibe
Reading.am (Bookmarking)
Amazon Profile
Wishlist: Evolutionary Theory
Wishlist: Information Theory
Wishlist: Mathematics
Camp NaNoWriMo
NaNoWriMo
Programming Related
GitHub
BitBucket
GitLab – URL doesn’t resolve to account
Free Code Camp
Code School
Codepen
Audio / Video
Huffduffer
Last.fm
Spotify
Pandora (Radio)
Soundcloud
Vimeo
Rdio
IMDb
Telfie (TV Checkin)
Soundtracking
Hulu
UStream
Livestream
MixCloud
Spreaker
Audioboo (Audio)
Bambuser (Video)
Orfium
The Session (Irish Music)
Food / Travel / Meetings
Nosh
FoodSpotting
Tripit (Travel)
Lanyard (Conference)
Conferize (Conference)
Miscellaneous
RebelMouse (unused)
Peach (app only)
Kinja (commenting system/pseudo-blog)
Mnemotechniques (Memory Forum)
WordPress.org
Ask.fm
AppBrain Android Phone Apps
BlogCatalog
MySpace (Old School)
Identi.ca (Status)
Plurk (Status)
TinyLetter
Plaxo
YCombinator
Tsu
NewGov.US
Venmo
Quitter.se (Status)
Quitter.no (Status)
ColoUrLovers
Beeminder
Defunct Social Sites
Picasa (Redirects to G+)
Eat.ly (Food Blog)
Google Sidewiki (Annotation)
Wakoopa (Software usage)
Seesmic (Video, Status)
Jaiku (Status)
Friendster (Social Media)
Flipzu
Mixx
GetGlue (Video checkin)
FootFeed (Location)
Google Reader (Reader)
CinchCast (Audio)
Backtype (Commenting)
Tungle.me (Calendar)
Chime.In (Status)
MyBigCampus (College related)
Pownce (Status) – closed 02/09
Cliqset (Status) – closed 11/22/10
Brightkite (Location/Status) – closed 12/10/10
Buzz (Status) – closed 12/15/11
Gowalla (Location) – closed 3/11/12
Picplz (Photo)- closed 9/2/12
Posterous (Blog) – closed 4/30/13 [all content from this site has been recovered and ported]
Upcoming (Calendar) – closed 4/30/13
ClaimID (Identity) – closed 12/12/13
Qik (Video) – closed 4/30/14
Readmill (Reading)- closed 7/1/14
Orkut (Status) – closed 9/1/14
Plinky – closed 9/1/14
FriendFeed (Social Networking)- closed 4/10/15
Plancast (Calendar) – closed 1/21/16
Symantec Personal Identity Program (Identity) – closing 9/11/16
Shelfari (Reading) – closed 3/16/16
How many social media identities do YOU have?
Notes from Day 2 of Dodging the Memory Hole: Saving Online News | Friday, October 14, 2016
It may take me a week or so to finish putting some general thoughts and additional resources together based on the two day conference so that I might give a more thorough accounting of my opinions as well as next steps. Until then, I hope that the details and mini-archive of content below may help others who attended, or provide a resource for those who couldn’t make the conference.
Overall, it was an incredibly well programmed and run conference, so kudos to all those involved who kept things moving along. I’m now certainly much more aware of the gaping memory hole the internet is facing despite the heroic efforts of a small handful of people and institutions attempting to improve the situation. I’ll try to go into more detail later about a handful of specific topics and next steps, as well as a listing of resources I came across which may prove to be useful tools for both those in the archiving/preserving and IndieWeb communities.
Archive of materials for Day 2
Audio Files
Below are the recorded audio files embedded in .m4a format (using a Livescribe Pulse Pen) for several sessions held throughout the day. To my knowledge, none of the breakout sessions were recorded except for the one which appears below.
Summarizing archival collections using storytelling techniques
Presentation: Summarizing archival collections using storytelling techniques by Michael Nelson, Ph.D., Old Dominion University
Saving the first draft of history
Special guest speaker: Saving the first draft of history: The unlikely rescue of the AP’s Vietnam War files by Peter Arnett, winner of the Pulitzer Prize for journalism
Kiss your app goodbye: the fragility of data journalism
Panel: Kiss your app goodbye: the fragility of data journalism
Featuring Meredith Broussard, New York University; Regina Lee Roberts, Stanford University; Ben Welsh, The Los Angeles Times; moderator Martin Klein, Ph.D., Los Alamos National Laboratory
The future of the past: modernizing The New York Times archive
Panel: The future of the past: modernizing The New York Times archive
Featuring The New York Times Technology Team: Evan Sandhaus, Jane Cotler and Sophia Van Valkenburg; moderated by Edward McCain, RJI and MU Libraries
Lightning Rounds: Six Presenters
Lightning rounds (in two parts)
Six + one presenters: Jefferson Bailey, Terry Britt, Katherine Boss (and team), Cynthia Joyce, Mark Graham, Jennifer Younger and Kalev Leetaru
1. Jefferson Bailey, Internet Archive, “Supporting Data-Driven Research using News-Related Web Archives”
2. Terry Britt, University of Missouri, “News archives as cornerstones of collective memory”
3. Katherine Boss, Meredith Broussard and Eva Revear, New York University, “Challenges facing preservation of born-digital news applications”
4. Cynthia Joyce, University of Mississippi, “Keyword ‘Katrina’: Re-collecting the unsearchable past”
5. Mark Graham, Internet Archive/The Wayback Machine, “Archiving news at the Internet Archive”
6. Jennifer Younger, Catholic Research Resources Alliance, “Digital Preservation, Aggregated, Collaborative, Catholic”
7. Kalev Leetaru, senior fellow, The George Washington University and founder of the GDELT Project, “A Look Inside The World’s Largest Initiative To Understand And Archive The World’s News”
Technology and Community
Presentation: Technology and community: Why we need partners, collaborators, and friends by Kate Zwaard, Library of Congress
Breakout: Working with CMS
Working with CMS, led by Eric Weig, University of Kentucky
Alignment and reciprocity
Alignment & reciprocity by Katherine Skinner, Ph.D., executive director, the Educopia Institute
Closing remarks
Closing remarks by Edward McCain, RJI and MU Libraries and Todd Grappone, associate university librarian, UCLA
Live Tweet Archive
Reminder: In many cases my tweets don’t reflect direct quotes of the attributed speaker, but are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many original words of the participant as possible. Typically, for speed, there wasn’t much editing of these notes. Below I’ve changed the attribution of one or two tweets to reflect the proper person(s). For convenience, I’ve also added a few hyperlinks to useful resources after the fact that I didn’t have time to include in the original tweets. I’ve attached .m4a audio files of most of the audio for the day (apologies for shaky quality as it’s unedited) which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it. Presumably they will release the video on their website for a more immersive experience.
Condoms were required issue in Vietnam–we used them to waterproof film containers in the field.
Do not stay close to the head of a column, medics, or radiomen. #warreportingadvice
I told the AP I would undertake the task of destroying all the reporters’ files from the war.
Instead the AP files moved around with me.
Eventually the 10 trunks of material went back to the AP when they hired a brilliant archivist.
“The negatives can outweigh the positives when you’re in trouble.”
Our first panel: Kiss your app goodbye: the fragility of data journalism
I teach data journalism at NYU
A news app is not what you’d install on your phone
Dollars for Docs is a good example of a news app
A news app is something that allows the user to put themself into the story.
Often there are three CMSs: web, print, and video.
News apps don’t live in any of the CMSs. They’re bespoke and live on a separate data server.
This has implications for crawlers which can’t handle them well.
Then how do we save news apps? We’re looking at examples and then generalizing.
Everyblock.com was a good example based on chicagocrime and later bought by NBC and shut down.
What?! The internet isn’t forever? Databases need to be saved differently than web pages.
Reprozip was developed by NYU Center for Data and we’re using it to save the code, data, and environment.
My slides will be at http://bit.ly/frameworkfix. I work on the data desk @LATimes
We make apps that serve our audience.
We also make internal tools that empower the newsroom.
We also use our nerdy skills to do cool things.
Most of us aren’t good programmers, we “cheat” by using frameworks.
Frameworks do a lot of basic things for you, so you don’t have to know how to do it yourself.
Archiving tools often aren’t built into these frameworks.
Instagram, Pinterest, Mozilla, and the LA Times use django as our framework.
Memento for WordPress is a great way to archive pages.
We must do more. We need archiving baked into the systems from the start.
Slides at http://bit.ly/frameworkfix
Got data? I’m a librarian at Stanford University.
I’ll mention Christine Borgman’s book Big Data, Little Data, No data.
Journalists are great data liberators: FOIA requests, cleaning data, visualizing, getting stories out of data.
But what happens to the data once the story is published?
BLDR: Big Local Digital Repository, an open repository for sharing open data.
Solutions that exist: Hydra at http://projecthydra.org or Open ICPSR www.openicpsr.org
For metadata: www.ddialliance.org, RDF, International Image Interoperability Framework (iiif) and MODS
We’ll open up for questions.
What’s more important: obey copyright laws or preserving the content?
The new creative commons licenses are very helpful, but we have to be attentive to many issues.
Perhaps archiving it and embargoing for later?
Saving the published work is more important to me, and the rest of the byproduct is gravy.
I work for the New York Times, you may have heard of it…
Doing a quick demo of Times Machine from @NYTimes
Talking about modernizing the born-digital legacy content.
Our problem was how to make an article from 2004 look like it had been published today.
There were 100’s of thousands of articles missing.
There was no one definitive list of missing articles.
Outlining the workflow for reconciling the archive XML and the definitive list of URLs for conversion.
It’s important to use more than one source for building an archive.
I’m going to talk about all of “the little things” that came up along the way..
Article Matching: Fusion – How to convert print XML with web HTML that was scraped.
Primarily, we looked at common phrases between the corpus of the two different data sets.
We prioritized the print data over the digital data.
We maintain a system called switchboard that redirects from old URLs to the new ones to prevent link rot.
The case of the missing sections: some sections of the content were blank and not transcribed.
We made the decision to take out data we had in favor of a better user experience for the missing sections.
In the future, we’d also like to put photos back into the articles.
Modernizing and archiving the @NYTimes archives is an ongoing challenge.
Can you discuss the decision to go with a more modern interface rather than a traditional archive of how it looked?
Some of the decision was to get the data into an accessible format for modern users.
We do need to continue work on preserving the original experience.
Is there a way to distinguish between the print version and the online versions in the archive?
Could a researcher do work on the entire corpora? Is it available for subscription?
We do have a sub-section of data available, but don’t have it prior to 1960.
Have you documented the process you’ve used on this preservation project?
We did save all of the code for the project within GitHub.
We do have meeting notes which provide some documentation, though they’re not thorough.
Oh dear. Of roughly 1,155 tweets I counted about #DtMH2016 in the last week, roughly 25% came from me. #noisy
Opensource tool I had mentioned to several: @wallabagapp A self-hostable application for saving web pages https://www.wallabag.org
Notes from Day 1 of Dodging the Memory Hole: Saving Online News | Thursday, October 13, 2016
What particularly strikes me is how many of the philosophies of the IndieWeb movement and tools developed by it are applicable to some of the problems that online news faces. I suspect that if more journalists were practicing members of the IndieWeb and used their sites not only for collecting and storing the underlying data upon which they base their stories, but to publish them as well, then some of the (future) archival process may be easier to accomplish. I’ve got so many disparate thoughts running around my mind after the first day that it’ll take a bit of time to process before I write out some more detailed thoughts.
Twitter List for the Conference
As a reminder to those attending, I’ve accumulated a list of everyone who’s tweeted with the hashtag #DtMH2016, so that attendees can more easily follow each other as well as communicate online following our few days together in Los Angeles. Twitter also allows subscribing to entire lists too if that’s something in which people have interest.
Archiving the day
It seems only fitting that an attendee of a conference about saving and archiving digital news, would make a reasonable attempt to archive some of his experience right?! Toward that end, below is an archive of my tweetstorm during the day marked up with microformats and including hovercards for the speakers with appropriate available metadata. For those interested, I used a fantastic web app called Noter Live to capture, tweet, and more easily archive the stream.
Note that in many cases my tweets don’t reflect direct quotes of the attributed speaker, but are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many original words of the participant as possible. Typically, for speed, there wasn’t much editing of these notes. I’m also attaching .m4a audio files of most of the audio for the day (apologies for shaky quality as it’s unedited) which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it. Presumably they will release the video on their website for a more immersive experience.
If you prefer to read the stream of notes in the original Twitter format, so that you can like/retweet/comment on individual pieces, this link should give you the entire stream. Naturally, comments are also welcome below.
Audio Files
Below are the audio files for several sessions held throughout the day.
Greetings and Keynote
Greetings: Edward McCain, digital curator of journalism, Donald W. Reynolds Journalism Institute (RJI) and University of Missouri Libraries and Ginny Steel, university librarian, UCLA
Keynote: Digital salvage operations — what’s worth saving? given by Hjalmar Gislason, vice president of data, Qlik
Why save online news? and NewsScape
Panel: “Why save online news?” featuring Chris Freeland, Washington University; Matt Weber, Ph.D., Rutgers, The State University of New Jersey; Laura Wrubel, The George Washington University; moderator Ana Krahmer, Ph.D., University of North Texas
Presentation: “NewsScape: preserving TV news” given by Tim Groeling, Ph.D., UCLA Communication Studies Department
Born-digital news preservation in perspective
Speaker: Clifford Lynch, Ph.D., executive director, Coalition for Networked Information on “Born-digital news preservation in perspective”
Live Tweet Archive
Getting Noter Live fired up for Dodging the Memory Hole 2016: Saving Online News https://www.rjionline.org/dtmh2016
I’m glad I’m not at NBC trying to figure out the details for releasing THE APPRENTICE tapes.
Let’s thank @UCLA and the library for hosting us all.
While you’re here, don’t forget to vote/provide feedback throughout the day for IMLS
Someone once pulled up behind me and said “Hi Tiiiigeeerrr!” #Mizzou
A server at the Missourian crashed as the system was obsolete and running on baling wire. We lost 15 years of archives
The dean & head of Libraries created a position to save born digital news.
We’d like to help define stake-holder roles in relation to the problem.
Newspaper is really an outmoded term now.
I’d like to celebrate that we have 14 student scholars here today.
We’d like to have you identify specific projects that we can take to funding sources to begin work after the conference
We’ll be going to our first speaker who will be introduced by Martin Klein from Los Alamos.
Hjalmar Gislason is a self-described digital nerd. He’s the Vice President of Data.
I wonder how one becomes the President of Data?
My Icelandic name may be the most complicated part of my talk this morning.
Speaking on “Digital Salvage Operations: What’s Worth Saving”
My father in law accidentally threw away my wife’s favorite stuffed animal. #DeafTeddy
Some people just throw everything away because they’re not being used. Others keep everything and don’t throw it away.
The fundamental question: Do you want to save everything or do you want to get rid of everything?
I joined @qlik two years ago and moved to Boston.
Before that I was with spurl.net which was about saving copies of webpages they’d previously visited.
I had also previously invested in kjarninn which is translated as core.
We used to have little data, now we’re with gigantic data and moving to gargantuan data soon.
One of my goals today is to broaden our perspective about what data needs saving.
There’s the Web, the “Deep” Web, then there’s “Other” data which is at the bottom of the pyramid.
I got to see into the process of #panamapapers but I’d like to discuss the consequences from April 3rd.
The number of meetings was almost more than could have been covered in real time in Iceland.
The #panamapapers were a soap opera, much like US politics.
Looking back at the process is highly interesting, but it’s difficult to look at all the data as they unfolded.
How can we capture all the media minute by minute as a story unfolds?
You can’t trust that you can go back to a story at a certain time and know that it hasn’t been changed. #1984 #Orwell
There was a relatively pro-HRC piece earlier this year @NYTimes that was changed.
Newsdiffs tracks changes in news over time. The HRC article had changed a lot.
Let’s say you referenced @CNN 10 years ago, likely now, the CMS and the story have both changed.
8 years ago, I asked, wouldn’t we like to have the social media from Iceland’s only Nobel Laureate as a teenager?
What is private/public, ethical/unethical when dealing with data?
Much data is hidden behind passwords or on systems which are not easily accessed from a database perspective.
Most of the content published on Facebook isn’t public. It’s hard to archive in addition to being big.
We as archivists have no claim on the hidden data within Facebook.
The #indieweb could help archivists in the future in accessing more personal data.
Then there’s “other” data: 500 hours of video is uploaded to YouTube per minute.
No organization can go around watching all of this video data. Which parts are newsworthy?
Content could surface much later or could surface through later research.
Hornbjargsviti lighthouse recorded the weather every three hours for years creating lots of data.
And that was just one of hundreds of sites that recorded this type of data in Iceland.
Lots of this data is lost. Much that has been found was by coincidence. It was never thought to archive it.
This type of weather data could be very valuable to researchers later on.
There was also a large archive of Icelandic data that was found.
Showing a timelapse of Icelandic earthquakes https://vimeo.com/24442762
You can watch the magma working its way through the ground before it makes its way up through the land.
National Geographic featured this video in a documentary.
Sometimes context is important when it comes to data. What is archived today may be more important later.
As the economic crisis unfolded in Greece, it turned out the data that was used to allow them into EU was wrong.
The data was published at the time of the crisis, but there was no record of what the data looked like 5 years earlier.
The only way to recreate the data was to use prior printed sources. This is usually only done in extraordinary circumstances.
We captured 150k+ data sets with more than 8 billion “facts” which was just a tiny fraction of what exists.
How can we delve deeper into large data sets, all with different configurations and proprietary systems.
“There’s a story in every piece of data.”
Once a year energy consumption seems to dip because February has fewer days than other months. Plotting it matters.
Year over year comparisons can be difficult because of things like 3 day weekends which shift over time.
Here’s a graph of the population of Iceland. We’ve had our fair share of diseases and volcanic eruptions.
To compare, here’s a graph of the population of sheep. They outnumber us by an order(s) of magnitude.
In the 1780’s there was an event that killed off lots of sheep, so people had the upper hand.
Do we learn more from reading today’s “newspaper” or one from 30, 50, or 100 years ago?
There was a letter to the editor about an eruption and people had to move into the city.
letter: “We can’t have all these people come here, we need to build for our own people first.”
This isn’t too different from our problems today with respect to Syria. In that case, the people actually lived closer.
In the born-digital age, what will the experience look like trying to capture today 40 years hence?
Will it even be possible?
Machine data connections will outnumber “people” data connections by a factor of 10 or more very quickly.
With data, we need to analyze, store, and discard data. How do we decide in a split-second what to keep & discard?
We’re back to the father-in-law and mother-in-law question: What to get rid of and what to save?
Computing is continually beating human tasks: chess, Go, driving a car. They build on lots more experience based on data
Whoever has the most data on driving cars and landscape will be the ultimate winner in that particular space.
Data is valuable, sometimes we just don’t know which yet.
Hoarding is not a strategy.
You can only guess at what will be important.
“Commercial use in Doubt” The third sub-headline in a newspaper about an early test of television.
There’s more to it than just the web.
Hoarding isn’t a strategy really resonates with librarians, what could that relationship look like?
One should bring in data science, industry may be ahead of libraries.
Cross-disciplinary approaches may be best. How can you get a data scientist to look at your problem? Get their attention?
Peter Arnett:
There’s 60K+ books about the Viet Nam War. How do we learn to integrate what we learn after an event (like that)?
Perspective always comes with time, as additional information arrives.
Scientific papers are archived in a good way, but the underlying data is a problem.
In the future you may have the ability to add supplementary data as a supplement what appears in a book (in a better way)
Archives can give the ability to have much greater depth on many topics.
Are there any centers of excellence on the topics we’re discussing today? This conference may be IT.
We need more people that come from the technical side of things to be watching this online news problem.
Hacks/Hackers is a meetup group that takes place all over the world.
It brings the journalists and computer scientists together regularly for beers. It’s some of the outreach we need.
If you’re not interested in money, this is a good area to explore. 10 minute break.
Don’t forget to leave your thoughts on the questions at the back of the room.
We’re going to get started with our first panel. Why is it important to save online news?
I’m Matt Weber from Rutgers University, in communications.
I’ll talk about web archives and news media and how they interact.
I worked at Tribune Corp. for several years and covered politics in DC.
I wanted to study the way in which the news media is changing.
We’re increasingly seeing digital-only media with no offline surrogate.
It’s becoming increasingly difficult to do anything but look at it now as it exists.
There was no large scale online repository of online news to do research.
#OccupyWallStreet is one of the first examples of stories that exist online in occurrence and reportage.
There’s a growing need to archive content around local news particularly politics and democracy.
When there is a rich and vibrant local news environment, people are more likely to become engaged.
Local news is one of the least thought about from an archive perspective.
I’m at GWU Libraries in the scholarly technology group.
I’m involved in social feed manager which allows archivists to put together archives from social services.
Kimberly Gross, a faculty member, studies tweets of news outlets and journalists.
We created a prototype tool to allow them to collect data from social media.
In 2011, journalists were primarily using their Twitter presences to direct people to articles rather than for conversation.
We collect data of political candidates.
I’m an associate librarian representing “Documenting the Now” with WashU, UC Riverside, & UofMd
Documenting the Now revolves around Twitter documentation.
It started with the Ferguson story and documenting media, videos during the protests in the community.
What can we as memory institutions do to capture the data?
We gathered 14 million tweets relating to Ferguson within two weeks.
We tried to build a platform that others could use in the future for similar data capture relating to social.
Ethics is important in archiving this type of news data.
Digitally preserving pdfs from news organizations and hyper-local news in Texas.
We’re approaching 5 million pages of archived local news.
What is news that needs to be archived, and why?
First, what is news? The definition is unique to each individual.
We need to capture as much of the social news and social representation of news which is fragmented.
It’s an important part of society today.
We no longer produce hard copies like we did a decade ago. We need to capture the online portion.
We’d like to get the perspective of journalists, and don’t have one on the panel today.
We looked at how midterm election candidates used Twitter. Is that news itself? What tools do we use to archive it?
What does it mean to archive news by private citizens?
Twitter was THE place to find information in St. Louis during the Ferguson protests.
Local news outlets weren’t as good as Twitter during the protests.
I could hear the protest from 5 blocks away and only found news about it on Twitter.
The story was being covered very differently on Twitter than on the local (mainstream) news.
Alternate voices in the mix were very interesting and important.
Twitter was in the moment and wasn’t being edited and causing a delay.
What can we learn from this massive number of Ferguson tweets?
It gives us information about organizing, and what language was being used.
I think about the archival portion of this question. By whom does it need to be archived?
What do we archive next?
How are we representing the current population now?
Who is going to take on the burden of archiving? Should it be corporate? Cultural memory institution?
Someone needs to curate it; who does that?
Our next question: What do you view as primary barriers to news archiving?
How do we organize and staff? There’s no shortage of work.
Tools and software can help the process, but libraries are usually staffed very thinly.
No single institution can do this type of work alone. Collaboration is important.
Two barriers we deal with: terms of service are an issue with archiving. We don’t own it, but can use it.
Libraries want to own the data in perpetuity. We don’t own our data.
There’s a disconnect in some of the business models for commercialization and archiving.
Issues with accessing data.
People were worried about becoming targets or losing jobs because of participation.
What is role of ethics of archiving this type of data? Allowing opting out?
What about redacting portions? anonymizing the contributions?
Publishers have a responsibility for archiving their product. Permission from publishers can be difficult.
We have a lot of underserved communities. What do we do with comments on stories?
Corporations may not continue to exist in the future and data will be lost.
There’s a balance to be struck between the business side and the public good.
It’s hard to convince for profit about the value of archiving for the social good.
Next Q: What opportunities have revealed themselves in preserving news?
Finding commonalities and differences in projects is important.
What does it mean to us to archive different media types? (think diversity)
What’s happening in my community? in the nation? across the world?
The long-history in our archives will help us learn about each other.
We can only do so much with the resources we have.
We’ve worked on a cyber cemetery product in the past.
Someone else can use the tools we create within their initiatives.
Repeating question: What are the issues in archiving longer-form video data with regard to stories on Periscope?
How do you channel the energy around archiving news archiving?
Research in the area is all so new.
Does anyone have any experience with legal wrangling with social services?
The ACLU is waging a lawsuit against Twitter about archived tweets.
Outreach to community papers is very rhizomic.
How do you take local examples and make them a national model?
We’re teenagers now in the evolution of what we’re doing.
Peter Arnett just said “This is all more interesting than I thought it would be.”
Next Presentation: NewsScape: preserving TV news
I’ll be talking about the NewsScape project of Francis Steen, Director, Communication Studies Archive
I’m leading the archiving of the analog portion of the collection.
The oldest of our collection dates from the 1950’s. We’ve hosted them on YouTube which has created some traction.
Commenters have been an issue with posting to YouTube as well as copyright.
NewsScape is the largest collection of TV news and public affairs programs (local & national)
Prior to 2006, we don’t know what we’ve got.
Paul said “I’ll record everything I can and someone in the future can deal with it.”
We have 50K hours of Betamax.
VHS are actually most threatened, despite being newest tapes.
Our budget was seriously strapped.
Maintaining closed captioning is important to our archiving efforts.
We’ve done 36k hours of encoding this year.
We use a layer of dead VCR’s over our good VCR’s to prevent RF interference and audio buzzing. 🙂
Post-2006 We’re now doing straight to digital
Preservation is the first step, but we need to be more than the world’s best DVR.
Searching the news is important too.
Showing a data visualization of news analysis with regard to the Healthcare Reform movement.
We’re doing facial analysis as well.
We have interactive tools at viz2016.com.
We’ve tracked how often candidates have smiled in election 2016. Hillary > Trump
We want to share details within our collection, but don’t have tools yet.
Having a good VCR repairman has helped us a lot.
Breaking for lunch…
Talk “Born-digital news preservation in perspective”
There’s a shared consensus that preserving scholarly publications is important.
While delivery models have shifted, there must be some fall back to allow content to survive publisher failure.
Preservation was a joint investment between memory institutions and publishers.
Keepers register their coverage of journals for redundancy.
In studying coverage, we’ve discovered Elsevier is REALLY well covered, but they’re not what we’re worried about.
It’s the small journals as edge cases that really need more coverage.
Smaller journals don’t have resources to get into the keeper services and it’s more expensive.
Many Open Access Journals are passion projects and heavily underfunded and they are poorly covered.
Being mindful of these business dynamics is key when thinking about archiving news.
There are a handful of large news outlets that are “too big to fail.”
There are huge numbers of small outlets like subject verticals, foreign diasporas, etc. that need to be watched
Different strategies should be used for different outlets.
The material on lots of links (as sources) disappears after a short period of time.
While Archive.org is a great resource, it can’t do everything.
Preserving underlying evidence is really important.
How we deal with massive databases and queries against them are a difficult problem.
I’m not aware of studies of link rot with relationship to online news.
Who steps up to preserve major data dumps like Snowden, PanamaPapers, or email breaches?
Social media is a collection of observations and small facts without necessarily being journalism.
Journalism is a deliberate act and is meant to be public while social media is not.
We need to come up with a consensus about what parts of social media should be preserved as news..
News does often delve into social media as part of its evidence base now.
Responsible journalism should include archival storage, but it doesn’t yet.
Under current law, we can’t protect a lot of this material without the permission of the creator(s).
The Library of Congress can demand deposit, but doesn’t.
With funding issues, I’m not wild about the Library of Congress being the only entity [for storage.]
In the UK, there are multiple repositories.
testing to see if I’m still live
What happens if you livetweet too much in one day.
Homebrew Website Club — Los Angeles
We had our largest RSVP list to date, though some had last minute issues pop up and one sadly had trouble finding the location (likely due to a Google map glitch).
Angelo and Chris met before the quiet writing hour to discuss some general planning for future meetings as well as the upcoming IndieWebCamp in LA in November. Details and help for arrangements for out of town attendees should be posted shortly.
Notes from the “broadcast” portion of the meetup
Chris Aldrich (co-organizer)
- Still working on a workflow for owning all of his reading related data, particularly with respect to POSSE to GoodReads.com
- Registered for the upcoming Dodging the Memory Hole 2016: Saving Online News on October 13/14 at UCLA which has the flavor of IndieWeb as well as the Decentralized Web movements.
Angelo Gladding (co-organizer)
- Work is proceeding nicely on the overall build of Canopy
- Discussed an issue with expanding data for social network in relation to events and potentially expanding contacts based on event attendees
Srikanth Bangalore (our host at Yahoo!)
- Discussed some of his background in coding and work with Drupal and WordPress.
- His personal site is https://srib.us/
Notes from the “working” portion of the meetup
We sketched out a way to help Srikanth IndieWeb-ify not only his own site, but to potentially help do so for Katie Couric’s Yahoo!-based news site, along with the pros/cons of workflows for journalists in general. We also considered some potential pathways for bolting webmentions onto websites (like Tumblr/WordPress) which utilize Disqus for their commenting system. We worked through the details of webmentions and a bit of Micropub for his benefit.
Srikanth discussed some of the history and philosophy behind why Tumblr didn’t have a more “traditional” native commenting system. The point was generally to socially discourage negativity, spamming, and abuse by forcing people to post their comments front and center on their own site (and not just in the “comments” of the receiving site), thereby making any negativity redound to their own reputation rather than just to the receiving page of the target. Most social media related sites hide (or make hard to search/find) the abusive nature of most users, while allowing them to appear better/nicer on their easier-to-find public-facing persona.
Before closing out the meeting officially, we stopped by the front lobby where two wonderful and personable security guards (one a budding photographer) not only helped us with a group photo, but managed to help us escape the parking lot!
I think it’s agreed we all had a great time and look forward to more progress on projects, more good discussion, and more interested folks at the next meeting. Srikanth was so amazed at some of the concepts, it’s possible that all of Yahoo! may be IndieWeb-ified by the end of the week. 🙂
We hope you’ll join us next month on 10/05! (Details forthcoming…)
Live Tweets Archive
Ever with grand aspirations to do as good a job as the illustrious Kevin Marks, we tried some livetweeting with Noter Live. Alas, the discussion quickly became so consuming that the effort was abandoned in favor of both passion and fun. Hopefully some of the salient points were captured above in better form anyway.
I only use @drupal when I want to make money. (Replying to why his personal site was on @wordpress.) #
(This CMS comment may have been the biggest laugh of the night, though the tone captured here (and the lack of context), doesn’t do the comment any justice at all.)
I’m a hobby-ist programmer, but I also write code to make money. #
I’m into python which is my language of choice. #
Thanks again @themarketeng for hosting Homebrew Website Club at Yahoo tonight! We really appreciate the hospitality. #
As of 9/13/16 I’m beginning to own all of my reading data into & from @GoodReads 📚 #IndieWeb
Attending WordCamp Los Angeles

Instagram filter used: Clarendon
Photo taken at: California State University, Los Angeles
My first pull request
Oddly, I had seen the VERY same post/repo a few weeks back and meant to add a readme too! (You’ll notice I got too wrapped up in reading through the code and creating some usability issues after installing the plugin instead.)
Given that you’ve got your own domain and website (and playing in ed/tech like many of us are), and you’re syndicating your blog posts out to Medium for additional reach, I feel compelled to mention some interesting web tech and philosophy in the #IndieWeb movement. You can find some great resources and tools at their website.
In particular, you might take a look at their WordPress pages which includes some plugins and resources you’ll be sure to appreciate. One of their sets of resources is allowing you to not only syndicate your WP posts (what they call POSSE), but by using the new W3C webmention spec, you can connect many of your social media resources to brid.gy and have services like twitter, facebook, G+, instagram and others send the comments and likes on your posts there back to your blog directly, thereby allowing you to own all of your data (as well as the commentary that occurs elsewhere). I can see a lot of use for education in some of the infrastructure they’re building and aggregating there. (If you’re familiar with Known, they bake a lot of Indieweb goodness into their system from the start, but there’s no reason you shouldn’t have it for your WordPress site as well.)
If you need any help/guidance in following/installing anything there, I’m happy to help.
Congratulations again. Keep on pullin’!
Instagram Single Photo Bookmarklet
The following javascript-based bookmarklet is courtesy of Tantek Çelik as an Indieweb tool he built at IndieWebCamp NYC2:
If you view a single photo permalink page, the following bookmarklet will extract the permalink (trimmed), photo jpg URL, and photo caption and copy them into a text note, suitable for posting as a photo that’s auto-linked:
javascript:n=document.images.length-1;s=document.images[n].src;s=s.split('?');s=s[0];u=document.location.toString().substring(0,39);prompt('Choose "Copy ⌘C" to copy photo post:',s+' '+u+'\n'+document.images[n].alt.toString().replace(RegExp(/\.\n(\.\n)+/),'\n'))
Any questions, let me know! –Tantek
If you want an easy drag-and-drop version, just drag the button below into your browser’s bookmark bar.
Editor’s note: Though we’ll try to keep the code in this bookmarklet updated, the most recent version can be found on the Indieweb wiki through the link above.
Representing Indieweb at DrupalCampLA
Reply to Scott Kingery about Wallabag and Reading
I also feel that one needs the right tool for the right job. While I like WordPress for many things, it’s not always the best thing to solve the problem. In some cases Drupal or even lowly Wix may be the best solution. The key is to find the right balance of time, knowledge, capability and other variables to find the optimal solution for the moment, while maintaining the ability to change in the future if necessary. By a similar analogy there are hundreds of programming languages and all have their pros and cons. Often the one you know is better than nothing, but if you heard about one that did everything better and faster, it would be a shame not to check it out.
This said, I often prefer to go with specialist software, though I do usually have a few requirements which overlap or align with Indieweb principles, including, but not limited to:
- It should be open, so I can modify/change/share it with others
- I should be able to own all the related/resultant data
- I should be able to self-host it (if I want)
- It should fit into my workflow and solve a problem I have while not creating too many new problems
In this case, I suspect that Wallabag is far better than anything I might have time to build and maintain myself. If there are bits of functionality that are missing, I can potentially request them or build/add them myself and contribute back to the larger good.
Naturally, I also worry about usability and maintenance, so if the general workflow and overhead doesn’t dovetail with my other use cases, all bets may be off. If large pieces of my data, functionality, and workflow are housed in WordPress, for example, and something like this isn’t easily integrated or is very difficult to keep updated and maintained, then I’ll pass and look for (or build) a different solution. (Not every tool is right for just any job.) On larger projects like this, there’s also the happy serendipity that they’re big enough that WordPress (Drupal, Jekyll, other) developers can better shoehorn the functionality into a bigger project or create a simple API, thereby making the whole more valuable than the sum of the parts.
In this particular situation, it appears to be a 1-1 replacement for a closed silo version of something I’ve been using regularly, but which provides more of the benefits above than the silo does, so it seems like a no-brainer to switch.
To reply to this comment preferably do so on the original at: