Personal and private Web archives are proliferating due to the increase in tools to create them and the realization that the Internet Archive and other public Web archives are unable to capture personalized (e.g., Facebook) and private (e.g., banking) Web pages. We introduce a framework to mitigate issues of aggregation across private, personal, and public Web archives without compromising potentially sensitive information contained in private captures. We amend Memento syntax and semantics to allow TimeMap enrichment, so that additional attributes can be expressed, including those required for dereferencing private Web archive captures. We provide a method to involve the user further in the negotiation of archival captures in dimensions beyond time. We introduce a model for archival querying precedence and short-circuiting, as needed when aggregating private and personal Web archive captures with those from public Web archives through Memento. Negotiation of this sort is novel to Web archiving and allows for the more seamless aggregation of various types of Web archives to convey a more accurate picture of the past Web.
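For context, a Memento TimeMap is serialized in the application/link-format, and the enrichment the abstract describes amounts to adding non-temporal attributes to each memento link. The sketch below is purely illustrative and not the paper's actual syntax: the archive hosts and the `private` attribute are hypothetical, showing how a capture from a locally hosted private archive might sit alongside public captures.

```
<http://example.com/page>; rel="original",
<http://aggregator.example/timemap/http://example.com/page>; rel="self";
  type="application/link-format",
<http://archive.example/20180417120000/http://example.com/page>; rel="memento";
  datetime="Tue, 17 Apr 2018 12:00:00 GMT",
<http://localhost:8080/20180501093000/http://example.com/page>; rel="memento";
  datetime="Tue, 01 May 2018 09:30:00 GMT"; private="true"
```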
These 16,000 BBC Sound Effects are made available by the BBC in WAV format to download for use under the terms of the RemArc Licence. The Sound Effects are BBC copyright, but they may be used for personal, educational or research purposes, as detailed in the license.
Access dataset metadata by visiting our dedicated LOD site. If you have any queries regarding usage, please contact jake.berger at bbc.co.uk
Because everyone should be able to hear what a Creed tape-printing telegraph sounds like when it’s operating (c.1928-1952).
h/t to @BBCArchive
Attention all #soundeffects enthusiasts!
Over 16,000 classic BBC Archive sound effects and field recordings, from air raids to zebras, are available on the BBC Sound Effects Beta: https://t.co/pO6Ke42yz8
FREE to listen or download and reuse for non-commercial purposes. pic.twitter.com/SWg17zDg6S
— BBC Archive (@BBCArchive) April 17, 2018
I’ve just spent an inordinate amount of time creating an archive of all my past online writing work, in particular of the tech blog I founded ReadWriteWeb. I thought I’d outline my reasons for doing this, and why I ended up relying heavily on the Internet Archive instead of the original website sources.
Journalists, take note of how Richard MacManus created an online archive of his writing work!
I’m sure it took a tremendous amount of work given his long history of writing, but he’s now got a great archive as well as a nearly complete online portfolio of his work. If you haven’t done this or have just started out, here are some potentially useful resources to guide your thoughts.
I’m curious how others are doing this type of online archive. Feel free to share your methods.
Linkrot and the lack of permanence on the web is a recurring theme for this blog. In the final days as App.net was winding down, I wanted to put my money where my mouth was. I spun up a couple new servers and wrote a set of scripts to essentially download every post on App.net. It feels like a fragile archive, put together hastily, but I believe it’s mostly complete. I’ve also downloaded thumbnail versions of some of the public photos hosted on App.net.
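The core of scripts like these is just walking a paginated API backwards in time and writing each page to disk before the service goes dark. A minimal sketch of that loop, with the fetching abstracted out since App.net's endpoints are now offline (the endpoint and parameter names in the comment are recalled from memory and may not be exact):

```python
import json
from pathlib import Path

def archive_feed(fetch_page, out_dir):
    """Walk a paginated feed backwards in time, saving each page to disk.

    fetch_page(before_id) must return a tuple (posts, min_id, more):
      posts  -- list of post dicts for this page
      min_id -- oldest id on the page, passed back as before_id next time
      more   -- False once the feed is exhausted
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    before_id, page_num, saved = None, 0, 0
    more = True
    while more:
        posts, before_id, more = fetch_page(before_id)
        # One JSON file per page keeps the archive simple to resume and verify.
        (out / f"page-{page_num:05d}.json").write_text(json.dumps(posts, indent=2))
        saved += len(posts)
        page_num += 1
    return saved

# While App.net was still up, fetch_page might have wrapped a call like
#   requests.get("https://api.app.net/users/me/posts",
#                params={"before_id": before_id, "count": 200})
# plus a short sleep to respect rate limits.
```

Keeping the pagination logic separate from the HTTP call also makes a hasty archive like this easier to test before pointing it at a dying service.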
In a project which I started just before IndieWebCamp LA in November, I’ve moved a big step closer to perfecting my “Read” posts!
Thanks in large part to WordPress, PressForward, friends and help on the IndieWeb site too numerous to count, and a little bit of elbow grease, I can now receive and read RSS feeds in my own website UI (farewell Feedly), bookmark posts I want to read later (so long Pocket, Instapaper, Delicious and Pinboard), mark them as read when done, archive them on my site (and hopefully on the Internet Archive as well) for future reference, highlight and annotate them (I still love you hypothes.is, but…), and even syndicate (POSSE) them automatically (with emoji) to silos like Facebook, Twitter (with Twitter Cards), Tumblr, Flipboard, LinkedIn, Pinterest, StumbleUpon, Reddit, and Delicious among others.
Syndicated copies in the silos when clicked will ping my site for a second and then automatically redirect to the canonical URL for the original content to give the credit to the originating author/site. And best of all, I can still receive comments, likes, and other responses from the siloed copies via webmention to stay in the loop on the conversations they generate without leaving my site.
Here’s an example of a syndicated post to Twitter:
👓 Physicists Uncover Geometric ‘Theory Space’ | Quanta Magazine https://t.co/HuKg1d4a80
— ChrisAldrich (@ChrisAldrich) February 23, 2017
Tweetstorms and Journalism
Tweetstorms have been getting a horrific reputation lately. But used properly, they can sometimes have an excellent and beneficial effect. In fact, recently I’ve seen some journalists using them for both marketing and on-the-spot analysis in their areas of expertise. Even today Aram Zucker-Scharff, a journalism critic, suggests in his own tweetstorm that this UI form may have an interesting use case for news outlets like CNN, which make multiple changes to a news story that lives at one canonical (and often not quickly enough archived) URL but which is unlikely to be visited multiple times:
Why not publish a sequence of small stories that connect together rather than one big one on the same URL that keeps changing?
— Aram Zucker-Scharff (@Chronotope) February 10, 2017
A newsstorm-type user experience could better lay out the ebb and flow of a particular story over time and prevent the loss of data, context, and even timeframe that otherwise occurs on news websites that regularly update content on the same URL. (Though there are a few tools in the genre like Memento which could potentially be useful.)
It’s possible that tweetstorms could even be useful for world leaders who lack the focus to read full sentences formed into paragraphs, and possibly even multiple paragraphs that run long enough to comprise articles, research documents, or even books. I’m not holding my breath though.
Technical problems for tweetstorms
But the big problem with tweetstorms–even when they’re done well and without manthreading–is actually publishing them quickly and without losing one’s train of thought between one tweet and the next.
Noter Live–the solution!
Last week this problem disappeared: I think Noter Live has just become the best-in-class tool for tweetstorms.
Noter Live was already the go-to tool for live tweeting at conferences, symposia, workshops, political debates, public fora, and even live cultural events like the Super Bowl or the Academy Awards. But with a few simple tweaks Kevin Marks, the king of covering conferences live on Twitter, has just updated it in a way that allows one to strip off the name of the speaker so that an individual can type in their own stream of consciousness simply and easily.
But wait! It has an all-important added bonus feature in addition to the fact that it automatically creates the requisite linked string of tweets for easier continuous threaded reading on Twitter…
Bonus tip, after you’ve saved the entire stream on your own site, why not tweet out the URL permalink to the post as the last in the series? It’ll probably be a nice tweak on the nose that those who just read through a string of 66 tweets over the span of 45 minutes were waiting for!
So the next time you’re at a conference or just in the mood to rant, remember Noter Live is waiting for you.
Aside: I really wonder how it is that Twitter hasn’t created the ability (UX/UI) to easily embed an entire tweetstorm in one click? It would be a great boon to online magazines and newspapers, which increasingly cut and paste tweets to build articles around. Instead most sites just do an atrocious job of cutting and pasting dozens to hundreds of tweets in a long line to try to tell these stories.
Once lost, this eight-minute, very damaged, but very delightful silent version of Alice in Wonderland was restored several years ago by the British Film Institute. It is the first film adaptation of the 1865 Lewis Carroll classic. And at the time, its original length of 12 minutes (only eight of which survive) made it the longest film to come out of the nascent British film industry.
GitHub have published some guidance on persistence and archiving of repositories for academics https://help.github.com/articles/about-archiving-content-and-data-on-github/ #openscience
The crowd from Dodging the Memory Hole are sure to find this interesting!
Posters from the rally in Boston will be cataloged and archived.
Signs line the fence surrounding Boston Common after the Boston Women’s March for America on Saturday. Some of those signs could end up in an archive at Northeastern U.
The signs were pink, blue, black, white. Some were hoisted with wooden sticks, and others were held in protesters’ hands. A few sparkled with glitter, and some had original designs, created on computers with the help of a few internet memes.
Still, at the Boston Women’s March for America on Saturday, hundreds of the signs criticizing President Trump’s campaign promises and administrative agenda ended up wrapped around the fence near Boston Common, laid down like a carpet covering the sidewalk. Continue reading “In Discarded Women’s March Signs, Professors Saw a Chance to Save History | The Chronicle of Higher Education”
While I was updating Indieweb/site-deaths, I was reminded to download my . It sold to Twitter almost two years ago this week and has been largely inactive since.
It includes some of the earliest photos I ever took and posted online via mobile phone. Looking at the quality, it’s interesting to see how far we’ve come. It’s also obvious why photo filters became so popular.
Live Tweeting and Twitter Lists
While attending the upcoming conference Dodging the Memory Hole 2016: Saving Online News later this week, I’ll make an attempt to live Tweet as much as possible. (If you’re following me on Twitter on Thursday and Friday and find me too noisy, try using QuietTime.xyz to mute me on Twitter temporarily.) I’ll be using Kevin Marks’ excellent Noter Live web app to both send out the tweets as well as to store and archive them here on this site thereafter (kind of like my own version of Storify.)
In getting ramped up to live Tweet it, it helps significantly to have a pre-existing list of attendees (and remote participants) talking about #DtMH2016 on Twitter, so I started creating a Twitter list by hand. I realized that it would be nice to have a little bot to catch others as the week progresses. Ever lazy, I turned to IFTTT.com to see if something already existed, and sure enough there’s a Twitter search with a trigger that will allow one to add people who mention a particular hashtag to a Twitter list automatically.
Here’s the resultant list, which should grow as the event unfolds throughout the week:
🔖 People on Twitter talking about #DtMH2016
Feel free to follow or subscribe to the list as necessary. Hopefully this will make attending the conference more fruitful for those there live as well as remote.
Not on the list? Just tweet a (non-private) message with the conference hashtag: #DTMH2016 and you should be added to the list shortly.
IFTTT Recipe for Creating Twitter Lists of Conference Attendees
For those interested in creating their own Twitter lists for future conferences (and honestly the hosts of all conferences should do this as they set up their conference hashtag and announce the conference), below is a link to the ifttt.com recipe I created for this, but which can be modified for use by others.
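For those who would rather script this than use IFTTT, the logic is small: search the hashtag, then add each not-yet-seen author to the list. A minimal sketch, with the Twitter calls abstracted out since API access requirements have changed considerably since 2016 (the tweepy wiring in the docstring is an assumption about how one might plug it in, not tested code):

```python
def sync_hashtag_to_list(tweets, add_member, already_on_list=()):
    """Add each unique author from a hashtag search to a Twitter list.

    tweets: iterable of tweet objects exposing .user.id
    add_member: callable taking a user id -- e.g. a small wrapper around
        tweepy's api.add_list_member(list_id=..., user_id=...)
        (hypothetical wiring; credentials and access tiers have changed)
    already_on_list: user ids to skip, so reruns don't re-add anyone
    Returns the set of user ids newly added.
    """
    seen = set(already_on_list)
    added = set()
    for tweet in tweets:
        uid = tweet.user.id
        if uid not in seen:
            add_member(uid)
            seen.add(uid)
            added.add(uid)
    return added
```

Run on a schedule during the conference, this behaves much like the IFTTT recipe: each pass picks up only the new people tweeting the hashtag.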
Naturally, it would also be nice if, as people registered for conferences, they were asked for their Twitter handles and websites so that the information could be used to create such online lists to help create longer lasting relationships both during the event and afterwards as well. (Naturally providing these details should be optional so that people who wish to maintain their privacy could do so.)