What particularly strikes me is how many of the philosophies of the IndieWeb movement, and the tools it has developed, are applicable to the problems that online news faces. I suspect that if more journalists were practicing members of the IndieWeb and used their sites not only for collecting and storing the underlying data upon which they base their stories, but for publishing them as well, then some of the (future) archival process might be easier to accomplish. I’ve got so many disparate thoughts running around my mind after the first day that it’ll take a bit of time to process them before I write out more detailed thoughts.
Twitter List for the Conference
As a reminder to those attending, I’ve accumulated a list of everyone who’s tweeted with the hashtag #DtMH2016 so that attendees can more easily follow each other and communicate online following our few days together in Los Angeles. Twitter also allows subscribing to entire lists, if that’s something in which people have interest.
Archiving the day
It seems only fitting that an attendee of a conference about saving and archiving digital news would make a reasonable attempt to archive some of his experience, right?! Toward that end, below is an archive of my tweetstorm during the day, marked up with microformats and including hovercards for the speakers with appropriate available metadata. For those interested, I used a fantastic web app called Noter Live to capture, tweet, and more easily archive the stream.
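For those curious what that markup looks like under the hood, here’s a minimal sketch of how a single note in the stream below might be marked up as a microformats2 h-entry with a nested h-card for the speaker. The class names are the standard microformats2 vocabulary, but the URLs, timestamp, and exact structure are illustrative placeholders rather than Noter Live’s literal output:

```html
<!-- One livetweet from the stream as a microformats2 h-entry.
     The nested h-card for the speaker is what powers the hovercards;
     all URLs and the timestamp below are placeholders. -->
<article class="h-entry">
  <p class="p-author h-card">
    <img class="u-photo" src="https://example.com/photos/hgislason.jpg" alt="" />
    <a class="p-name u-url" href="https://example.com/hgislason">Hjalmar Gislason</a>
  </p>
  <blockquote class="p-name e-content">
    Data is valuable, sometimes we just don’t know which yet. #DtMH2016
  </blockquote>
  <a class="u-url" href="https://twitter.com/ChrisAldrich/status/0000000000000">
    <time class="dt-published" datetime="2016-10-13T11:34:00-07:00">October 13, 2016</time>
  </a>
</article>
```

Parsers and hovercard tools that understand microformats2 can then extract the author, content, permalink, and publication time from those classes without needing any separate API.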
Note that in many cases my tweets don’t reflect direct quotes of the attributed speaker, but are often slightly modified for clarity and length for posting to Twitter. I have made a reasonable attempt in all cases to capture the overall sentiment of individual statements while using as many of the participant’s original words as possible. Typically, for speed, there wasn’t much editing of these notes. I’m also attaching .m4a files of most of the day’s audio (apologies for the shaky quality, as it’s unedited), which can be used for more direct attribution if desired. The Reynolds Journalism Institute videotaped the entire day and livestreamed it; presumably they will release the video on their website for a more immersive experience.
If you prefer to read the stream of notes in the original Twitter format, so that you can like/retweet/comment on individual pieces, this link should give you the entire stream. Naturally, comments are also welcome below.
Audio Files
Below are the audio files for several sessions held throughout the day.
Greetings and Keynote
Greetings: Edward McCain, digital curator of journalism, Donald W. Reynolds Journalism Institute (RJI) and University of Missouri Libraries and Ginny Steel, university librarian, UCLA
Keynote: Digital salvage operations — what’s worth saving? given by Hjalmar Gislason, vice president of data, Qlik
Why save online news? and NewsScape
Panel: “Why save online news?” featuring Chris Freeland, Washington University; Matt Weber, Ph.D., Rutgers, The State University of New Jersey; Laura Wrubel, The George Washington University; moderator Ana Krahmer, Ph.D., University of North Texas
Presentation: “NewsScape: preserving TV news” given by Tim Groeling, Ph.D., UCLA Communication Studies Department
Born-digital news preservation in perspective
Speaker: Clifford Lynch, Ph.D., executive director, Coalition for Networked Information on “Born-digital news preservation in perspective”
Live Tweet Archive
Getting Noter Live fired up for Dodging the Memory Hole 2016: Saving Online News https://www.rjionline.org/dtmh2016
I’m glad I’m not at NBC trying to figure out the details for releasing THE APPRENTICE tapes.
Let’s thank @UCLA and the library for hosting us all.
While you’re here, don’t forget to vote/provide feedback throughout the day for IMLS
Someone once pulled up behind me and said “Hi Tiiiigeeerrr!” #Mizzou
A server at the Missourian crashed as the system was obsolete and running on baling wire. We lost 15 years of archives
The dean & head of Libraries created a position to save born-digital news.
We’d like to help define stakeholder roles in relation to the problem.
Newspaper is really an outmoded term now.
I’d like to celebrate that we have 14 student scholars here today.
We’d like to have you identify specific projects that we can take to funding sources to begin work after the conference
We’ll be going to our first speaker who will be introduced by Martin Klein from Los Alamos.
Hjalmar Gislason is a self-described digital nerd. He’s the Vice President of Data.
I wonder how one becomes the President of Data?
My Icelandic name may be the most complicated part of my talk this morning.
Speaking on “Digital Salvage Operations: What’s Worth Saving?”
My father in law accidentally threw away my wife’s favorite stuffed animal. #DeafTeddy
Some people just throw everything away because it’s not being used. Others keep everything and never throw anything away.
The fundamental question: Do you want to save everything or do you want to get rid of everything?
I joined @qlik two years ago and moved to Boston.
Before that I was with spurl.net, which let users save copies of webpages they’d previously visited.
I had also previously invested in kjarninn which is translated as core.
We used to have little data, now we’re with gigantic data and moving to gargantuan data soon.
One of my goals today is to broaden our perspective about what data needs saving.
There’s the Web, the “Deep” Web, then there’s “Other” data which is at the bottom of the pyramid.
I got to see into the process of #panamapapers but I’d like to discuss the consequences from April 3rd.
The number of meetings was almost more than could have been covered in real time in Iceland.
The #panamapapers were a soap opera, much like US politics.
Looking back at the process is highly interesting, but it’s difficult to look at all the data as it unfolded.
How can we capture all the media minute by minute as a story unfolds?
You can’t trust that you can go back to a story at a certain time and know that it hasn’t been changed. #1984 #Orwell
There was a relatively pro-HRC piece earlier this year @NYTimes that was changed.
Newsdiffs tracks changes in news over time. The HRC article had changed a lot.
Let’s say you referenced @CNN 10 years ago, likely now, the CMS and the story have both changed.
8 years ago, I asked, wouldn’t we like to have the social media from Iceland’s only Nobel Laureate as a teenager?
What is private/public, ethical/unethical when dealing with data?
Much data is hidden behind passwords or on systems which are not easily accessed from a database perspective.
Most of the content published on Facebook isn’t public. It’s hard to archive in addition to being big.
We as archivists have no claim on the hidden data within Facebook.
The #indieweb could help archivists in the future in accessing more personal data.
Then there’s “other” data: 500 hours of video is uploaded to YouTube per minute.
No organization can go around watching all of this video data. Which parts are newsworthy?
Content could surface much later or could surface through later research.
Hornbjargsviti lighthouse recorded the weather every three hours for years creating lots of data.
And that was just one of hundreds of sites that recorded this type of data in Iceland.
Lots of this data is lost. Much that has been found was by coincidence. It was never thought to archive it.
This type of weather data could be very valuable to researchers later on.
There was also a large archive of Icelandic data that was found.
Showing a timelapse of Icelandic earthquakes https://vimeo.com/24442762
You can watch the magma working its way through the ground before it makes its way up through the land.
National Geographic featured this video in a documentary.
Sometimes context is important when it comes to data. What is archived today may be more important later.
As the economic crisis unfolded in Greece, it turned out the data that was used to allow them into the EU was wrong.
The data was published at the time of the crisis, but there was no record of what the data looked like 5 years earlier.
The only way to recreate the data was to go back to prior printed sources. This is usually only done in extraordinary circumstances.
We captured 150k+ data sets with more than 8 billion “facts” which was just a tiny fraction of what exists.
How can we delve deeper into large data sets, all with different configurations and proprietary systems?
“There’s a story in every piece of data.”
Once a year energy consumption seems to dip because February has fewer days than other months. Plotting it matters.
Year over year comparisons can be difficult because of things like 3 day weekends which shift over time.
Here’s a graph of the population of Iceland. We’ve had our fair share of diseases and volcanic eruptions.
To compare, here’s a graph of the population of sheep. They outnumber us by an order of magnitude or more.
In the 1780s there was an event that killed off lots of sheep, so people had the upper hand.
Do we learn more from reading today’s “newspaper” or one from 30, 50, or 100 years ago?
There was a letter to the editor about an eruption and people had to move into the city.
letter: “We can’t have all these people come here, we need to build for our own people first.”
This isn’t too different from our problems today with respect to Syria. In that case, the people actually lived closer.
In the born-digital age, what will the experience look like trying to capture today 40 years hence?
Will it even be possible?
Machine data connections will outnumber “people” data connections by a factor of 10 or more very quickly.
With data, we need to analyze, store, and discard. How do we decide in a split-second what to keep & discard?
We’re back to the father-in-law and mother-in-law question: What to get rid of and what to save?
Computers are continually beating humans at tasks: chess, Go, driving a car. They build on lots more experience based on data.
Whoever has the most data on driving cars and landscape will be the ultimate winner in that particular space.
Data is valuable, sometimes we just don’t know which yet.
Hoarding is not a strategy.
You can only guess at what will be important.
“Commercial Use in Doubt” was the third sub-headline in a newspaper story about an early test of television.
There’s more to it than just the web.
“Hoarding is not a strategy” really resonates with librarians; what could that relationship look like?
One should bring in data science; industry may be ahead of libraries.
Cross-disciplinary approaches may be best. How can you get a data scientist to look at your problem? Get their attention?
Peter Arnett:
There are 60K+ books about the Vietnam War. How do we learn to integrate what we learn after an event (like that)?
Perspective always comes with time, as additional information arrives.
Scientific papers are archived in a good way, but the underlying data is a problem.
In the future you may have the ability to add supplementary data to what appears in a book (in a better way).
Archives can give the ability to have much greater depth on many topics.
Are there any centers of excellence on the topics we’re discussing today? This conference may be IT.
We need more people that come from the technical side of things to be watching this online news problem.
Hacks/Hackers is a meetup group that takes place all over the world.
It brings the journalists and computer scientists together regularly for beers. It’s some of the outreach we need.
If you’re not interested in money, this is a good area to explore. 10 minute break.
Don’t forget to leave your thoughts on the questions at the back of the room.
We’re going to get started with our first panel. Why is it important to save online news?
I’m Matt Weber from Rutgers University, in communications.
I’ll talk about web archives and news media and how they interact.
I worked at Tribune Corp. for several years and covered politics in DC.
I wanted to study the way in which the news media is changing.
We’re increasingly seeing digital-only media with no offline surrogate.
It’s becoming increasingly difficult to do anything but look at it now, as it exists.
There was no large scale online repository of online news to do research.
#OccupyWallStreet is one of the first examples of stories that exist online in occurrence and reportage.
There’s a growing need to archive content around local news particularly politics and democracy.
When there is a rich and vibrant local news environment, people are more likely to become engaged.
Local news is one of the least thought about from an archive perspective.
I’m at GWU Libraries in the scholarly technology group.
I’m involved in social feed manager which allows archivists to put together archives from social services.
Kimberly Gross, a faculty member, studies tweets of news outlets and journalists.
We created a prototype tool to allow them to collect data from social media.
In 2011, journalists were primarily using their Twitter presences to direct people to articles rather than for conversation.
We collect data of political candidates.
I’m an associate librarian, representing “Documenting the Now” with WashU, UC Riverside, & UofMd.
Documenting the Now revolves around Twitter documentation.
It started with the Ferguson story and documenting media, videos during the protests in the community.
What can we as memory institutions do to capture the data?
We gathered 14 million tweets relating to Ferguson within two weeks.
We tried to build a platform that others could use in the future for similar data capture relating to social.
Ethics is important in archiving this type of news data.
Digitally preserving PDFs from news organizations and hyper-local news in Texas.
We’re approaching 5 million pages of archived local news.
What is news that needs to be archived, and why?
First, what is news? The definition is unique to each individual.
We need to capture as much of the social news and social representation of news which is fragmented.
It’s an important part of society today.
We no longer produce hard copies like we did a decade ago. We need to capture the online portion.
We’d like to get the perspective of journalists, and don’t have one on the panel today.
We looked at how midterm election candidates used Twitter. Is that news itself? What tools do we use to archive it?
What does it mean to archive news by private citizens?
Twitter was THE place to find information in St. Louis during the Ferguson protests.
Local news outlets weren’t as good as Twitter during the protests.
I could hear the protest from 5 blocks away and only found news about it on Twitter.
The story was being covered very differently on Twitter than on the local (mainstream) news.
Alternate voices in the mix were very interesting and important.
Twitter was in the moment and wasn’t being edited and causing a delay.
What can we learn from this massive number of Ferguson tweets?
It gives us information about organizing, and what language was being used.
I think about the archival portion of this question. By whom does it need to be archived?
What do we archive next?
How are we representing the current population now?
Who is going to take on the burden of archiving? Should it be corporate? Cultural memory institution?
Someone needs to curate it; who does that?
Our next question: What do you view as primary barriers to news archiving?
How do we organize and staff? There’s no shortage of work.
Tools and software can help the process, but libraries are usually staffed very thinly.
No single institution can do this type of work alone. Collaboration is important.
Two barriers we deal with: terms of service are an issue with archiving. We don’t own it, but can use it.
Libraries want to own the data in perpetuity. We don’t own our data.
There’s a disconnect in some of the business models for commercialization and archiving.
Issues with accessing data.
People were worried about becoming targets or losing jobs because of participation.
What is role of ethics of archiving this type of data? Allowing opting out?
What about redacting portions? anonymizing the contributions?
Publishers have a responsibility for archiving their product. Permission from publishers can be difficult.
We have a lot of underserved communities. What do we do with comments on stories?
Corporations may not continue to exist in the future and data will be lost.
There’s a balance to be struck between the business side and the public good.
It’s hard to convince for profit about the value of archiving for the social good.
Next Q: What opportunities have revealed themselves in preserving news?
Finding commonalities and differences in projects is important.
What does it mean to us to archive different media types? (think diversity)
What’s happening in my community? in the nation? across the world?
The long-history in our archives will help us learn about each other.
We can only do so much with the resources we have.
We’ve worked on the CyberCemetery project in the past.
Someone else can use the tools we create within their initiatives.
Repeating the question: What are the issues in archiving longer-form video data with regard to stories on Periscope?
How do you channel the energy around news archiving?
Research in the area is all so new.
Does anyone have any experience with legal wrangling with social services?
The ACLU is waging a lawsuit against Twitter about archived tweets.
Outreach to community papers is very rhizomic.
How do you take local examples and make them a national model?
We’re teenagers now in the evolution of what we’re doing.
Peter Arnett just said “This is all more interesting than I thought it would be.”
Next Presentation: NewsScape: preserving TV news
I’ll be talking about the NewsScape project of Francis Steen, Director, Communication Studies Archive
I’m leading the archiving of the analog portion of the collection.
The oldest of our collection dates from the 1950s. We’ve hosted them on YouTube, which has created some traction.
Commenters have been an issue with posting to YouTube as well as copyright.
NewsScape is the largest collection of TV news and public affairs programs (local & national).
Prior to 2006, we don’t know what we’ve got.
Paul said “I’ll record everything I can and someone in the future can deal with it.”
We have 50K hours of Betamax.
VHS tapes are actually the most threatened, despite being the newest.
Our budget was seriously strapped.
Maintaining closed captioning is important to our archiving efforts.
We’ve done 36k hours of encoding this year.
We use a layer of dead VCRs over our good VCRs to prevent RF interference and audio buzzing. 🙂
Post-2006, we’re now recording straight to digital.
Preservation is the first step, but we need to be more than the world’s best DVR.
Searching the news is important too.
Showing a data visualization of news analysis with regard to the Healthcare Reform movement.
We’re doing facial analysis as well.
We have interactive tools at viz2016.com.
We’ve tracked how often candidates have smiled in election 2016. Hillary > Trump
We want to share details within our collection, but don’t have tools yet.
Having a good VCR repairman has helped us a lot.
Breaking for lunch…
Talk: “Born-digital news preservation in perspective”
There’s a shared consensus that preserving scholarly publications is important.
While delivery models have shifted, there must be some fall back to allow content to survive publisher failure.
Preservation was a joint investment between memory institutions and publishers.
Keepers register their coverage of journals for redundancy.
In studying coverage, we’ve discovered Elsevier is REALLY well covered, but they’re not what we’re worried about.
It’s the small journals as edge cases that really need more coverage.
Smaller journals don’t have resources to get into the keeper services and it’s more expensive.
Many Open Access Journals are passion projects and heavily underfunded and they are poorly covered.
Being mindful of these business dynamics is key when thinking about archiving news.
There are a handful of large news outlets that are “too big to fail.”
There are huge numbers of small outlets like subject verticals, foreign diasporas, etc. that need to be watched
Different strategies should be used for different outlets.
The material on lots of links (as sources) disappears after a short period of time.
While Archive.org is a great resource, it can’t do everything.
Preserving underlying evidence is really important.
How we deal with massive databases and queries against them are a difficult problem.
I’m not aware of studies of link rot in relation to online news.
Who steps up to preserve major data dumps like Snowden, PanamaPapers, or email breaches?
Social media is a collection of observations and small facts without necessarily being journalism.
Journalism is a deliberate act and is meant to be public while social media is not.
We need to come up with a consensus about what parts of social media should be preserved as news.
News does often delve into social media as part of its evidence base now.
Responsible journalism should include archival storage, but it doesn’t yet.
Under current law, we can’t protect a lot of this material without the permission of the creator(s).
The Library of Congress can demand deposit, but doesn’t.
With funding issues, I’m not wild about the Library of Congress being the only entity [for storage.]
In the UK, there are multiple repositories.
testing to see if I’m still live
What happens if you livetweet too much in one day.
#DtMH2016 @ChrisAldrich: The #indieweb could help archivists in the future in accessing more toxic waste.
@TMWestervelt If you want, I think I may have gotten the audio from that section at http://boffosocko.com/2016/10/13/notes-from-day-1-of-dodging-the-memory-hole-saving-online-news-thursday-october-13-2016/ JDNA may also have video too.
Thx, great to hear. I see a path forward in Regina Roberts’ “ways of making money off of content”: shared value in preservation.