IndieWeb Journalism in the Wild

I noticed a few days ago that professor and writer John Naughton not only has his own website but is posting both his own content there and (excerpted) content he’s writing for other journalistic outlets, lately The Guardian. This is awesome for so many reasons. The primary one is that by following him via his own site I get his personally posted content, which informs his longer pieces, without needing to follow him in multiple locations to catch the “firehose” of everything he’s writing and thinking about. While The Guardian and The Observer are great, perhaps I don’t want to filter through hundreds of articles to find his particular content, or risk missing it. What if he were writing for five or more other outlets? Then I’d need to delve deeper still and carry a multitude of subscriptions and their attendant notifications to get something that should rightly emanate from one location: him! While he may not be posting his status updates or tweets to his own website first, as I do, I’m at least able to get the best and richest of his content in one place. Additionally, the way he’s got things set up, The Guardian and others still get the clicks (for advertising’s sake) while I still get the simple notifications I’d like, so I’m not missing what he writes.

His site certainly provides an interesting example of either POSSE or PESOS in the wild, particularly from an IndieWeb for Journalism or even an IndieWeb for Education perspective. I suspect his article posts appear on the particular outlet first and he’s excerpting them with a link to that “original.” (Example: a post on his site with a link to the copy on The Guardian.) I’m not sure whether he’s (ideally) archiving the full post on his own site (hiding it privately as a personal and professional portfolio of sorts) or whether the full text is there on the respective pages, just hidden behind the “read more” button he’s providing. I will note that his WordPress install gives a rel=”canonical” link pointing to itself rather than to the version at The Guardian, which also carries its own rel=”canonical” link. I’m curious to see how Google indexes and ranks the two pages as a result.
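
The conflicting-canonical situation is easy to check mechanically. Here's a minimal sketch using only the Python standard library; the two HTML snippets and their URLs are hypothetical stand-ins for the blog copy and the outlet copy, not his actual pages:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of a <link rel="canonical"> tag, if present."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonical = a.get("href")

# Two copies of the same article, each claiming itself as canonical --
# the conflicting situation described above. (Hypothetical markup.)
pages = [
    '<html><head><link rel="canonical" href="https://example-blog.com/post/1"></head></html>',
    '<html><head><link rel="canonical" href="https://example-outlet.com/article"></head></html>',
]

canonicals = []
for doc in pages:
    finder = CanonicalFinder()
    finder.feed(doc)
    canonicals.append(finder.canonical)

print(canonicals)
```

When the two pages disagree like this, search engines are left to pick a winner themselves, which is exactly why the indexing outcome is interesting to watch.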

In any case, this is a generally brilliant setup for any researcher, professor, journalist, or other stripe of writer providing online content, particularly when they may be writing for a multitude of outlets.

I’ll also note that I appreciate the ways in which it seems he’s using his website almost as a commonplace book. This provides further depth into his ideas and thoughts, showing which sources inform and underlie his other writing.

Alas, if only the rest of the world used the web this way…

Reply to Justin Heideman on Twitter

Replied to a tweet by Justin Heideman (Twitter)
There are some interesting thoughts here about archiving news pages online. It also subtly highlights the importance of having one’s own domain to be able to redirect pages from their originals to archived versions, possibly built on different technology. This article is sure to be of interest to folks in the Journalism Digital News Archive/Dodging the Memory Hole camp (#DtMH2017).

🎧 ‘The Daily’: Taking Over Local News | The New York Times

Listened to ‘The Daily’: Taking Over Local News by Michael Barbaro from nytimes.com

On local TV stations across the United States, news anchors have been delivering the exact same message to their viewers. “Our greatest responsibility,” they begin by saying, “is to serve our communities.”

But what they are being forced to say next has left many questioning whom those stations are really being asked to serve.



On today’s episode:

• Sydney Ember, a New York Times business reporter who covers print and digital media.

• Aaron Weiss, who worked several years ago as a news director for Sinclair in Sioux City, Iowa.

Background reading:

• Anchors at local news stations across the country made identical comments about media bias. The script came from their owner, Sinclair Broadcast Group.

• David D. Smith, the chairman of Sinclair Broadcast Group, said his stations were no different from network news outlets.

• The largest owner of local TV stations, Sinclair has a history of supporting Republican causes.

🔖 CNN Lite

Bookmarked CNN Lite (lite.cnn.io)
An uber low bandwidth and text only version of CNN. View the latest news and breaking news today for U.S., world, weather, entertainment, politics and health at CNN.com.
I just ran across a text-only version of CNN and I’m really wishing that more websites would do this. It’s like AMP, but even leaner!

👓 Congrats, Jeff Goldberg. You Just Martyred Kevin Williamson | POLITICO

Read Congrats, Jeff Goldberg. You Just Martyred Kevin Williamson. by Jack Shafer (POLITICO Magazine)
The <i>Atlantic</i> climbed out on a limb by adding Williamson to its staff. Then they proceeded to saw off the branch.
I noted the hire of Williamson with curiosity when it happened, though I expected it might last a tad longer than this. At least he managed longer than Quinn Norton did at The New York Times, but both are seemingly gone for relatively similar reasons.

👓 It’s Time For an RSS Revival | Wired

Read It's Time For an RSS Revival (WIRED)
After years of letting algorithms make up our minds for us, the time is right to go back to basics.
This article, which I’ve seen shared almost too widely on the internet since it came out, could have been written almost any time in the past decade. Wired did do a somewhat better job of getting quotes from some of the big feed readers’ leaders to differentiate their philosophical differences, but there wasn’t much else here. Admittedly they did have a short snippet about Dave Winer’s new feedbase product, which I suspect, in combination with the recent spate of articles about Facebook’s Cambridge Analytica scandal, motivated the article. (By the way, I love OPML as much as anyone could, but feedbase doesn’t even accept the OPML feeds out of my core WordPress install, though most feed readers do, which makes me wonder how successful feedbase might be in the long run without better legacy spec support.)
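
For anyone who hasn't looked at OPML lately, a subscription list is just a small XML outline, which is part of why it's frustrating when services don't accept it. Here's a minimal sketch of one being parsed with the Python standard library; the feed titles and URLs are made up for illustration:

```python
import xml.etree.ElementTree as ET

# A minimal OPML 2.0 subscription list (hypothetical feeds).
# Feed readers import/export documents shaped like this.
opml = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Subscriptions</title></head>
  <body>
    <outline text="Example Blog" type="rss" xmlUrl="https://example.com/feed/" />
    <outline text="Example Magazine" type="rss" xmlUrl="https://example.org/rss" />
  </body>
</opml>"""

root = ET.fromstring(opml)
# Each <outline> with an xmlUrl is one subscription.
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("type") == "rss"]
print(feeds)
```

Given how little there is to the format, a consumer that chokes on a valid OPML export is hard to excuse.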

So what was missing from Wired’s coverage? More detail on what has changed in the space over the past several years. There’s been a big movement afoot in the IndieWeb community espousing a simpler, more DRY (don’t repeat yourself) approach to feeds using simple semantic microformats markup like h-feed. There’s also been the emergence of JSON Feed in the past year, which many of the major feed readers already support.
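
The DRY point is that with h-feed the HTML page itself is the feed: microformats classes on the markup you're already publishing, rather than a parallel RSS document to keep in sync. A very simplified sketch (a real consumer would use a full microformats2 parser, and the post content here is invented):

```python
from html.parser import HTMLParser

# Hypothetical page fragment marked up as an h-feed of h-entries.
page = """
<div class="h-feed">
  <article class="h-entry">
    <a class="p-name u-url" href="/post/1">First post</a>
  </article>
  <article class="h-entry">
    <a class="p-name u-url" href="/post/2">Second post</a>
  </article>
</div>
"""

class HFeedParser(HTMLParser):
    """Greatly simplified: collect the text of p-name elements."""
    def __init__(self):
        super().__init__()
        self.in_name = False
        self.entries = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if "p-name" in classes:
            self.in_name = True

    def handle_data(self, data):
        if self.in_name:
            self.entries.append(data.strip())
            self.in_name = False

parser = HFeedParser()
parser.feed(page)
print(parser.entries)
```

One document to publish, one to parse; no duplicate feed file to drift out of date.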

On the front of people leaving Facebook (and its black-box algorithmic monster that determines what you read rather than letting you make an explicit choice), they might have mentioned people who are looking for readers through which they can also use their own domains and websites, where they own and maintain their own data for interaction. I wrote about this in more depth last year: Feed reader revolution.

One of the more bleeding-edge developments, which I think is going to drastically change the landscape for developers, feed readers, and the internet consumption space in the coming years, is the evolving Microsub spec, which is being spearheaded by a group of projects including the Aperture Microsub server and the Together and Indigenous clients, which already use it. Microsub abstracts away many of the technical hurdles that make it far more difficult to build a full-fledged feed reader. I have a feeling it’s going to level a lot of the playing field and allow a Cambrian explosion of readers and social software that make it easier to read content on the web without relying on third-party black-box services which people have been learning they cannot fully trust anymore. Aaron Parecki has done an excellent job of laying out some parts of it in Building an IndieWeb Reader as well as in recent episodes of his Percolator microcast. This lower hurdle is going to result in fewer people needing to rely solely on the biggest de facto feed readers (Facebook, Twitter, and Instagram) for both consuming content and posting their own. The easier it becomes for people to use other readers to consume content from almost anywhere on the web, the less of a monopoly the social networks will have on our lives.
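
The abstraction Microsub offers is that the server does the hard work of fetching and parsing feeds, while a client only makes simple API calls like "give me the timeline for this channel." A sketch of what that client-side request looks like under the draft spec; the endpoint URL and channel name here are hypothetical:

```python
from urllib.parse import urlencode

# A Microsub endpoint is discovered from the user's own site;
# this URL is a made-up example.
MICROSUB_ENDPOINT = "https://aperture.example.com/microsub/1"

def timeline_request(channel, after=None):
    """Build the GET URL a client would use to fetch a channel's timeline.
    Feed fetching, parsing, and normalizing all happen server-side."""
    params = {"action": "timeline", "channel": channel}
    if after:
        params["after"] = after  # opaque paging cursor from a prior response
    return MICROSUB_ENDPOINT + "?" + urlencode(params)

print(timeline_request("notifications"))
```

Because the client never touches RSS, Atom, h-feed, or JSON Feed directly, writing a new reader UI becomes a much smaller project, which is exactly the leveling effect described above.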

I truly hope Wired circles around and gives some of these ideas additional follow up coverage in the coming months. They owe it to their readership to expand their coverage from what we all knew five years ago. If they want to go a step or two further, they might compare the web we had 15 years ago to some of the new and emerging open web technologies that are starting to take hold today.

🎧 This Week in Google 449 Grackles, Nuthatches, and Swifts, Oh My! | TWiT.TV

Listened to This Week in Google 449 Grackles, Nuthatches, and Swifts, Oh My! | TWiT.TV by Leo Laporte, Jeff Jarvis, Stacey Higginbotham from TWiT.tv

Facebook and the Cambridge Analytica scandal. Google News Initiative will fight fake journalism. Uber self-driving car not at fault for killing pedestrian. Congress passes SESTSA/FOSTA. The city that banned bitcoin mining.

  • Jeff's Number: Amazon is #2
  • Stacey's Thing: Alexa Kids Court
  • Leo's Tool: Samsung My BP Lab

📺 Zeynep Tufekci: We’re building a dystopia just to make people click on ads | TED

Watched We're building a dystopia just to make people click on ads by Zeynep Tufekci from ted.com

We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.

👓 The Google News Initiative: Building a stronger future for news | Google

This article is even more interesting in light of the other Google blog post I read earlier today, entitled Introducing Subscribe with Google. Was today’s rollout pre-planned, or is Google taking early advantage of Facebook’s poor position after the “non-data breach” stories that have been running this past week?

There’s a lot of puffery here to make Google look like an arriving hero, but I’d recommend taking it with more than a few grains of salt.

Highlights, Quotes, & Marginalia

It’s becoming increasingly difficult to distinguish what’s true (and not true) online.

we’re committing $300 million toward meeting these goals.

I’m curious what their internal projections for ROI are.


People come to Google looking for information they can trust, and that information often comes from the reporting of journalists and news organizations around the world.

A heavy hit in light of the Facebook data scandal this week, on top of accusations about fake news spreading.


That’s why it’s so important to us that we help you drive sustainable revenue and businesses.

Compared to Facebook which just uses your content to drive you out of business like it did for Funny or Die.
Reference: How Facebook is Killing Comedy


we drove 10 billion clicks a month to publishers’ websites for free.

Really free? Or was this served against ads in search?


We worked with the industry to launch the open-source Accelerated Mobile Pages Project to improve the mobile web

There was some collaborative outreach, but AMP is really a Google-driven spec without significant outside input.

See also: http://ampletter.org/


We’re now in the early stages of testing a “Propensity to Subscribe” signal based on machine learning models in DoubleClick to make it easier for publishers to recognize potential subscribers, and to present them the right offer at the right time.

Interestingly the technology here isn’t that different than the Facebook Data that Cambridge Analytica was using, the difference is that they’re not using it to directly impact politics, but to drive sales. Does this mean they’re more “ethical”?


With AMP Stories, which is now in beta, publishers can combine the speed of AMP with the rich, immersive storytelling of the open web.

Is this sentence’s structure explicitly saying that AMP is not “open web”?!

👓 Introducing Subscribe with Google | Google

Interesting to see this roll out as Facebook is having some serious data collection problems. This looks a bit like a means for Google to directly link users with content they’re consuming online and then leveraging it much the same way that Facebook was with apps and companies like Cambridge Analytica.

Highlights, Quotes, & Marginalia

Paying for a subscription is a clear indication that you value and trust your subscribed publication as a source. So we’ll also highlight those sources across Google surfaces


So Subscribe with Google will also allow you to link subscriptions purchased directly from publishers to your Google account—with the same benefits of easier and more persistent access.


you can then use “Sign In with Google” to access the publisher’s products, but Google does the billing, keeps your payment method secure, and makes it easy for you to manage your subscriptions all in one place.

I immediately wonder who owns my related subscription data. Is the publisher only seeing me as a lumped Google proxy, or do they get my name, email address, credit card information, and other details?

How will publishers be able (or not) to contact me? What effect will this have on potential customer retention?

🔖 [1803.03443] Fake news propagate differently from real news even at early stages of spreading

Bookmarked Fake news propagate differently from real news even at early stages of spreading by Zilong Zhao, Jichang Zhao, Yukie Sano, Orr Levy, Hideki Takayasu, Misako Takayasu, Daqing Li, Shlomo Havlin (arxiv.org)
Social media can be a double-edged sword for modern communications, either a convenient channel exchanging ideas or an unexpected conduit circulating fake news through a large population. Existing studies of fake news focus on efforts on theoretical modelling of propagation or identification methods based on black-box machine learning, neglecting the possibility of identifying fake news using only structural features of propagation of fake news compared to those of real news and in particular the ability to identify fake news at early stages of propagation. Here we track large databases of fake news and real news in both, Twitter in Japan and its counterpart Weibo in China, and accumulate their complete traces of re-posting. It is consistently revealed in both media that fake news spreads distinctively, even at early stages of spreading, in a structure that resembles multiple broadcasters, while real news circulates with a dominant source. A novel predictability feature emerges from this difference in their propagation networks, offering new paths of early detection of fake news in social media. Instead of commonly used features like texts or users for fake news identification, our finding demonstrates collective structural signals that could be useful for filtering out fake news at early stages of their propagation evolution.

👓 Most major outlets have used Russian tweets as sources for partisan opinion: study | Columbia Journalism Review

Read Most major outlets have used Russian tweets as sources for partisan opinion: study by Josephine Lukito and Chris Wells (Columbia Journalism Review)
In a new study at the University of Wisconsin-Madison, we look at how often, and in what context, Twitter accounts from the Internet Research Agency—a St. Petersburg-based organization directed by individuals with close ties to Vladimir Putin, and subject to Mueller’s scrutiny—successfully made their way from social media into respected journalistic media. We searched the content of 33 major American news outlets for references to the 100 most-retweeted accounts among those Twitter identified as controlled by the IRA, from the beginning of 2015 through September 2017. We found at least one tweet from an IRA account embedded in 32 of the 33 outlets—a total of 116 articles—including in articles published by institutions with longstanding reputations, like The Washington Post, NPR, and the Detroit Free Press, as well as in more recent, digitally native outlets such as BuzzFeed, Salon, and Mic (the outlet without IRA-linked tweets was Vice).
How are outlets publishing generic tweets without verifying that the users actually exist? This opens up a new type of journalistic fraud in which a writer could keep an army of bots and feed out material that they could then self-quote for their own needs without a story really existing.

👓 For Two Months, I Got My News From Print Newspapers. Here’s What I Learned. | New York Times

Read For Two Months, I Got My News From Print Newspapers. Here’s What I Learned. by Farhad Manjoo (nytimes.com)
Our tech columnist tried to skip digital news for a while. His old-school experiment led to three main conclusions.
A somewhat link-baity headline, but overall a nice little article with some generally solid advice. I always thought that even the daily paper moved at too quick a pace and would much prefer a weekly or monthly magazine that does a solid recap of all the big stories and things one ought to know; that way the stories have had some time to simmer and all the details have had time to come out. Kind of like reading longer-form non-fiction about periods of history, just on a somewhat shorter timescale.

👓 Crowdsourcing trusted news sources can work — but not the way Facebook says it’ll do it | Nieman Journalism Lab

Read Crowdsourcing trusted news sources can work — but not the way Facebook says it’ll do it by Laura Hazard Owen (Nieman Lab)
A new study finds asking Facebook users about publishers could "be quite effective in decreasing the amount of misinformation and disinformation circulating on social media" — but Facebook will need to make one important change to its plan.

👓 Can’t Get Your News From Facebook Anymore? Try These 6 Apps | Wired

Read Can't Get Your News From Facebook Anymore? Try These 6 Apps by Josie Colt (WIRED)
Now that the social network is changing what shows up in your feed, you’ll have to go elsewhere for current news.
I’ll particularly agree with how good Nuzzel is, though I will say that I take heavy advantage of a variety of highly curated Twitter lists, which I’m sure improves the quality of news the algorithm gives back to me.

I would prefer more transparency about how those that use algorithms are doing so.

Some of these don’t amount to much more than glorified RSS feed readers, and I’m shocked that the state of the art isn’t much further along than it was a decade ago.