Blame Google, for a start.
Nothing great or new here, and no real solutions either, though knowing some of the history and the problems does help suggest possible solutions.
Recorded live Saturday, May 13, 2017. The Gang takes nothing off the table as Doc describes a near future of personal APIs and CustomerTech.
Keith outlines an excellent thesis about media moving from “one to many” to increasingly becoming “one to one”. He points out the problem this poses for areas like journalism, which can become so individualized, and for democracy, which often relies on being able to see that the messages given out to the masses are consistent. One of the issues with Facebook and the Cambridge Analytica problem is that many people were getting algorithmically customized messages (true or not) that had the ability to nudge them in certain directions. This gives major corporations far more control than they would have had when broadcasting the exact same message to millions. In the latter case, the message for the masses can be discussed, analyzed, picked apart, and dealt with because it is known. In the former case, no one knows what the message was except the person who received it, and it’s far less likely to be analyzed and discussed the way it would have been previously.
In the last portion of the show, Doc leads with some discussion about identity and privacy from the buyer’s perspective. Companies selling widgets don’t necessarily need to collect massive amounts of data about us to sell widgets. It’s the seller’s perspective and the over-reliance on advertising which have created the surveillance capitalism state we’re sadly living in now.
In the closing minutes of the show Steve reiterated that the show was a podcast, but that it’s now all about streaming and, as such, there is no longer an audio podcast version of the show. I’ll have something to say about this shortly for those looking for alternatives, because this just drives me crazy…
An inside look at the inner workings of a technology you may take for granted
A topic which is tremendously overlooked in the CMS world, but which can provide a lot of power.
h/t Jorge Spinoza
I noticed a few days ago that professor and writer John Naughton not only has his own website but that he’s posting both his own content to it as well as (excerpted) content he’s writing for other journalistic outlets, lately in his case The Guardian. This is awesome for so many reasons. The primary one is that I can follow him via his own site and get not only his personally posted content, which informs his longer pieces, but the “firehose” of everything he’s writing and thinking about, without needing to follow him in multiple locations.

While The Guardian and The Observer are great, I don’t want to filter through many hundreds of articles to find his particular content, or risk missing it. What if he were writing for 5 or more other outlets? Then I’d need to delve in deeper still and carry a multitude of subscriptions and their attendant notifications to get something that should rightly emanate from one location–him! While he may not be posting his status updates or Tweets to his own website first–as I do–I’m at least able to get the best and richest of his content in one place. Additionally, the way he’s got things set up, The Guardian and others are still getting the clicks (for advertising’s sake) while I still get the simple notifications I’d like to have so I’m not missing what he writes.
His site certainly provides an interesting example of either POSSE or PESOS in the wild, particularly from an IndieWeb for Journalism or even an IndieWeb for Education perspective. I suspect his article posts occur on the particular outlet first and he’s excerpting them with a link to that “original”. (Example: A post on his site with a link to a copy on The Guardian.) I’m not sure whether he’s (ideally) physically archiving the full post there on his site (and hiding it privately as both a personal and professional portfolio of sorts) or whether the full posts are there on the respective pages, just hidden behind the “read more” button he’s providing. I will note that his WordPress install is giving a rel="canonical" link to itself rather than to the version at The Guardian, which also has a rel="canonical" link pointing to itself. I’m curious to see how Google indexes and ranks the two pages as a result.
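To illustrate the duplicate-canonical situation described above, the two copies of an article each carry markup something like the following (both URLs here are hypothetical stand-ins, not his actual pages):

```html
<!-- In the <head> of the post on the author's own site,
     declaring itself the canonical copy: -->
<link rel="canonical" href="https://example-personal-site.org/2018/04/some-article/">

<!-- In the <head> of the same article on the outlet's site,
     also declaring itself canonical: -->
<link rel="canonical" href="https://www.example-outlet.com/2018/apr/some-article">
```

Since rel="canonical" is advisory rather than binding, a search engine that finds two near-duplicate pages each claiming to be canonical will simply pick one on its own, which is why the indexing outcome here is genuinely uncertain.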
In any case, this is a brilliant setup for any researcher, professor, journalist, or other stripe of writer providing online content, particularly when they may be writing for a multitude of outlets.
I’ll also note that I appreciate the ways in which it seems he’s using his website almost as a commonplace book. This provides further depth into his ideas and thoughts to see what sources are informing and underlying his other writing.
Alas, if only the rest of the world used the web this way…
There are some interesting thoughts here about archiving news pages online. The piece also subtly highlights the importance of having one’s own domain so that pages can be redirected from their original locations to archived versions, possibly built on different underlying technology. This article is sure to be of interest to folks in the Journalism Digital News Archive/Dodging the Memory Hole camp (#DtMH2017)
On local TV stations across the United States, news anchors have been delivering the exact same message to their viewers. “Our greatest responsibility,” they begin by saying, “is to serve our communities.”
But what they are being forced to say next has left many questioning whom those stations are really being asked to serve.
On today’s episode:
• Sydney Ember, a New York Times business reporter who covers print and digital media.
• Aaron Weiss, who worked several years ago as a news director for Sinclair in Sioux City, Iowa.
• Anchors at local news stations across the country made identical comments about media bias. The script came from their owner, Sinclair Broadcast Group.
• David D. Smith, the chairman of Sinclair Broadcast Group, said his stations were no different from network news outlets.
• The largest owner of local TV stations, Sinclair has a history of supporting Republican causes.
An ultra-low-bandwidth, text-only version of CNN. View the latest news and breaking news today for U.S., world, weather, entertainment, politics and health at CNN.com.
I just ran across a text-only version of CNN and I’m really wishing that more websites would do this. It’s like AMP, but even leaner!
The Atlantic climbed out on a limb by adding Williamson to its staff. Then it proceeded to saw off the branch.
I noted the hire of Williamson with curiosity when it happened, but I expected it might last a tad longer than this. At least he managed longer than Quinn Norton did at the New York Times, though both were seemingly gone for relatively similar reasons.
After years of letting algorithms make up our minds for us, the time is right to go back to basics.
This article, which I’ve seen shared almost too widely on the internet since it came out, could have been written almost any time in the past decade. They did do a somewhat better job of getting quotes from some of the big feed readers’ leaders to help differentiate their philosophical differences, but there wasn’t much else here. Admittedly they did have a short snippet about Dave Winer’s new feedbase product, which I suspect, in combination with the recent spate of articles about Facebook’s Cambridge Analytica scandal, motivated the article. (By the way, I love OPML as much as anyone could, but feedbase doesn’t even accept the OPML feeds out of my core WordPress install, though most feed readers do, which makes me wonder how successful feedbase might be in the long run without better legacy spec support.)
So what was missing from Wired’s coverage? More details on what has changed in the space in the past several years. There’s been a big movement afoot in the IndieWeb community which has been espousing a simpler and more DRY (don’t repeat yourself) version of feeds using simple semantic microformats markup like h-feed. There’s also been the emergence of JSON feed in the past year which many of the major feed readers already support.
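For the curious, a minimal h-feed is just ordinary HTML with a handful of microformats2 class names layered on, so the page itself is the feed (the blog name, post, and URL below are purely illustrative):

```html
<div class="h-feed">
  <h1 class="p-name">Example Blog</h1>
  <!-- Each post in the feed is marked up as an h-entry -->
  <article class="h-entry">
    <h2 class="p-name">
      <a class="u-url" href="https://example.com/2018/04/a-post/">A post title</a>
    </h2>
    <time class="dt-published" datetime="2018-04-20T09:00:00-0700">April 20, 2018</time>
    <div class="e-content"><p>The body of the post goes here…</p></div>
  </article>
</div>
```

A microformats-aware reader parses these classes directly out of the page, so there is no separate RSS or Atom file to keep in sync, which is exactly the DRY point.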
On the front of people leaving Facebook (and its black-box algorithmic monster that determines what you read rather than letting you make an explicit choice), they might have mentioned people who are looking for readers they can use with their own domains and websites, where they own and maintain their own data for interaction. I wrote about this in more depth last year: Feed reader revolution.
One of the more bleeding-edge developments, which I think is going to drastically change the landscape for developers, feed readers, and the internet consumption space in the coming years, is the evolving Microsub spec, being spearheaded by projects like the Aperture Microsub server and the Together and Indigenous clients which already use it. Microsub abstracts away many of the technical hurdles that make it difficult to build a full-fledged feed reader. I have a feeling it’s going to level the playing field and allow a Cambrian explosion of readers and social software that make it easier to read content from across the web without relying on third-party black-box services which people have been learning they cannot fully trust anymore. Aaron Parecki has done an excellent job of laying out some parts of it in Building an IndieWeb Reader as well as in recent episodes of his Percolator microcast. This lower hurdle will mean fewer people need to rely solely on the biggest silos–Facebook, Twitter, and Instagram–as de facto feed readers for both consuming content and posting their own. The easier it becomes for people to use other readers to consume content from almost anywhere on the web, the less of a monopoly the social networks will have on our lives.
I truly hope Wired circles back and gives some of these ideas additional follow-up coverage in the coming months. They owe it to their readership to expand their coverage beyond what we all knew five years ago. If they want to go a step or two further, they might compare the web we had 15 years ago to some of the new and emerging open web technologies that are starting to take hold today.
We are launching the Google News Initiative, an effort to help journalism thrive in the digital age.
This article is even more interesting in light of the other Google blog post I read earlier today, Introducing Subscribe with Google. Was today’s roll-out pre-planned, or is Google taking early advantage of Facebook’s poor position after the “non-data breach” stories that have been running this past week?
There’s a lot of puffery rhetoric here to make Google look like an arriving hero, but I’d recommend taking it with more than a few grains of salt.
It’s becoming increasingly difficult to distinguish what’s true (and not true) online.
we’re committing $300 million toward meeting these goals.
I’m curious what their internal projections for ROI are.
People come to Google looking for information they can trust, and that information often comes from the reporting of journalists and news organizations around the world.
Heavy hit in light of the Facebook data scandal this week on top of accusations about fake news spreading.
That’s why it’s so important to us that we help you drive sustainable revenue and businesses.
Compare this to Facebook, which just uses your content to drive you out of business, as it did with Funny or Die.
Reference: How Facebook is Killing Comedy
we drove 10 billion clicks a month to publishers’ websites for free.
Really free? Or was this served against ads in search?
We worked with the industry to launch the open-source Accelerated Mobile Pages Project to improve the mobile web
There was some collaborative outreach, but AMP is really a Google-driven spec without significant outside input.
See also: http://ampletter.org/
We’re now in the early stages of testing a “Propensity to Subscribe” signal based on machine learning models in DoubleClick to make it easier for publishers to recognize potential subscribers, and to present them the right offer at the right time.
Interestingly, the technology here isn’t that different from the Facebook data that Cambridge Analytica was using; the difference is that Google isn’t using it to directly impact politics, but to drive sales. Does this make them more “ethical”?
With AMP Stories, which is now in beta, publishers can combine the speed of AMP with the rich, immersive storytelling of the open web.
Is this sentence’s structure explicitly saying that AMP is not “open web”?!
Making digital subscriptions simple by making it easier to subscribe and enjoy premium content
Interesting to see this roll out as Facebook is having some serious data collection problems. This looks a bit like a means for Google to directly link users with the content they’re consuming online and then leverage it much the same way that Facebook did with apps and companies like Cambridge Analytica.
Paying for a subscription is a clear indication that you value and trust your subscribed publication as a source. So we’ll also highlight those sources across Google surfaces
So Subscribe with Google will also allow you to link subscriptions purchased directly from publishers to your Google account—with the same benefits of easier and more persistent access.
you can then use “Sign In with Google” to access the publisher’s products, but Google does the billing, keeps your payment method secure, and makes it easy for you to manage your subscriptions all in one place.
I immediately wonder who owns my related subscription data. Does the publisher only see me as a lumped Google proxy, or do they get my name, email address, credit card information, and other details?
How will publishers be able (or not) to contact me? What effect will this have on potential customer retention?
Social media can be a double-edged sword for modern communications, either a convenient channel exchanging ideas or an unexpected conduit circulating fake news through a large population. Existing studies of fake news focus on efforts on theoretical modelling of propagation or identification methods based on black-box machine learning, neglecting the possibility of identifying fake news using only structural features of propagation of fake news compared to those of real news and in particular the ability to identify fake news at early stages of propagation. Here we track large databases of fake news and real news in both Twitter in Japan and its counterpart Weibo in China, and accumulate their complete traces of re-posting. It is consistently revealed in both media that fake news spreads distinctively, even at early stages of spreading, in a structure that resembles multiple broadcasters, while real news circulates with a dominant source. A novel predictability feature emerges from this difference in their propagation networks, offering new paths of early detection of fake news in social media. Instead of commonly used features like texts or users for fake news identification, our finding demonstrates collective structural signals that could be useful for filtering out fake news at early stages of their propagation evolution.
In a new study at the University of Wisconsin-Madison, we look at how often, and in what context, Twitter accounts from the Internet Research Agency—a St. Petersburg-based organization directed by individuals with close ties to Vladimir Putin, and subject to Mueller’s scrutiny—successfully made their way from social media into respected journalistic media. We searched the content of 33 major American news outlets for references to the 100 most-retweeted accounts among those Twitter identified as controlled by the IRA, from the beginning of 2015 through September 2017. We found at least one tweet from an IRA account embedded in 32 of the 33 outlets—a total of 116 articles—including in articles published by institutions with longstanding reputations, like The Washington Post, NPR, and the Detroit Free Press, as well as in more recent, digitally native outlets such as BuzzFeed, Salon, and Mic (the outlet without IRA-linked tweets was Vice).
How are outlets publishing generic tweets without verifying that the users actually exist? This opens up a new type of journalistic fraud in which a writer could keep an army of bots and feed out material that they could then self-quote for their own needs without a story really existing.
Our tech columnist tried to skip digital news for a while. His old-school experiment led to three main conclusions.
A somewhat link-baity headline, but overall a nice little article with some generally solid advice. I always thought that even the daily paper moved at too quick a pace and would much prefer a weekly or monthly magazine that did a solid recap of all the big stories and things one ought to know; that way the stories have had some time to simmer and all the details time to come out. Kind of like reading longer-form non-fiction about periods of history, just done on a somewhat shorter timescale.