The families of children killed at Sandy Hook Elementary School in Newtown, Conn., in 2012 are suing a conspiracy theorist who claims the massacre was a hoax. Their lawsuits are bringing the issue of “fake news” to the courts.
On today’s episode:
• Elizabeth Williamson, a reporter in the Washington bureau of The New York Times.
• The families of eight Sandy Hook victims, as well as an F.B.I. agent who responded to the massacre, are suing the conspiracy theorist Alex Jones for defamation. Relatives of the victims have received death threats from those who embrace the falsehoods Mr. Jones has propagated on his website Infowars, which has an audience of millions.
A series of damning posts on Facebook has stoked longstanding ethnic tensions in Sri Lanka, setting off a wave of violence largely directed at Muslims. How are false rumors on social media fueling real-world attacks?
On today’s episode:
• Fraudulent claims of a Muslim plot to wipe out Sri Lanka’s Buddhist majority, widely circulated on Facebook and WhatsApp, have led to attacks on mosques and Muslim-owned homes and shops in the country.
• Facebook’s algorithm-driven news feed promotes whatever content draws the most engagement — which tend to be the posts that provoke negative, primal emotions like fear and anger. The platform has allowed misinformation to run rampant in countries with weak institutions and a history of deep social distrust.
This article is even more interesting in light of the other Google blog post I read earlier today, entitled Introducing Subscribe with Google. Was today’s rollout pre-planned, or is Google taking early advantage of Facebook’s poor position after the “non-data breach” stories that have been running this past week?
There’s a lot of puffery here to make Google look like an arriving hero, but I’d recommend taking it with more than a few grains of salt.
Highlights, Quotes, & Marginalia
It’s becoming increasingly difficult to distinguish what’s true (and not true) online.
we’re committing $300 million toward meeting these goals.
I’m curious what their internal projections for ROI are.
People come to Google looking for information they can trust, and that information often comes from the reporting of journalists and news organizations around the world.
Heavy hit in light of the Facebook data scandal this week on top of accusations about fake news spreading.
That’s why it’s so important to us that we help you drive sustainable revenue and businesses.
Compare this to Facebook, which just uses your content to drive you out of business, as it did for Funny or Die.
Reference: How Facebook is Killing Comedy
we drove 10 billion clicks a month to publishers’ websites for free.
Really free? Or was this served against ads in search?
We worked with the industry to launch the open-source Accelerated Mobile Pages Project to improve the mobile web
There was some collaborative outreach, but AMP is really a Google-driven spec without significant outside input.
See also: http://ampletter.org/
We’re now in the early stages of testing a “Propensity to Subscribe” signal based on machine learning models in DoubleClick to make it easier for publishers to recognize potential subscribers, and to present them the right offer at the right time.
Interestingly, the technology here isn’t that different from the Facebook data that Cambridge Analytica was using; the difference is that it’s not being used to directly influence politics, but to drive sales. Does this mean they’re more “ethical”?
With AMP Stories, which is now in beta, publishers can combine the speed of AMP with the rich, immersive storytelling of the open web.
Is this sentence’s structure explicitly saying that AMP is not “open web”?!
Social media can be a double-edged sword for modern communications: a convenient channel for exchanging ideas, or an unexpected conduit circulating fake news through a large population. Existing studies of fake news focus on theoretical modelling of propagation or on identification methods based on black-box machine learning, neglecting the possibility of identifying fake news using only the structural features of its propagation compared to that of real news, and in particular the ability to identify fake news at early stages of propagation. Here we track large databases of fake news and real news on both Twitter in Japan and its counterpart Weibo in China, and accumulate their complete traces of re-posting. It is consistently revealed in both media that fake news spreads distinctively, even at early stages of spreading, in a structure that resembles multiple broadcasters, while real news circulates from a dominant source. A novel predictability feature emerges from this difference in their propagation networks, offering new paths for early detection of fake news in social media. Instead of commonly used features like texts or users for fake news identification, our finding demonstrates collective structural signals that could be useful for filtering out fake news at early stages of its propagation.
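The structural signal the abstract describes can be illustrated with a toy sketch. The metric and the example cascades below are my own construction for illustration, not the paper’s actual features or data:

```python
# Toy sketch: compare two repost cascades by how concentrated their
# spread is around a single source. A cascade is given as a mapping
# from each reposting user to the user they reposted from.
from collections import Counter

def dominant_source_share(parents):
    """Fraction of reposts attributable to the single busiest spreader.

    A value near 1.0 suggests one dominant source (real-news-like);
    a lower value suggests multiple broadcasters (fake-news-like).
    """
    counts = Counter(parents.values())
    return max(counts.values()) / len(parents)

# Real-news-like cascade: everyone reposts the original account "A".
real = {u: "A" for u in ["b", "c", "d", "e", "f", "g"]}

# Fake-news-like cascade: several mid-level broadcasters re-seed it.
fake = {"b": "A", "c": "A", "d": "b", "e": "b", "f": "c", "g": "c"}

print(dominant_source_share(real))  # 1.0
print(dominant_source_share(fake))  # ~0.33
```

Because a metric like this only needs the shape of the early repost graph, not the text or the user profiles, it hints at why the paper’s structural approach could work at early stages of spreading.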
The furore over Fake News is really about the seizures caused by overactivity in these synapses - confabulation and hallucination in the global brain of mutual media. With popularity always following a power law, runaway memetic outbreaks can become endemic, especially when the platform is doing what it can to accelerate them without any sense of their context or meaning.
One might think that Facebook (and others) could easily analyze the things within their network that are getting above average reach and filter out or tamp down the network effects of the most damaging things which in the long run I suspect are going to damage their network overall.
Our synapses have the ability to minimize feedback loops and incoming signals which have deleterious effects–certainly our social networks could (and should) have these features as well.
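The damping idea above could be sketched as a throttle on items whose reach runs far above average, the way a synapse attenuates runaway feedback. The threshold and decay curve here are invented for illustration, not anything a real platform is known to use:

```python
# Hypothetical sketch: return a distribution multiplier for an item,
# damping further algorithmic boost once its spread overshoots the
# network average by more than `threshold` times.

def damped_boost(shares, avg_shares, threshold=3.0):
    """Multiplier for further distribution of an item.

    Items within `threshold` x the average spread normally; beyond
    that, the boost decays toward zero instead of compounding.
    """
    ratio = shares / avg_shares
    if ratio <= threshold:
        return 1.0
    return threshold / ratio  # decays as the item overshoots

print(damped_boost(100, 50))   # 2x average -> 1.0
print(damped_boost(1000, 50))  # 20x average -> 0.15
```

The point of the sketch is only that such a brake is mechanically simple; the hard part, as the passage notes, is applying it without any sense of an item’s context or meaning.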
Over the course of the campaign, the comments left on the president’s official Facebook page increasingly employed the rhetoric of white nationalism.
These mettle tests are going to come more quickly than we thought, I guess. HarperCollins: you're up!
Does Facebook have a responsibility to weed out fake news stories? Google releases PhotoScan to digitize your old pictures. Google Translate gets some machine learning improvements. Twitter kicks out alt-right users. BLU phones sending user info to China. Snapchat Spectacles will be available in Tulsa next - and Snapchat files a secret IPO.
• Stacey’s Thing: June oven
• Jeff’s Number: Facebook’s new measurement strategies
• Leo’s Thing: 2016 MacBook Pro
A great episode as usual. The discussion at the beginning on the fake news issue in the media recently was particularly good.
Facebook is apparently asking users to rate the quality of news stories on its service, after facing criticism for allowing fake or misleading news. At least three people on Twitter have posted surveys that ask whether a headline “uses misleading language” or “withholds key details of the story.” The earliest one we’ve seen was posted on December 2nd, and asked about a story from UK comedy site Chortle. Two others reference stories by Rolling Stone and The Philadelphia Inquirer.
PolitEcho shows you the political biases of your Facebook friends and news feed. The app assigns each of your friends a score based on our prediction of their political leanings then displays a graph of your friend list. Then it calculates the political bias in the content of your news feed and compares it with the bias of your friends list to highlight possible differences between the two.
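The comparison step PolitEcho describes can be sketched in a few lines. The bias scores and the simple averaging below are assumptions for illustration, not PolitEcho’s actual prediction model:

```python
# Minimal sketch: given per-friend bias scores and per-post bias
# scores from the feed, contrast the two distributions to surface
# a skew between who your friends are and what actually surfaces.
from statistics import mean

# Bias on a -1 (left) .. +1 (right) scale -- hypothetical data.
friend_bias = {"ana": -0.6, "bo": 0.1, "cy": 0.4, "di": -0.2}
feed_post_bias = [-0.7, -0.5, -0.6, -0.4, 0.2]  # posts that surfaced

friend_avg = mean(friend_bias.values())
feed_avg = mean(feed_post_bias)

# A large gap means the feed skews relative to the friend list.
print(round(feed_avg - friend_avg, 2))
```

In this toy data the feed averages noticeably further left than the friend list, which is exactly the kind of difference the app’s graph is meant to highlight.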
The Washington Post recently published an article about social media metrics with an alarmist headline: “6 in 10 of you will share this link without reading it, a new, depressing study says.” This story then predictably made the rounds in the blogosphere, from Gizmodo to Marketing Dive. The headline reads like self-referential clickbait, daring readers to click on the provocative …
Fake news is the easiest of the problems to fix.
…a new set of ways to report and share news could arise: a social network where the sources of articles were highlighted rather than the users sharing them. A platform that makes it easier to read a full story than to share one unread. A news feed that provides alternative sources and analysis beneath every shared article.
These sound like the kinds of platforms I’d like to have. Reminiscent of some of the discussion at the beginning of This Week in Google: episode 379 Ixnay on the Eet-tway.
I suspect that some of the recent coverage of “fake news” and how it’s being shared on social media has prompted me to begin using Reading.am, a bookmarking-esque service that asks users to:
Share what you’re reading. Not what you like. Not what you find interesting. Just what you’re reading.
Naturally, in IndieWeb fashion, I’m also posting these read articles to my site. While bookmarks are things that I would implicitly like to read in the near future (rather than “Christmas ornaments” I want to impress people with on my “social media Christmas tree”), there’s a big difference between them and things that I’ve actually read through and thought about.
I always feel like many of my family, friends, and the general public click “like” or “share” on articles on social media without actually having read them from top to bottom. Research generally suggests that I’m not wrong, though some argue the research needs to be more subtle. I generally refuse to participate in this type of behavior when I can avoid it.
Some portion of what I physically read isn’t shared, but at least those things marked as “read” here on my site are things that I’ve actually gone through the trouble to read from start to finish. When I can, I try to post a few highlights I found interesting along with any notes/marginalia (lately I’m loving the service Hypothes.is for doing this) on the piece to give some indication of its interest. I’ll also often try to post some of my thoughts on it, as I’m doing here.
Gauging Intent of Social Signals
I feel compelled to mention that on some platforms like Twitter, I don’t generally use the “like” functionality to indicate that I’ve actually liked a tweet itself or any content linked to in it. In fact, I’ve often read nothing related to the tweet but the simple headline presented in the tweet itself.
The majority of the time I’m liking/favoriting something on Twitter, it’s because I’m using an IFTTT.com applet that takes the tweets I “like” and saves them to my Pocket account, where I come back to them later to read. I don’t actually read everything in my Pocket queue, but those that I do read will generally appear on my site.
There are, however, some extreme cases in which a piece of content is too far beyond the pale to indicate a like on, and in those cases I won’t do so, but will manually add it to my reading queue. For some, this may create a grey area about my intent when viewing things like my Twitter likes. Generally I’d recommend viewing that feed as a generic linkblog of sorts. On Twitter, I far preferred the nebulous star indicator over the current heart for how I used, and continue to use, that bit of functionality.
I’ll also mention that I sometimes use the like/favorite functionality on some platforms to indicate to respondents that I’ve seen their post/reply. This type of usage could also be viewed as a digital “thank you”, “hello”, or even “read receipt” of sorts, since I know that the “like” is pushed into their notifications feed. I suspect that most recipients receive these as I intend them, though the Twitter platform isn’t designed for this specifically.
I wish there were a better way for platforms and their readers to know exactly what a user’s intent was, rather than trying to intuit it. It would be great if Twitter gave users multiple options under each tweet to indicate whether their intent was to bookmark, like, or favorite it, or to indicate that they actually read/watched the content at the other end of the link in the tweet.
In true IndieWeb fashion, because I can put these posts on my own site, I can directly control not only what I post, but I can be far more clear about why I’m posting it and give a better idea about what it means to me. I can also provide footnotes to allow readers to better see my underlying sources and judge for themselves their authenticity and actual gravitas. As a result, hopefully you’ll find no fake news here.
Of course part of the ensuing question is: “How does one scale this type of behaviour up?”