👓 Why Facts Don’t Change Our Minds | The New Yorker

Why Facts Don’t Change Our Minds by Elizabeth Kolbert (The New Yorker)
New discoveries about the human mind show the limitations of reason.
The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight.

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.


This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. 

Source: Why Facts Don’t Change Our Minds | The New Yorker


An Open Letter to The Uber Board and Investors

An Open Letter to The Uber Board and Investors by Mitch & Freada Kapor (NewCo Shift)
By now a staggering number of people recognize the name of Susan Fowler and have read some account of her experiences of sexism, sexual…
Feb. 23rd, 2017 By now a staggering number of people recognize the name of Susan Fowler and have read some account of her experiences of sexism, sexual harassment and horrendous management at Uber. So what explains the silence of Uber’s investors? Continue reading “An Open Letter to The Uber Board and Investors”

Chief digital officer steps down from White House job over background check | POLITICO

Chief digital officer steps down from White House job over background check by Tara Palmeri and Daniel Lippman (POLITICO)
The background check must be completed by White House staffers for positions that cover national security.

02/22/17 02:16 PM EST

White House Chief Digital Officer Gerrit Lansing was among the six staffers who were dismissed from the White House last week after being unable to pass an FBI background check, according to sources.

A source close to Lansing said the issue with the background check was over investments.

Lansing previously led the digital department for the Republican National Committee.

The background check, security questionnaire SF86, must be completed by White House staffers for positions that cover national security.

President Donald Trump’s director of scheduling, Caroline Wiles, was also among the six staffers who did not pass the intensive FBI screening. She is the daughter of Susan Wiles, Trump’s Florida campaign director. Caroline Wiles resigned Friday before the background check was completed.

She was appointed deputy assistant secretary before the inauguration in January. Two sources close to Wiles said she will get another job in the Treasury Department.

Lansing left Feb. 9; his official file says he left of his own accord.

The intensive background check includes questions on the applicant’s credit score, substance use and other personal subjects.

Source: Chief digital officer steps down from White House job over background check | POLITICO


👓 Physicists Uncover Geometric ‘Theory Space’ | Quanta Magazine

Physicists Uncover Geometric ‘Theory Space’ by Natalie Wolchover (Quanta Magazine)
A decades-old method called the “bootstrap” is enabling new discoveries about the geometry underlying all quantum theories.

In the 1960s, the charismatic physicist Geoffrey Chew espoused a radical vision of the universe, and with it, a new way of doing physics. Theorists of the era were struggling to find order in an unruly zoo of newfound particles. They wanted to know which ones were the fundamental building blocks of nature and which were composites. But Chew, a professor at the University of California, Berkeley, argued against such a distinction. “Nature is as it is because this is the only possible nature consistent with itself,” he wrote at the time. He believed he could deduce nature’s laws solely from the demand that they be self-consistent. Continue reading “👓 Physicists Uncover Geometric ‘Theory Space’ | Quanta Magazine”


Croissants by Vincent Talleu

Croissants by Vincent Talleu (YouTube)

Jeremy Cherfas is right: I think the majority of the secret is the tools. I am quite jealous of that massive dough roller, but I don’t think that a typical little home pasta machine would be quite as easy to use as Jeremy might hope.

My other favorite was the magic croissant cutter. I’ll have to look for one of those the next time I’m at a restaurant supply house. I imagine they’re pretty rare. It reminded me a little bit of old-school hand-push lawn mowers.

The quick camera pan down at 5:34 with the CCR musical overlay was a lovely touch, but it’s a painful reminder that this type of mass manufacture is overkill for the home chef, who may want as many as a dozen at a time (remember, pastries start their inevitable death the minute they’re done cooking). Though I do have to say that watching this makes me want to open up a bakery, but on which day is that not a thought I have?

The nice part about having this much dough was seeing some of the myriad creative things one could do other than just croissants. Now, off to find a nice oranais.

Elon Musk Is Really Boring | Bloomberg

Elon Musk Is Really Boring by Max Chafkin (Bloomberg.com)
The billionaire visionary is digging in on a tunnel project to skirt gridlock, but there’s a hole in his Trump-era business bet.

Continue reading “Elon Musk Is Really Boring | Bloomberg”


Kenneth Arrow, Nobel-Winning Economist Whose Influence Spanned Decades, Dies at 95 | The New York Times

Kenneth Arrow, Nobel-Winning Economist Whose Influence Spanned Decades, Dies at 95 by Michael M. Weinstein (New York Times)
Professor Arrow, one of the most brilliant minds in his field during the 20th century, became the youngest economist ever to earn a Nobel at the age of 51.

Kenneth J. Arrow, one of the most brilliant economic minds of the 20th century and, at 51, the youngest economist ever to win a Nobel, died on Tuesday at his home in Palo Alto, Calif. He was 95.

His son David confirmed the death.

Continue reading “Kenneth Arrow, Nobel-Winning Economist Whose Influence Spanned Decades, Dies at 95 | The New York Times”


👓 Encouraging individual sovereignty and a healthy commons by Aral Balkan

Encouraging individual sovereignty and a healthy commons by Aral Balkan (ar.al)
Mark Zuckerberg’s manifesto outlines his vision for a centralised global colony ruled by the Silicon Valley oligarchy. I say we must do the exact opposite and create a world with individual sovereignty and a healthy commons.

The verbiage here is a bit inflammatory and very radical-sounding, but the overarching thesis is fairly sound. The people who are slowly but surely building the IndieWeb give me a lot of hope that the unintended (by the people, anyway) consequences that are unfolding can be remedied relatively quickly.

Marginalia

We are sharded beings; the sum total of our various aspects as contained within our biological beings as well as the myriad of technologies that we use to extend our biological abilities.

To some extent, this thesis could extend Cesar Hidalgo’s concept of the personbyte: by putting part of one’s self out onto the internet, one can, in some sense, contain more information than a single person previously could.

Richard Dawkins’s concept of the meme extends the idea a bit further, in that an individual’s thoughts can infect others and spread with a contagion rate that depends on a variety of factors.

I would suspect that though this does extend the idea of personbyte, there is still some limit to how large the size of a particular person’s sphere could expand.


While technological implants are certainly feasible, possible, and demonstrable, the main way in which we extend ourselves with technology today is not through implants but explants.


in a tiny number of hands.

or in a number of tiny hands, as the case can sometimes be.


The reason we find ourselves in this mess with ubiquitous surveillance, filter bubbles, and fake news (propaganda) is precisely due to the utter and complete destruction of the public sphere by an oligopoly of private infrastructure that poses as public space.

This is a whole new tragedy of the commons: people don’t know where the commons actually are anymore.


Buzzfeed implements the IndieWeb concept of backfeed to limit filter bubbles

The evolution of comments on articles takes a new journalistic turn

Outside Your Bubble

This past Wednesday, BuzzFeed rolled out a new feature on their website called “Outside Your Bubble”. I think the concept is so well described and so laudable from a journalistic perspective that I’ll excerpt their editor-in-chief’s entire description of the feature below. In short, they’ll be featuring some of the commentary on their pieces by pulling it in from social media silos.

What is interesting is that this isn’t a new concept and even more intriguing, there’s some great off-the-shelf technology that helps people move towards doing this type of functionality already.

The IndieWeb and backfeed

For the past several years, there’s been a growing movement on the internet known as the IndieWeb, a “people-focused alternative to the ‘corporate web’.” Their primary goal is for people to better control their online identities by owning their own domain and the content they put on it while also allowing them to be better connected.

As part of the movement, users can more easily post their content on their own site and syndicate it elsewhere (a process known by the acronym POSSE). Many of these social media sites allow for increased distribution, but they also have the side effect of cordoning off or siloing the conversation. As a result many IndieWeb proponents backfeed the comments, likes, and other interactions on their syndicated content back to their original post.

Backfeed is the process of pulling interactions on your syndicated content back (AKA reverse syndicating) to your original posts.
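On the IndieWeb, the plumbing that makes backfeed possible is the Webmention protocol: a service like Brid.gy watches the copy of your post on a silo and, when someone replies to or likes it there, sends a small HTTP notification back to your original URL. As a rough sketch only (the URLs below are hypothetical, and the use of Python with the requests library is my own choice for illustration, not anything Brid.gy itself runs), sending a webmention amounts to discovering the endpoint the target page advertises and POSTing two form fields to it:

```python
import re
import requests


def discover_webmention_endpoint(target):
    """Find the Webmention endpoint advertised by the target page.

    Checks the HTTP Link header first (requests exposes it as resp.links),
    then falls back to a naive scan of the HTML for rel="webmention".
    """
    resp = requests.get(target, timeout=10)
    if "webmention" in resp.links:
        return resp.links["webmention"]["url"]
    match = re.search(
        r'<(?:link|a)\s[^>]*rel=["\']?webmention["\']?[^>]*href=["\']([^"\']+)',
        resp.text,
    )
    return match.group(1) if match else None


def send_webmention(source, target):
    """Tell the target page that the source page links to (e.g. replies to) it."""
    endpoint = discover_webmention_endpoint(target)
    if endpoint is None:
        raise RuntimeError("target does not advertise a Webmention endpoint")
    # The entire notification is a form-encoded POST with two fields.
    return requests.post(endpoint, data={"source": source, "target": target}, timeout=10)


# Hypothetical URLs, purely for illustration.
send_webmention(
    source="https://silo.example/some-tweet-or-comment",
    target="https://my-site.example/2017/02/original-post",
)
```

Brid.gy effectively does this on an author’s behalf, using the tweet or Facebook comment’s permalink as the source, which is how reactions scattered across silos end up collected under the original post.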

This concept of backfeed is exactly what BuzzFeed is proposing, but with a more editorial slant meant to provide additional thought and analysis on their original piece. In some sense, from a journalistic perspective, it also seems like an evolutionary step towards making traditional comments have more value to the casual reader. Instead of a simple chronological list of comments which may or may not have any value, they’re also using the feature to surface the more valuable comments which appear on their pieces. In a crowded journalistic marketplace, which is often misguided by market metrics like numbers of clicks, I have a feeling that more discerning readers will want this type of surfaced value if it’s done well. And discerning readers can bring their own value to a content publisher.

I find it interesting that not only is BuzzFeed using the concept of backfeed like this, but in Ben Smith’s piece, he eschews the typical verbiage ascribed to social media sites, namely the common phrase “walled garden,” in favor of the word silo, which is also the word adopted by the IndieWeb movement to describe a “centralized web site typically owned by a for-profit corporation that stakes some claim to content contributed to it and restricts access in some way (has walls).”

To some extent, it almost appears that the BuzzFeed piece parrots back portions of the Why IndieWeb? page on the IndieWeb wiki.

Helping You See Outside Your Bubble | BuzzFeed

A new feature on some of our most widely shared articles.

BuzzFeed News is launching an experiment this week called “Outside Your Bubble,” an attempt to give our audience a glimpse at what’s happening outside their own social media spaces.

The Outside Your Bubble feature will appear as a module at the bottom of some widely shared news articles and will pull in what people are saying about the piece on Twitter, Facebook, Reddit, the web, and other platforms. It’s a response to the reality that often the same story will have two or three distinct and siloed conversations taking place around it on social media, where people talk to the like-minded without even being aware of other perspectives on the same reporting.

Our goal is to give readers a sense of these conversations around an article, and to add a kind of transparency that has been lost in the rise of social-media-driven filter bubbles. We view it in part as a way to amplify the work of BuzzFeed News reporters, and to add for readers a sense of the context in which news lives now.

And if you think there’s a relevant viewpoint we’re missing, you can contact the curator at bubble@buzzfeed.com.

Source: Helping You See Outside Your Bubble | Ben Smith for BuzzFeed

Editorial Perspective and Diminishing Returns

The big caveat on this type of journalistic functionality is that it may become a game of diminishing returns. When a new story comes out, most of the current ecosystem is geared too heavily towards freshness: which story is newest? It would be far richer if there were better canonical ways of indicating which articles were the most thorough, accurate, timely and interesting instead of just focusing on which was simply the most recent. Google News, as an example, might feature a breaking story for several hours, but thereafter every Tom, Dick, and Harry outlet on the planet will have their version of the story–often just a poorer quality rehash of the original without any new content–which somehow becomes the top of the heap because it’s the newest in the batch. Aram Zucker-Scharff mentioned this type of issue a few days ago in a tweetstorm which I touched upon last week.

Worse, for the feature to work well it relies on the continuing compilation of responses, and the editorial effort required seems somewhat wasted because, over time, the audience for the article slowly diminishes. Thus the largest portion of the audience will see no commentary at all, while the ever-dwindling stream of later readers gets the richer content. This is just the opposite of the aphorism “the early bird gets the worm.” Even if the outlet compiled responses from social media as the story was being written in real time, it would be a huge effort to stay current and capture eyeballs at scale. Hopefully the two effects will balance each other out, creating an overall increase in value for both the publisher and the audience and having a more profound effect on the overall journalism ecosystem.

Personally and from a user-experience perspective, I’d like to have the ability to subscribe to an article I read and enjoyed so that I can come back to it at a prescribed later date to see what the further thoughts on it were. As things stand, it’s painfully difficult and time-consuming as a reader to attempt to engage with interesting pieces at a deeper level. Publications that can do this type of coverage and/or provide further analysis on ongoing topics will also have a potential edge over me-too publications that simply rehash the same stories on a regular basis. Outlets could also leverage this type of user interface, and readers’ desire for it, to increase their relationship with their readers by providing value that others won’t or can’t.

Want more on “The IndieWeb and Journalism”?
See: Some thoughts about how journalists could improve their online presences with IndieWeb principles along with a mini-case study of a site that is employing some of these ideas.

In some sense, some of this journalistic workflow reminds me how much I miss Slate.com’s Today’s Papers feature, in which someone read through the early-edition copies of 4-5 major newspapers, did a quick synopsis of the day’s headlines, and then analyzed the coverage of each to show how the stories differed, who got the real scoop, and at times declared a “winner” in coverage so that readers could then focus on reading that particular piece from that particular outlet.

Backfeed in action

What do you think about this idea? Will it change journalism and how readers consume it?

As always, you can feel free to comment on this story directly below, but you can also go to most of the syndicated versions of this post indicated below and reply to or comment on them there. Your responses via Twitter, Facebook, and Google+ will be backfed via Brid.gy to this post and appear as comments below, so the entire audience will be able to see the otherwise disaggregated conversation compiled into one place.

If you prefer to own the content of your own comment or are worried your voice could be “moderated out of existence” (an experience I’ve felt the sting of in the past), feel free to post your response on your own website or blog, include a permalink to this article in your response, put the URL of your commentary into the box labeled “URL/Permalink of your Article”, and then click the “Ping Me” button. My site will then grab your response and add it to the comment stream with all the others.
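For what it’s worth, the receiving half of that exchange is just as simple. Before a webmention is displayed as a comment, the receiving site is supposed to verify that the claimed source really does link to the target. The sketch below shows only that mandatory check, using hypothetical URLs; it is not the actual plugin code running on this site, which would also parse the source page’s microformats to pull out the author, avatar, and reply text.

```python
import requests


def verify_webmention(source, target):
    """Receiver-side check: confirm the source page actually links to the target.

    A real receiver would go on to parse the source's microformats (h-entry,
    h-card) to extract the author and comment text; this sketch stops at the
    verification step required by the Webmention protocol.
    """
    resp = requests.get(source, timeout=10)
    resp.raise_for_status()
    if target not in resp.text:
        raise ValueError("source does not mention target; rejecting webmention")
    return {"source": source, "target": target, "html": resp.text}


# Hypothetical URLs: a reply posted on your own site, pointing at this post.
mention = verify_webmention(
    source="https://your-blog.example/2017/02/my-reply",
    target="https://this-site.example/2017/02/buzzfeed-backfeed",
)
```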

Backfeed on!

H/T to Ryan Barrett for pointing out the BuzzFeed article.


JUMP Math, a teaching method that’s proving there’s no such thing as a bad math student | Quartz

Continue reading “JUMP Math, a teaching method that’s proving there’s no such thing as a bad math student | Quartz”

Income inequality linked to export “complexity” | MIT News

Income inequality linked to export “complexity” by Larry Hardesty (MIT News)
The mix of products that countries export is a good predictor of income distribution, study finds.

Continue reading “Income inequality linked to export “complexity” | MIT News”

Trump’s F-35 Calls Came With a Surprise: Rival CEO Was Listening – Bloomberg

Trump's F-35 Calls Came With a Surprise: Rival CEO Was Listening by Anthony Capaccio (Bloomberg.com)
Days before taking office, President-elect Donald Trump made two surprise calls to the Air Force general managing the Pentagon’s largest weapons program, the Lockheed Martin Corp. F-35 jet.

Continue reading “Trump’s F-35 Calls Came With a Surprise: Rival CEO Was Listening – Bloomberg”


Kellyanne Conway Sparks Media Debate About Interviewing Trump Advisers | Fortune.com

News Outlets Wrestle With Whether to Stop Interviewing Trump Advisers by Mathew Ingram (Fortune)
Some news programs have said they will no longer interview Kellyanne Conway because she isn't credible.

Continue reading “Kellyanne Conway Sparks Media Debate About Interviewing Trump Advisers | Fortune.com”

Pulling the plug on @tumblr, and why is @feedly so hard to use?

Pulling the plug on @tumblr, and why is @feedly so hard to use? by David Mead (davidjohnmead.com)
I’ve now unfollowed everyone on Tumblr. It’s been turning into a dust bowl for me, people I followed haven’t been posting in years. Since the ads made the app annoying for me to u…

Ownership vs. Ownership

Ownership vs. Ownership by Matigo (Matigo dot See, eh?)
A Snap is a universal Linux package that works on (just about) any distribution or device. Snaps are faster to install, easier to create, safer to run, and they update automatically and transactionally so the software is always fresh and never broken. What this means for a normal person is that a tiny computer the size of a Starbucks coffee could be shipped to them and run on their home network. This would then interface with another server they have running in "the cloud". Rather than SSH into a Linux machine and install a bunch of disparate software packages, fiddle with configuration settings, and rage at Apache misconfigurations, a person would instead type something like the following into the public web server: sudo snap install 10centuries

For those in the IndieWeb who want to take “own your data” to the highest level, 10centuries sounds like an interesting project.
