👓 Why Facts Don’t Change Our Minds | The New Yorker

Why Facts Don’t Change Our Minds by Elizabeth Kolbert (The New Yorker)
New discoveries about the human mind show the limitations of reason.
The vaunted human capacity for reason may have more to do with winning arguments than with thinking straight. (Illustration by Gérard DuBois)

In 1975, researchers at Stanford invited a group of undergraduates to take part in a study about suicide. They were presented with pairs of suicide notes. In each pair, one note had been composed by a random individual, the other by a person who had subsequently taken his own life. The students were then asked to distinguish between the genuine notes and the fake ones.

Some students discovered that they had a genius for the task. Out of twenty-five pairs of notes, they correctly identified the real one twenty-four times. Others discovered that they were hopeless. They identified the real note in only ten instances.

As is often the case with psychological studies, the whole setup was a put-on. Though half the notes were indeed genuine—they’d been obtained from the Los Angeles County coroner’s office—the scores were fictitious. The students who’d been told they were almost always right were, on average, no more discerning than those who had been told they were mostly wrong.

In the second phase of the study, the deception was revealed. The students were told that the real point of the experiment was to gauge their responses to thinking they were right or wrong. (This, it turned out, was also a deception.) Finally, the students were asked to estimate how many suicide notes they had actually categorized correctly, and how many they thought an average student would get right. At this point, something curious happened. The students in the high-score group said that they thought they had, in fact, done quite well—significantly better than the average student—even though, as they’d just been told, they had zero grounds for believing this. Conversely, those who’d been assigned to the low-score group said that they thought they had done significantly worse than the average student—a conclusion that was equally unfounded.

“Once formed,” the researchers observed dryly, “impressions are remarkably perseverant.”

A few years later, a new set of Stanford students was recruited for a related study. The students were handed packets of information about a pair of firefighters, Frank K. and George H. Frank’s bio noted that, among other things, he had a baby daughter and he liked to scuba dive. George had a small son and played golf. The packets also included the men’s responses on what the researchers called the Risky-Conservative Choice Test. According to one version of the packet, Frank was a successful firefighter who, on the test, almost always went with the safest option. In the other version, Frank also chose the safest option, but he was a lousy firefighter who’d been put “on report” by his supervisors several times. Once again, midway through the study, the students were informed that they’d been misled, and that the information they’d received was entirely fictitious. The students were then asked to describe their own beliefs. What sort of attitude toward risk did they think a successful firefighter would have? The students who’d received the first packet thought that he would avoid it. The students in the second group thought he’d embrace it.

Even after the evidence “for their beliefs has been totally refuted, people fail to make appropriate revisions in those beliefs,” the researchers noted. In this case, the failure was “particularly impressive,” since two data points would never have been enough information to generalize from.

The Stanford studies became famous. Coming from a group of academics in the nineteen-seventies, the contention that people can’t think straight was shocking. It isn’t any longer. Thousands of subsequent experiments have confirmed (and elaborated on) this finding. As everyone who’s followed the research—or even occasionally picked up a copy of Psychology Today—knows, any graduate student with a clipboard can demonstrate that reasonable-seeming people are often totally irrational. Rarely has this insight seemed more relevant than it does right now. Still, an essential puzzle remains: How did we come to be this way?

In a new book, “The Enigma of Reason” (Harvard), the cognitive scientists Hugo Mercier and Dan Sperber take a stab at answering this question. Mercier, who works at a French research institute in Lyon, and Sperber, now based at the Central European University, in Budapest, point out that reason is an evolved trait, like bipedalism or three-color vision. It emerged on the savannas of Africa, and has to be understood in that context.

Stripped of a lot of what might be called cognitive-science-ese, Mercier and Sperber’s argument runs, more or less, as follows: Humans’ biggest advantage over other species is our ability to coöperate. Coöperation is difficult to establish and almost as difficult to sustain. For any individual, freeloading is always the best course of action. Reason developed not to enable us to solve abstract, logical problems or even to help us draw conclusions from unfamiliar data; rather, it developed to resolve the problems posed by living in collaborative groups.

“Reason is an adaptation to the hypersocial niche humans have evolved for themselves,” Mercier and Sperber write. Habits of mind that seem weird or goofy or just plain dumb from an “intellectualist” point of view prove shrewd when seen from a social “interactionist” perspective.

Consider what’s become known as “confirmation bias,” the tendency people have to embrace information that supports their beliefs and reject information that contradicts them. Of the many forms of faulty thinking that have been identified, confirmation bias is among the best catalogued; it’s the subject of entire textbooks’ worth of experiments. One of the most famous of these was conducted, again, at Stanford. For this experiment, researchers rounded up a group of students who had opposing opinions about capital punishment. Half the students were in favor of it and thought that it deterred crime; the other half were against it and thought that it had no effect on crime.

The students were asked to respond to two studies. One provided data in support of the deterrence argument, and the other provided data that called it into question. Both studies—you guessed it—were made up, and had been designed to present what were, objectively speaking, equally compelling statistics. The students who had originally supported capital punishment rated the pro-deterrence data highly credible and the anti-deterrence data unconvincing; the students who’d originally opposed capital punishment did the reverse. At the end of the experiment, the students were asked once again about their views. Those who’d started out pro-capital punishment were now even more in favor of it; those who’d opposed it were even more hostile.

If reason is designed to generate sound judgments, then it’s hard to conceive of a more serious design flaw than confirmation bias. Imagine, Mercier and Sperber suggest, a mouse that thinks the way we do. Such a mouse, “bent on confirming its belief that there are no cats around,” would soon be dinner. To the extent that confirmation bias leads people to dismiss evidence of new or underappreciated threats—the human equivalent of the cat around the corner—it’s a trait that should have been selected against. The fact that both we and it survive, Mercier and Sperber argue, proves that it must have some adaptive function, and that function, they maintain, is related to our “hypersociability.”

Mercier and Sperber prefer the term “myside bias.” Humans, they point out, aren’t randomly credulous. Presented with someone else’s argument, we’re quite adept at spotting the weaknesses. Almost invariably, the positions we’re blind about are our own.

A recent experiment performed by Mercier and some European colleagues neatly demonstrates this asymmetry. Participants were asked to answer a series of simple reasoning problems. They were then asked to explain their responses, and were given a chance to modify them if they identified mistakes. The majority were satisfied with their original choices; fewer than fifteen per cent changed their minds in step two.

In step three, participants were shown one of the same problems, along with their answer and the answer of another participant, who’d come to a different conclusion. Once again, they were given the chance to change their responses. But a trick had been played: the answers presented to them as someone else’s were actually their own, and vice versa. About half the participants realized what was going on. Among the other half, suddenly people became a lot more critical. Nearly sixty per cent now rejected the responses that they’d earlier been satisfied with.


This lopsidedness, according to Mercier and Sperber, reflects the task that reason evolved to perform, which is to prevent us from getting screwed by the other members of our group. Living in small bands of hunter-gatherers, our ancestors were primarily concerned with their social standing, and with making sure that they weren’t the ones risking their lives on the hunt while others loafed around in the cave. There was little advantage in reasoning clearly, while much was to be gained from winning arguments.

Among the many, many issues our forebears didn’t worry about were the deterrent effects of capital punishment and the ideal attributes of a firefighter. Nor did they have to contend with fabricated studies, or fake news, or Twitter. It’s no wonder, then, that today reason often seems to fail us. As Mercier and Sperber write, “This is one of many cases in which the environment changed too quickly for natural selection to catch up.”

Steven Sloman, a professor at Brown, and Philip Fernbach, a professor at the University of Colorado, are also cognitive scientists. They, too, believe sociability is the key to how the human mind functions or, perhaps more pertinently, malfunctions. They begin their book, “The Knowledge Illusion: Why We Never Think Alone” (Riverhead), with a look at toilets.

Virtually everyone in the United States, and indeed throughout the developed world, is familiar with toilets. A typical flush toilet has a ceramic bowl filled with water. When the handle is depressed, or the button pushed, the water—and everything that’s been deposited in it—gets sucked into a pipe and from there into the sewage system. But how does this actually happen?

In a study conducted at Yale, graduate students were asked to rate their understanding of everyday devices, including toilets, zippers, and cylinder locks. They were then asked to write detailed, step-by-step explanations of how the devices work, and to rate their understanding again. Apparently, the effort revealed to the students their own ignorance, because their self-assessments dropped. (Toilets, it turns out, are more complicated than they appear.)

Sloman and Fernbach see this effect, which they call the “illusion of explanatory depth,” just about everywhere. People believe that they know way more than they actually do. What allows us to persist in this belief is other people. In the case of my toilet, someone else designed it so that I can operate it easily. This is something humans are very good at. We’ve been relying on one another’s expertise ever since we figured out how to hunt together, which was probably a key development in our evolutionary history. So well do we collaborate, Sloman and Fernbach argue, that we can hardly tell where our own understanding ends and others’ begins.

“One implication of the naturalness with which we divide cognitive labor,” they write, is that there’s “no sharp boundary between one person’s ideas and knowledge” and “those of other members” of the group.

This borderlessness, or, if you prefer, confusion, is also crucial to what we consider progress. As people invented new tools for new ways of living, they simultaneously created new realms of ignorance; if everyone had insisted on, say, mastering the principles of metalworking before picking up a knife, the Bronze Age wouldn’t have amounted to much. When it comes to new technologies, incomplete understanding is empowering.

Where it gets us into trouble, according to Sloman and Fernbach, is in the political domain. It’s one thing for me to flush a toilet without knowing how it operates, and another for me to favor (or oppose) an immigration ban without knowing what I’m talking about. Sloman and Fernbach cite a survey conducted in 2014, not long after Russia annexed the Ukrainian territory of Crimea. Respondents were asked how they thought the U.S. should react, and also whether they could identify Ukraine on a map. The farther off base they were about the geography, the more likely they were to favor military intervention. (Respondents were so unsure of Ukraine’s location that the median guess was wrong by eighteen hundred miles, roughly the distance from Kiev to Madrid.)

Surveys on many other issues have yielded similarly dismaying results. “As a rule, strong feelings about issues do not emerge from deep understanding,” Sloman and Fernbach write. And here our dependence on other minds reinforces the problem. If your position on, say, the Affordable Care Act is baseless and I rely on it, then my opinion is also baseless. When I talk to Tom and he decides he agrees with me, his opinion is also baseless, but now that the three of us concur we feel that much more smug about our views. If we all now dismiss as unconvincing any information that contradicts our opinion, you get, well, the Trump Administration.

“This is how a community of knowledge can become dangerous,” Sloman and Fernbach observe. The two have performed their own version of the toilet experiment, substituting public policy for household gadgets. In a study conducted in 2012, they asked people for their stance on questions like: Should there be a single-payer health-care system? Or merit-based pay for teachers? Participants were asked to rate their positions depending on how strongly they agreed or disagreed with the proposals. Next, they were instructed to explain, in as much detail as they could, the impacts of implementing each one. Most people at this point ran into trouble. Asked once again to rate their views, they ratcheted down the intensity, so that they either agreed or disagreed less vehemently.

Sloman and Fernbach see in this result a little candle for a dark world. If we—or our friends or the pundits on CNN—spent less time pontificating and more trying to work through the implications of policy proposals, we’d realize how clueless we are and moderate our views. This, they write, “may be the only form of thinking that will shatter the illusion of explanatory depth and change people’s attitudes.”

One way to look at science is as a system that corrects for people’s natural inclinations. In a well-run laboratory, there’s no room for myside bias; the results have to be reproducible in other laboratories, by researchers who have no motive to confirm them. And this, it could be argued, is why the system has proved so successful. At any given moment, a field may be dominated by squabbles, but, in the end, the methodology prevails. Science moves forward, even as we remain stuck in place.

In “Denying to the Grave: Why We Ignore the Facts That Will Save Us” (Oxford), Jack Gorman, a psychiatrist, and his daughter, Sara Gorman, a public-health specialist, probe the gap between what science tells us and what we tell ourselves. Their concern is with those persistent beliefs which are not just demonstrably false but also potentially deadly, like the conviction that vaccines are hazardous. Of course, what’s hazardous is not being vaccinated; that’s why vaccines were created in the first place. “Immunization is one of the triumphs of modern medicine,” the Gormans note. But no matter how many scientific studies conclude that vaccines are safe, and that there’s no link between immunizations and autism, anti-vaxxers remain unmoved. (They can now count on their side—sort of—Donald Trump, who has said that, although he and his wife had their son, Barron, vaccinated, they refused to do so on the timetable recommended by pediatricians.)

The Gormans, too, argue that ways of thinking that now seem self-destructive must at some point have been adaptive. And they, too, dedicate many pages to confirmation bias, which, they claim, has a physiological component. They cite research suggesting that people experience genuine pleasure—a rush of dopamine—when processing information that supports their beliefs. “It feels good to ‘stick to our guns’ even if we are wrong,” they observe.

The Gormans don’t just want to catalogue the ways we go wrong; they want to correct for them. There must be some way, they maintain, to convince people that vaccines are good for kids, and handguns are dangerous. (Another widespread but statistically insupportable belief they’d like to discredit is that owning a gun makes you safer.) But here they encounter the very problems they have enumerated. Providing people with accurate information doesn’t seem to help; they simply discount it. Appealing to their emotions may work better, but doing so is obviously antithetical to the goal of promoting sound science. “The challenge that remains,” they write toward the end of their book, “is to figure out how to address the tendencies that lead to false scientific belief.”

“The Enigma of Reason,” “The Knowledge Illusion,” and “Denying to the Grave” were all written before the November election. And yet they anticipate Kellyanne Conway and the rise of “alternative facts.” These days, it can feel as if the entire country has been given over to a vast psychological experiment being run either by no one or by Steve Bannon. Rational agents would be able to think their way to a solution. But, on this matter, the literature is not reassuring. 

Source: Why Facts Don’t Change Our Minds | The New Yorker


Thanks to Trump, Scientists Are Going to Run for Office | The Atlantic

Thanks to Trump, Scientists Are Planning to Run for Office by Ed Yong (The Atlantic)
… and they’ve got help.

Continue reading “Thanks to Trump, Scientists Are Going to Run for Office | The Atlantic”


Science and technology: what happened in 2016 – Daniel Lemire’s blog

Science and technology: what happened in 2016 by Daniel Lemire (Daniel Lemire's blog)

This year, you are able to buy CRISPR-based gene editing toolkits for $150 on the Internet as well as autonomous drones, and you can ask your Amazon Echo to play your favorite music or give you a traffic report. You can buy a fully functional Android tablet for $40 on Amazon. If you have made it to a Walmart near you lately, you know that kids are going to receive dirt cheap remote-controlled flying drones for Christmas this year. Amazon now delivers packages by drone with its Prime Air service. There are 2.6 billion smartphones in the world.

So what else happened in 2016?

January:

Cancer death rates have dropped by 23% since 1991. And, no, it is not explained away by the fact that fewer people smoke.

Though most insects are short-lived, we have learned that some ants never age.

February:

In the UK, scientists have been given permission to genetically modify human embryos.

As you age, your body accumulates “senescent cells”. These are non-functional cells that are harmless in small quantities but can cause trouble when they accumulate. Researchers have found that by removing them (using drugs called senolytics), they could extend the life of mice. A company funded by Amazon.com’s CEO Jeff Bezos, Unity Biotechnology, is working to bring this technology to human beings. The CEO of this new company has given a talk which you can watch on YouTube.

Google reports that it can determine the location of almost any picture with superhuman ability. Take a picture of your favorite landscape and Google will tell you where you are just by looking at the picture.

Researchers are attempting to regrow knee cartilage in human beings using stem cells.

March:

Google’s AlphaGo defeated Lee Sedol, a world-champion Go player. Together with the defeat of Kasparov at the hands of IBM’s Deep Blue 20 years ago, this means there is no longer any major board game where human beings are superior to computers.

Google supplemented its Google Photo service with advanced artificial intelligence. If you are looking for pictures of your dog, Google can find them for you, without anyone ever entering metadata.

April:

Europe has authorized a gene therapy to help cure children. Soon enough, we will routinely use genetic engineering to cure diseases.

We have entered the era of fully functional consumer-grade virtual-reality gear. Many of us have experienced high-quality virtual experiences for the first time in 2016. According to some estimates, over 3 million VR units were sold in 2016: 750k PlayStation VR, 261k Google DayDream, 2.3 million Gear VR, 450k HTC Vive and 355k Oculus Rift.

Dementia rates in human beings are falling. We do not know why.

May:

Foxconn, the company that makes the iPhone for Apple, has replaced 60,000 employees with robots.

July:

Pokemon Go is the first massively popular augmented-reality game.

China holds the first clinical trials of CRISPR-based anti-cancer therapies.

August:

Netflix now has more subscribers than any cable TV company.

There are ongoing clinical trials where blood plasma from young people is given to older people, in the hope of rejuvenating them. Blood plasma is abundant and routinely discarded in practice, so if it were to work, it might be quite practical. (Further research that appeared later this year allows us to be somewhat pessimistic regarding the results of these trials in the sense that rejuvenation might require the removal of aging agents rather than the addition of youthful factors.)

Singapore is the first city in the world to have self-driving taxis.

We know that some invertebrates, like lobsters, do not age the way we do. For example, we have recently found 140-year-old lobsters in the wild, and they aren’t the oldest. The hydra has constant mortality throughout its life. There is definitely a lot of variation in how different species age. Some sea creatures, like sharks, are both long-lived and vertebrates. We don’t know how long sharks can live, but we have found a shark that is at least 272 years old. This suggests that even vertebrates, and maybe mammals, could be effectively ageless.

September:

What would happen if you took stem cells and put them in a brain? Would they turn into functional neurons and integrate into your brain… potentially replacing lost neurons? Researchers have shown that it works in mice.

Japan had 153 centenarians in 1963. There are now over 60,000 centenarians in Japan. According to researchers (Christensen et al., 2009), a child born today in a developed country has a median life expectancy of over 100 years.

The iPhone 7 is just as fast as a 2013 MacBook Pro. That is, our best smartphones from 2016 can run some software just as fast as the best laptop from three years ago.

Software can now read mammograms more accurately than any doctor. Software is also faster and cheaper.

Google upgraded its “Google Translate” service using a new engine leveraging recent progress in “deep learning” algorithms. The quality of the translations has been much improved. What is really impressive is that it works at “Google scale” meaning that all of us can use it at the same time.

October:

The most popular game console of this generation, the PlayStation 4, released a well-reviewed virtual-reality headset.

Between 2011 and 2016, traditional TV viewing by 18-24-year-olds dropped by more than 9 hours per week.

Mammal heart muscles do not regenerate. So if your heart is starved of oxygen and damaged, you may never recover the lost function. However, scientists have shown (in monkeys) that we could induce regeneration with stem cells.

Microsoft claims to be able to match human beings in conversational speech recognition.

Tesla, the car maker, now ships all its new cars with the hardware needed for full self-driving. Even General Motors is producing self-driving cars in Michigan. You should still check whether it is legal for you to drive without holding the wheel.

November:

Crazy researchers gave young human blood to old mice. The old mice were rejuvenated. Though we do not know yet what it means exactly, it is strong evidence that your age is inscribed in your blood and that by changing your blood, we can age or rejuvenate you, to a point.

On the same theme, the Conboy lab at Berkeley, with funding from Google (via Calico) has shown, using a fancy blood transfer technique, that there are “aging agents” in old mice blood. If you take these agents and put them into a young mouse, it is aged. Presumably, if we were to identify these agents, we could remove them with drugs or dialysis and rejuvenate old mice. In time, we could maybe rejuvenate human beings as well.

Facebook’s CEO, Mark Zuckerberg, said that it will soon be normal for human beings to live over 100 years. This is supported by the work of demographers such as James W. Vaupel.

December:

It was generally believed that women eventually ran out of eggs and became irreversibly sterile. Researchers have found that a cancer drug can be used to spur the generation of new eggs, thus potentially reversing age-related infertility. In the future, menopause could be reversed.

Bones grown in a lab were successfully transplanted in human beings.

We don’t know exactly how long seabirds can live. It is hard work to study long-lived animals and it takes a long time (obviously). Biologists have encountered unexpected problems, such as the fact that the rings they use to tag the birds wear out faster than the birds do. In 1956, Chandler Robbins tagged an albatross. The 66-year-old albatross laid an egg this year.

Old people (in their 30s and 40s) can have young babies. Evidently, biological cells have a way to reset their age. Ten years ago (in 2006), Shinya Yamanaka showed how to do just that in the lab by activating only four genes. So no matter how old you are, I can take some of your cells and “reset them” so that they are “young again”. Until now, however, nobody knew how to apply this to a whole multicellular organism. The New York Times reports that researchers from the Salk Institute did just that. (They have a video on YouTube.) By activating the four genes, they showed that they could rejuvenate human skin tissue in vitro, and then that they could extend the lifespan of mice.

Source: Science and technology: what happened in 2016 – Daniel Lemire’s blog


The Food Lab: Better Home Cooking Through Science by J. Kenji López-Alt

The Food Lab: Better Home Cooking Through Science by J. Kenji López-Alt (amazon.com)
The New York Times bestselling winner of the 2016 James Beard Award for General Cooking and the IACP Cookbook of the Year Award. A grand tour of the science of cooking explored through popular American dishes, illustrated in full color. Ever wondered how to pan-fry a steak with a charred crust and an interior that's perfectly medium-rare from edge to edge when you cut into it? How to make homemade mac 'n' cheese that is as satisfyingly gooey and velvety-smooth as the blue box stuff, but far tastier? How to roast a succulent, moist turkey (forget about brining!)―and use a foolproof method that works every time? As Serious Eats's culinary nerd-in-residence, J. Kenji López-Alt has pondered all these questions and more. In The Food Lab, Kenji focuses on the science behind beloved American dishes, delving into the interactions between heat, energy, and molecules that create great food. Kenji shows that often, conventional methods don’t work that well, and home cooks can achieve far better results using new―but simple―techniques. In hundreds of easy-to-make recipes with over 1,000 full-color images, you will find out how to make foolproof Hollandaise sauce in just two minutes, how to transform one simple tomato sauce into a half dozen dishes, how to make the crispiest, creamiest potato casserole ever conceived, and much more.

Weekly Recap: Interesting Articles 7/24-7/31 2016

Some of the interesting things I saw and read this week

Went on vacation or fell asleep at the internet wheel this week? Here’s some of the interesting stuff you missed.

Science & Math

Publishing

Indieweb, Internet, Identity, Blogging, Social Media

General


Some Thoughts on Academic Publishing and “Who’s downloading pirated papers? Everyone” from Science | AAAS

Who's downloading pirated papers? Everyone by John Bohannon (Science | AAAS)
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.

Sci Hub has been in the news quite a bit over the past half a year and the bookmarked article here gives some interesting statistics. I’ll preface some of the following editorial critique with the fact that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.

From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci-Hub. Nor did it link out to (or fully quote) Alicia Wise’s Twitter posts, or link to her rebuttal list of 20 ways to access Elsevier’s content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.

Sadly, Elsevier’s list of 20 ways to get free or inexpensive access doesn’t really cover graduate students or researchers in poorer countries, who are the likeliest group of people using Sci-Hub, unless they fraudulently claim to belong to a class they’re not part of, and is that morally any better than the original theft? The list is almost assuredly never used by patients, who appear to be covered under one of the options, because that option is painfully undiscoverable behind the typical $30-per-paper paywalls. Not only is this patchwork hodgepodge of free access difficult to discern, but one must keep in mind that this is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).

Consider this experiment, which could be a good follow-up to the article: is it easier to find and download a paper by title/author/DOI via Sci-Hub (a minute) than through any of the other publishers’ platforms with a university subscription (several minutes) or without a subscription (an hour or more, up to days)? Just consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci-Hub, versus the days and/or weeks it would take to jump through the multiple hoops to first discover, then gain access to, and finally download them from the more than 14 providers involved (and this presumes the others provide some type of “access” like Elsevier does).

Those who lived through the Napster revolution in music will realize that the dead simplicity of that system is primarily what helped kill the music business compared to the ecosystem that exists now, with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to get completely killed, it is going to need to create the iTunes of academia. I suspect they’ll have internal bean-counters watching the percentage of the total (now apparently 5%) and will probably only do something once it passes a much larger threshold, though I imagine that they’re really hoping the number stays stable, which signals that they’re not really concerned. They’re far more likely to continue to maintain their status quo practices.

Some of this ease-of-access argument is truly borne out by the statistics on open-access papers that are downloaded via Sci-Hub: it’s simply easier to both find and download them that way than through traditional methods; there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?

“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone

Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci-Hub. God forbid some enterprising hacker were to create a LibX community version for Sci-Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX which make their content easy to access? If we consider the analogy of academic papers to the introduction of machine guns in World War I, why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?

My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown in the article:

She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers to which one would have to subscribe to have access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor its competitors make their material as easy to find and access as the New York Times does. Nor do they discount access in an attempt to find the subscription price their users find financially acceptable. Case in point: while I often read the New York Times, I rarely exceed their monthly limit of free articles, so I don’t need any type of paid subscription. Solely because they made me an interesting offer to subscribe for 8 weeks for 99 cents, I took them up on it and renewed that deal for another 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no fewer than 5 different offers at ever-decreasing price points (including the 99 cents for 8 weeks I had been getting!) to try to keep my subscription. Neither Elsevier nor any of its competitors has ever tried (much less tried so hard) to earn my business. (I’ll further posit that this is because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to the textbook business pressuring professors on textbook adoption rather than trying to sell directly to the end consumer, the student, which I’ve written about before.)

(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writings, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and the inability to repeat experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?

Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data from the survey will be used. There’s always the possibility that logged-in users who indicate they’re circumventing copyright are opening themselves up to litigation.

I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting to do such meta-analysis to guide the purchase of potential intellectual property for patent trolling as well.

Finally, for those who haven’t done it (ever or recently), I’ll recommend that it’s certainly well worth their time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of their academic terms to train library users on research tools and methods. You’ll save yourself a huge amount of time.


Donald Forsdyke Indicates the Concept of Information in Biology Predates Claude Shannon

In the 1870s Ewald Hering in Prague and Samuel Butler in London laid the foundations. Butler's work was later taken up by Richard Semon in Munich, whose writings inspired the young Erwin Schrodinger in the early decades of the 20th century.

When it was first published, I read Kevin Hartnett’s article and interview with Christoph Adami, The Information Theory of Life, in Quanta Magazine. I recently revisited it, read through the commentary, and stumbled upon an interesting quote relating to the history of information in biology:

Polymath Adami has ‘looked at so many fields of science’ and has correctly indicated the underlying importance of information theory, to which he has made important contributions. However, perhaps because the interview was concerned with the origin of life and was edited and condensed, many readers may get the impression that IT is only a few decades old. However, information ideas in biology can be traced back to at least 19th century sources. In the 1870s Ewald Hering in Prague and Samuel Butler in London laid the foundations. Butler’s work was later taken up by Richard Semon in Munich, whose writings inspired the young Erwin Schrodinger in the early decades of the 20th century. The emergence of his text – “What is Life” – from Dublin in the 1940s, inspired those who gave us DNA structure and the associated information concepts in “the classic period” of molecular biology. For more please see: Forsdyke, D. R. (2015) History of Psychiatry 26 (3), 270-287.

Donald Forsdyke, bioinformatician and theoretical biologist
in response to The Information Theory of Life in Quanta Magazine on

These two historical references predate Claude Shannon’s mathematical formalization of information in A Mathematical Theory of Communication (The Bell System Technical Journal, 1948) and even Erwin Schrödinger’s lecture (1943) and subsequent book What is Life? (1944).
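For readers who want the formalization itself (a brief aside, not part of Forsdyke’s argument): Shannon measured the information of a source X emitting symbols with probabilities p(x) by its entropy,

H(X) = -\sum_{x} p(x) \log_2 p(x),

expressed in bits per symbol when the logarithm is taken base 2.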

For those interested in reading more on this historical tidbit, I’ve dug up a copy of the primary Forsdyke reference which first appeared on arXiv (prior to its ultimate publication in History of Psychiatry [.pdf]):

🔖 [1406.1391] ‘A Vehicle of Symbols and Nothing More.’ George Romanes, Theory of Mind, Information, and Samuel Butler by Donald R. Forsdyke  [1]
Submitted on 4 Jun 2014 (v1), last revised 13 Nov 2014 (this version, v2)

Abstract: Today’s ‘theory of mind’ (ToM) concept is rooted in the distinction of nineteenth century philosopher William Clifford between ‘objects’ that can be directly perceived, and ‘ejects,’ such as the mind of another person, which are inferred from one’s subjective knowledge of one’s own mind. A founder, with Charles Darwin, of the discipline of comparative psychology, George Romanes considered the minds of animals as ejects, an idea that could be generalized to ‘society as eject’ and, ultimately, ‘the world as an eject’ – mind in the universe. Yet, Romanes and Clifford only vaguely connected mind with the abstraction we call ‘information,’ which needs ‘a vehicle of symbols’ – a material transporting medium. However, Samuel Butler was able to address, in informational terms depleted of theological trappings, both organic evolution and mind in the universe. This view harmonizes with insights arising from modern DNA research, the relative immortality of ‘selfish’ genes, and some startling recent developments in brain research.

Comments: Accepted for publication in History of Psychiatry. 31 pages including 3 footnotes. Based on a lecture given at Santa Clara University, February 28th 2014, at a Bannan Institute Symposium on ‘Science and Seeking: Rethinking the God Question in the Lab, Cosmos, and Classroom.’

The original arXiv article also referenced two lectures which are appended below:

[Original Draft of this was written on December 14, 2015.]

References

[1]
D. R. Forsdyke, “‘A vehicle of symbols and nothing more’: George Romanes, theory of mind, information, and Samuel Butler,” History of Psychiatry, vol. 26, no. 3, pp. 270–287, Aug. 2015 [Online]. Available: http://journals.sagepub.com/doi/abs/10.1177/0957154X14562755

No, It’s Not Your Opinion. You’re Just Wrong. | Houston Press

Before you crouch behind your Shield of Opinion you need to ask yourself two questions: 1. Is this actually an opinion? 2. If it is an opinion how informed is it and why do I hold it?

This has to be the best article of the entire year: “No, It’s Not Your Opinion. You’re Just Wrong.”

It also not coincidentally is the root of the vast majority of the problems the world is currently facing. There are so many great quotes here, I can’t pick a favorite, so I’ll highlight the same one Kimb Quark did that brought my attention to it:

“There’s nothing wrong with an opinion on those things. The problem comes from people whose opinions are actually misconceptions. If you think vaccines cause autism you are expressing something factually wrong, not an opinion. The fact that you may still believe that vaccines cause autism does not move your misconception into the realm of valid opinion. Nor does the fact that many others share this opinion give it any more validity.”

Jef Rouner
in No, It’s Not Your Opinion. You’re Just Wrong | Houston Press

 

Pictured: A bunch of people who were murdered regardless of someone’s opinion on the subject

Popular Science Books on Information Theory, Biology, and Complexity

The beginning of a four part series in which I provide a gradation of books and texts that lie in the intersection of the application of information theory, physics, and engineering practice to the area of biology.

Previously, I had made a large and somewhat random list of books which lie in the intersection of the application of information theory, physics, and engineering practice to the area of biology.  Below I’ll begin to do a somewhat better job of providing a finer gradation of technical level for both the hobbyist or the aspiring student who wishes to bring themselves to a higher level of understanding of these areas.  In future posts, I’ll try to begin classifying other texts into graduated strata as well.  The final list will be maintained here: Books at the Intersection of Information Theory and Biology.

Introductory / General Readership / Popular Science Books

These books are written on a generally non-technical level and give a broad overview of their topics with occasional forays into interesting or intriguing subtopics. They include little, if any, mathematical equations or conceptualization. Typically, any high school student should be able to read, follow, and understand the broad concepts behind these books.  Though often non-technical, these texts can give some useful insight into the topics at hand, even for the most advanced researchers.

Complexity: A Guided Tour by Melanie Mitchell (review)

Possibly one of the best places to start, this text gives a great overview of most of the major areas of study related to these fields.

Entropy Demystified: The Second Law Reduced to Plain Common Sense by Arieh Ben-Naim

One of the best books on the concept of entropy out there.  It can be read even by middle school students with no exposure to algebra and does a fantastic job of laying out the conceptualization of how entropy underlies large areas of the broader subject. Even those with Ph.D.’s in statistical thermodynamics can gain something useful from this lovely volume.

The Information: A History, a Theory, a Flood by James Gleick (review)

A relatively recent popular science volume covering various conceptualizations of what information is and how it’s been dealt with in science and engineering. Though it has its flaws, it’s certainly a good introduction for the beginner, particularly with regard to history.

The Origin of Species by Charles Darwin

One of the most influential pieces of writing known to man, this classic text is the foundation on which the major strides in modern biology have been built. A must-read for everyone on the planet.

Information, Entropy, Life and the Universe: What We Know and What We Do Not Know by Arieh Ben-Naim

Information Theory and Evolution by John Avery

The Touchstone of Life: Molecular Information, Cell Communication, and the Foundations of Life by Werner R. Loewenstein (review)

Information Theory, Evolution, and the Origin of Life by Hubert P. Yockey

The four books above have a significant amount of overlap. Though one could read all of them, I recommend that those pressed for time choose Ben-Naim first. As I write this I’ll note that Ben-Naim’s book is scheduled for release on May 30, 2015, but he’s been kind enough to allow me to read an advance copy while it was in process; it gets my highest recommendation in its class. Loewenstein covers a bit more than Avery who also has a more basic presentation. Most who continue with the subject will later come across Yockey’s Information Theory and Molecular Biology which is similar to his text here but written at a slightly higher level of sophistication. Those who finish at this level of sophistication might want to try Yockey third instead.

The Red Queen: Sex and the Evolution of Human Nature by Matt Ridley

Grammatical Man: Information, Entropy, Language, and Life  by Jeremy Campbell

Life’s Ratchet: How Molecular Machines Extract Order from Chaos by Peter M. Hoffmann

Complexity: The Emerging Science at the Edge of Order and Chaos by M. Mitchell Waldrop

The Big Picture: On the Origins of Life, Meaning, and the Universe Itself by Sean Carroll (Dutton, May 10, 2016)

In the coming weeks/months, I’ll try to continue putting recommended books on the remainder of the rest of the spectrum, the balance of which follows in outline form below. As always, I welcome suggestions and recommendations based on others’ experiences as well. If you’d like to suggest additional resources in any of the sections below, please do so via our suggestion box. For those interested in additional resources, please take a look at the ITBio Resources page which includes information about related research groups; references and journal articles; academic, research institutes, societies, groups, and organizations; and conferences, workshops, and symposia.

Lower Level Undergraduate

These books are written at a level that can be grasped and understood by most with a freshmen or sophomore university level. Coursework in math, science, and engineering will usually presume knowledge of calculus, basic probability theory, introductory physics, chemistry, and basic biology.

Upper Level Undergraduate

These books are written at a level that can be grasped and understood by those at a junior or senior university level. Coursework in math, science, and engineering may presume knowledge of probability theory, differential equations, linear algebra, complex analysis, abstract algebra, signal processing, organic chemistry, molecular biology, evolutionary theory, thermodynamics, advanced physics, and basic information theory.

Graduate Level

These books are written at a level that can be grasped and understood by most students working at the master’s level at most universities. Coursework presumes all the previously mentioned classes, though it may require a higher level of sub-specialization in one or more areas of mathematics, physics, biology, or engineering practice. Because of the depth and breadth of the disciplines covered here, many may feel the need to delve into areas outside of their particular specialization.


Schools of Thought in the Hard and Soft Sciences

A framework for determining the difference between the hard and soft sciences.

A recent post in one of the blogs at Discover Magazine the other day had me thinking about the shape of science over time.

Neuroscientists don’t seem to disagree on the big issues. Why are there no big ideas in neuroscience?

Neuroskeptic, Where Are The Big Ideas in Neuroscience? (Part 1)

The article made me wonder about the divide between the ‘soft’ and ‘hard’ sciences, and how we might better define and delineate them. Perhaps in a particular field, the greater the proliferation of “schools of thought,” the more likely something is to be a soft science? (Or, mathematically speaking, there’s an inverse relationship in a field between how well supported it is and the number of schools of thought it has.) I consider a school of thought to be a proposed hypothetical or theoretical structure meant to help advance the state of the art; adherents join one of several competing camps while evidence is built up (or not) until one side carries the day.

Firmness of science vs. number of schools of thought: a simple linear approximation of the relationship, though honestly something more like y = 1/x, which is asymptotic to the x and y axes, is far more realistic.

Theorem: The greater the proliferation of “schools of thought,” the more likely something is to be a soft science.
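To restate the hypothesis in rough symbols (these are informal stand-ins for the sketch above, not quantities anyone has measured): if S is the number of active schools of thought in a field and E is how well-evidenced the field is, the claim is approximately

S \approx k / E, \quad k > 0,

so S grows without bound as E approaches zero and collapses toward a single consensus view as E becomes large.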

Generally, in most of the hard sciences like physics, biology, or microbiology, there don’t seem to be any opposing or differing schools of thought. In areas like psychology or philosophy, by contrast, they abound, and the schools often carry on long-running debates without any hard data or evidence to truly allow one to win out over another. Perhaps as the structure of a particular science becomes more sound, schools of thought become more difficult to establish?

For some of the hard sciences, it would seem that schools of thought only exist at the bleeding edge of the state-of-the-art where there isn’t yet enough evidence to swing the field one way or another to firmer ground.

Example: Evolutionary Biology

We might consider the area of evolutionary biology, in which definitive evidence in the fossil record is difficult to come by, so there’s room for the opposing positions of gradualism and punctuated equilibrium to persist as individual schools. Outside of this, most of evolutionary theory is so firmly grounded that there aren’t other schools.

Example: Theoretical Physics

The relatively new field of string theory might be considered a school of thought, though there don’t seem to be a lot of opposing schools at the moment. If it does constitute one, such a school surely exists, in part, because there isn’t yet the ability to validate it with predictions and current data. However, because of its strong supporting mathematical structure, I’ve yet to hear anyone use the concept of a school of thought to describe string theory, which sits in a community that seems to believe it’s a foregone conclusion that it, or something very close to it, represents reality. (Though for a counterpoint, see Lee Smolin’s The Trouble with Physics.)

Example: Mathematics

To my knowledge, I can’t recall the concept of a school of thought ever being applied to mathematics except in the case of the Pythagorean school, which historically is considered to have been almost as much a religion as a science. Because of its theoretical footings, I suppose there may never be competing schools: even in the case of problems like P vs. NP, individuals may have a gut reaction about which way things are leaning, but everyone ultimately knows it’s going to be one or the other (P = NP or P ≠ NP). Many mathematicians also know that it’s useful to try to prove a theorem during the day and then try to disprove it (or find a counterexample) by night, so even internally and individually they’re self-segregating against creating schools of thought right from the start.

Example: Religion

Looking at the furthest end of the other side of the spectrum, because there is no verifiable way to prove that God exists, there has been an efflorescence of religions of nearly every size and shape since the beginning of humankind. Might we then presume that this is the softest of the ‘sciences’?

What examples or counter examples can you think of?


Uri Alon: Why Truly Innovative Science Demands a Leap into the Unknown

I recently ran across this TED talk and felt compelled to share it. It really highlights some of my own personal thoughts on how science should be taught and done in the modern world.  It also overlaps much of the reading I’ve been doing lately on innovation and creativity. If these don’t get you to watch, then perhaps mentioning that Alon manages to apply comedy and improvisation techniques to science will.

Uri Alon was already one of my scientific heroes, but this adds a lovely garnish.

 

 

To Understand God’s Thought…

Florence Nightingale, OM, RRC (1820-1910), English social reformer and statistician, founder of modern nursing, renaissance woman
in Florence Nightingale’s Wisdom, New York Times, 3/4/14

 

Florence Nightingale developed the polar pie chart to depict mortality causes in the Crimean War.

 


Workshop on Information Theoretic Incentives for Artificial Life

For those interested in the topics of information theory in biology and artificial life, Christoph Salge, Georg Martius, Keyan Ghazi-Zahedi, and Daniel Polani have announced a Satellite Workshop on Information Theoretic Incentives for Artificial Life at the 14th International Conference on the Synthesis and Simulation of Living Systems (ALife 2014) to be held at the Javits Center, New York, on July 30 or 31st.


Their synopsis states:

Artificial Life aims to understand the basic and generic principles of life, and demonstrate this understanding by producing life-like systems based on those principles. In recent years, with the advent of the information age, and the widespread acceptance of information technology, our view of life has changed. Ideas such as “life is information processing” or “information holds the key to understanding life” have become more common. But what can information, or more formally Information Theory, offer to Artificial Life?

One relevant area is the motivation of behaviour for artificial agents, both virtual and real. Instead of learning to perform a specific task, informational measures can be used to define concepts such as boredom, empowerment or the ability to predict one’s own future. Intrinsic motivations derived from these concepts allow us to generate behaviour, ideally from an embodied and enactive perspective, which are based on basic but generic principles. The key questions here are: “What are the important intrinsic motivations a living agent has, and what behaviour can be produced by them?”

Related to an agent’s behaviour is also the question on how and where the necessary computation to realise this behaviour is performed. Can information be used to quantify the morphological computation of an embodied agent and to what degree are the computational limitations of an agent influencing its behaviour?

Another area of interest is the guidance of artificial evolution or adaptation. Assuming it is true that an agent wants to optimise its information processing, possibly obtain as much relevant information as possible for the cheapest computational cost, then what behaviour would naturally follow from that? Can the development of social interaction or collective phenomena be motivated by an informational gradient? Furthermore, evolution itself can be seen as a process in which an agent population obtains information from the environment, which begs the question of how this can be quantified, and how systems would adapt to maximise this information?

The common theme in those different scenarios is the identification and quantification of driving forces behind evolution, learning, behaviour and other crucial processes of life, in the hope that the implementation or optimisation of these measurements will allow us to construct life-like systems.

Details for submissions, acceptances, potential talks, and dates can be found via  Nihat Ay’s Research Group on Information Theory of Cognitive Systems. For more information on how to register, please visit the ALife 2014 homepage. If there are any questions, or if you just want to indicate interest in submitting or attending, please feel free to mail them at itialife@gmail.com.

According to their release, the open access journal Entropy will sponsor the workshop by an open call with a special issue on the topic of the workshop. More details will be announced to emails received via itialife@gmail.com and over the alife and connectionists mailing lists.


Why a Ph.D. in Physics is Worse Than Drugs

Jonathan I. Katz, Professor of Physics, Washington University, St. Louis, Mo.
in “Don’t Become a Scientist!”

 

In the essay, Dr. Katz provides a bevy of solid reasons why one shouldn’t become a researcher.  I highly recommend everyone read it and then carefully consider how we can turn these problems around.

Editor’s Note: The original article has since been moved to another server.

How might we end the war against science in America?
