How Can We Apply Physics to Biology?

How Can We Apply Physics to Biology? by Philip Ball (nautil.us)
We don’t yet know quite what a physics of biology will consist of. But we won’t understand life without it.

This is an awesome little article with some interesting thought and philosophy on the current state of physics within biology and other related areas of study. It’s also got some snippets of history which aren’t frequently discussed in longer form texts.


Some Thoughts on Academic Publishing and “Who’s downloading pirated papers? Everyone” from Science | AAAS

Who's downloading pirated papers? Everyone by John Bohannon (Science | AAAS)
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.

Sci-Hub has been in the news quite a bit over the past half year, and the bookmarked article here gives some interesting statistics. I’ll preface the following editorial critique by saying that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.

From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci-Hub. Nor did it link out to (or fully quote) Alicia Wise’s Twitter posts, or link to her rebuttal list of 20 ways to access Elsevier’s content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.

Sadly, Elsevier’s list of 20 ways of free/inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups to be using Sci-Hub, unless they fraudulently claim membership in a class to which they don’t belong; and is that morally any better than the original theft? The options are almost assuredly never used by patients, who seem to be covered under one of them, because the way to do so is painfully undiscoverable behind the typical $30/paper paywalls. Their patchwork hodgepodge of free access is difficult enough to discern on its own, and one must keep in mind that Elsevier is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).

Consider this experiment, which could make a good follow-up to the article: how long does it take to find and download a paper by title/author/DOI via Sci-Hub (about a minute) versus through a publisher’s platform with a university subscription (several minutes) or without one (an hour or more, up to days)? Then consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci-Hub, versus the days or weeks of jumping through multiple hoops to first discover each access route, read about it, gain access, and then download from each of the more than 14 providers (and this presumes the others provide some type of “access” like Elsevier does).

Those who lived through the Napster revolution in music will recall that the dead simplicity of that system is primarily what upended the music business, compared to the ecosystem that exists now with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to be killed off completely, it will need to create the iTunes of academia. I suspect internal bean-counters are watching Sci-Hub’s percentage of total downloads (now apparently 5%) and will probably only act once it passes a much larger threshold; meanwhile, I imagine they’re really hoping the number stays stable, signaling that they needn’t be concerned. They’re far more likely to continue their status quo practices.

Some of this ease-of-access argument is truly borne out by the statistics on open access papers downloaded via Sci-Hub: it’s simply easier to both find and download them that way compared to traditional methods, since there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?

“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone

Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I suspect isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci-Hub. God forbid some enterprising hacker creates a LibX community edition for Sci-Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX that make their content easy to access? To borrow an analogy from the introduction of machine guns in World War I: why should modern researchers still be using single-load rifles against an enemy with access to nuclear weaponry?

My last thought here comes on the heels of the two tweets from Alicia Wise that are mentioned, but not shown, in the article:

She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers one would have to subscribe to for access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor its competitors make their material as easy to find and access as the New York Times does. Nor do they discount access in an attempt to find a subscription price their users consider financially acceptable.

Case in point: while I often read the New York Times, I rarely exceed their monthly limit of free articles, so I’ve never needed a paid subscription. Solely because they made me an interesting offer of 8 weeks for 99 cents, I took them up on it and renewed that deal for another 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no fewer than five different offers at ever-decreasing price points, including the 99 cents for 8 weeks I had been getting, to try to keep my subscription. Neither Elsevier nor any of its competitors has ever tried (much less tried so hard) to earn my business. (I’ll further posit that this is because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to textbook publishers pressuring professors on textbook adoption rather than selling directly to the end consumer, the student, which I’ve written about before.)

(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writings, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and irreproducible experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?

Interestingly, there’s a survey at the end of the article which asks about additional sources of academic copyright infringement. I do have to wonder how the survey data will be used. There’s always the possibility that logged-in users who indicate they’ve circumvented copyright are opening themselves up to litigation.

I also found the idea of using the massive data store as a corpus for applied linguistics research on scientific writing an entertaining proposition. This type of research could mean great things for science communication in general. I’ve also heard of people attempting this kind of meta-analysis to guide the purchase of potential intellectual property for patent trolling.

Finally, for those who haven’t done it (ever or recently), it’s certainly worth your time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of each term to train users on research tools and methods. You’ll save yourself a huge amount of time.


Physicists Hunt For The Big Bang’s Triangles | Quanta Magazine

Physicists Hunt for the Big Bang’s Triangles (Quanta Magazine)

“The notion that counting more shapes in the sky will reveal more details of the Big Bang is implied in a central principle of quantum physics known as “unitarity.” Unitarity dictates that the probabilities of all possible quantum states of the universe must add up to one, now and forever; thus, information, which is stored in quantum states, can never be lost — only scrambled. This means that all information about the birth of the cosmos remains encoded in its present state, and the more precisely cosmologists know the latter, the more they can learn about the former.”
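The role unitarity plays in the quote above can be sketched in a few lines (my own toy illustration, not from the article): applying a unitary matrix to a quantum state scrambles its amplitudes, but the probabilities always still sum to one, so nothing is lost.

```python
import math

# A 2x2 unitary matrix (a Hadamard rotation); its conjugate transpose
# is its inverse, which is what makes evolution reversible.
s = 1 / math.sqrt(2)
U = [[s, s], [s, -s]]

def evolve(mat, state):
    """Apply a matrix to a column vector of complex amplitudes."""
    return [sum(mat[i][j] * state[j] for j in range(len(state)))
            for i in range(len(mat))]

def total_probability(state):
    """Born rule: probabilities are |amplitude|^2 and must sum to 1."""
    return sum(abs(a) ** 2 for a in state)

psi = [0.6, 0.8j]                # a normalized two-level state
after = evolve(U, psi)           # amplitudes are scrambled...
print(total_probability(psi))    # ...but total probability is conserved:
print(total_probability(after))  # both print (approximately) 1.0
```

Information is “scrambled, never lost” in exactly this sense: the scrambling is invertible, so the present state still determines the past one.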


How can we be sure old books were ever read? – University of Glasgow Library

How can we be sure old books were ever read? by Robert MacLean (University of Glasgow Library)
Owning a book isn’t the same as reading it; we need only look at our own bloated bookshelves for confirmation.

This is a great little overview for people reading the books of others. There are also lots of great links to other resources.


A new view of the tree of life

A new view of the tree of life (Nature Microbiology)
An update to the ‘tree of life’ has revealed a dominance of bacterial diversity in many ecosystems and extensive evolution in some branches of the tree. It also highlights how few organisms we have been able to cultivate for further investigation.

Abstract

The tree of life is one of the most important organizing principles in biology. Gene surveys suggest the existence of an enormous number of branches, but even an approximation of the full scale of the tree has remained elusive. Recent depictions of the tree of life have focused either on the nature of deep evolutionary relationships or on the known, well-classified diversity of life with an emphasis on eukaryotes. These approaches overlook the dramatic change in our understanding of life’s diversity resulting from genomic sampling of previously unexamined environments. New methods to generate genome sequences illuminate the identity of organisms and their metabolic capacities, placing them in community and ecosystem contexts. Here, we use new genomic data from over 1,000 uncultivated and little known organisms, together with published sequences, to infer a dramatically expanded version of the tree of life, with Bacteria, Archaea and Eukarya included. The depiction is both a global overview and a snapshot of the diversity within each major lineage. The results reveal the dominance of bacterial diversification and underline the importance of organisms lacking isolated representatives, with substantial evolution concentrated in a major radiation of such organisms. This tree highlights major lineages currently underrepresented in biogeochemical models and identifies radiations that are probably important for future evolutionary analyses.

Laura A. Hug, Brett J. Baker, Karthik Anantharaman, Christopher T. Brown, Alexander J. Probst, Cindy J. Castelle, Cristina N. Butterfield, Alex W. Hernsdorf, Yuki Amano, Kotaro Ise, Yohey Suzuki, Natasha Dudek, David A. Relman, Kari M. Finstad, Ronald Amundson, Brian C. Thomas & Jillian F. Banfield in Nature Microbiology, Article number: 16048 (2016) doi:10.1038/nmicrobiol.2016.48


A reformatted view of the tree in Fig. 1 in which each major lineage represents the same amount of evolutionary distance.

Carl Zimmer also has a nice little write up of the paper in today’s New York Times:

Carl Zimmer
in Scientists Unveil New ‘Tree of Life’ from The New York Times 4/11/16



“ALOHA to the Web”: Dr. Norm Abramson to give 2016 Viterbi Lecture at USC

USC - Viterbi School of Engineering - Dr. Norm Abramson (viterbi.usc.edu)

“ALOHA to the Web”

Dr. Norman Abramson, Professor Emeritus, University of Hawaii

Lecture Information

Thursday, April 14, 2016
Hughes Electrical Engineering Center (EEB)
Reception 3:00pm (EEB Courtyard)
Lecture 4:00pm (EEB 132)

Abstract

Wireless access to the Internet today is provided predominantly by random access ALOHA channels connecting a wide variety of user devices. ALOHA channels were first analyzed, implemented and demonstrated in the ALOHA network at the University of Hawaii in June, 1971. Information Theory has provided a constant guide for the design of more efficient channels and network architectures for ALOHA access to the web.

In this talk we examine the architecture of networks using ALOHA channels and the statistics of traffic within these channels. That traffic is composed of user and app oriented information augmented by protocol information inserted for the benefit of network operation. A simple application of basic Information Theory can provide a surprising guide to the amount of protocol information required for typical web applications.

We contrast this theoretical guide of the amount of protocol information required with measurements of protocol generated information taken on real network traffic. Wireless access to the web is not as efficient as you might guess.
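As an aside (not part of the lecture abstract), the classic throughput results Abramson derived for ALOHA channels give a feel for why efficiency is the central question. With Poisson offered load G (transmission attempts per frame time), a pure ALOHA frame succeeds only if no other frame starts within a two-frame vulnerable window, giving throughput S = G·e^(−2G); slotting the channel halves the window, giving S = G·e^(−G).

```python
import math

def pure_aloha_throughput(g):
    """S = G * e^(-2G): success requires no overlap in a 2-frame window."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    """S = G * e^(-G): slot boundaries halve the vulnerable window."""
    return g * math.exp(-g)

# Peak channel utilization: pure ALOHA tops out at 1/(2e) ~ 18.4% (at G = 0.5),
# slotted ALOHA at 1/e ~ 36.8% (at G = 1). Most capacity goes to collisions
# and idle time, one reason wireless access is less efficient than you
# might guess.
print(round(pure_aloha_throughput(0.5), 4))    # 0.1839
print(round(slotted_aloha_throughput(1.0), 4))  # 0.3679
```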

Biography

Norman Abramson received an A.B. in physics from Harvard College in 1953, an M.A. in physics from UCLA in 1955, and a Ph.D. in Electrical Engineering from Stanford in 1958.

He was an assistant professor and associate professor of electrical engineering at Stanford from 1958 to 1965. From 1967 to 1995 he was Professor of Electrical Engineering, Professor of Information and Computer Science, Chairman of the Department of Information and Computer Science, and Director of the ALOHA System at the University of Hawaii in Honolulu. He is now Professor Emeritus of Electrical Engineering at the University of Hawaii. He has held visiting appointments at Berkeley (1965), Harvard (1966) and MIT (1980).

Abramson is the recipient of several major awards for his work on random access channels and the ALOHA Network, the first wireless data network. The ALOHA Network went into operation in Hawaii in June, 1971. Among these awards are the Eduard Rhein Foundation Technology Award (Munich, 2000), the IEEE Alexander Graham Bell Medal (Philadelphia, 2007) and the NEC C&C Foundation Award (Tokyo, 2011).


2016 North-American School of Information Theory, June 21-23

2016 North-American School of Information Theory, June 21-23, 2016 (itsoc.org)

The 2016 School of Information Theory will be hosted at Duke University, June 21-23. It is sponsored by the IEEE Information Theory Society, Duke University, the Center for Science of Information, and the National Science Foundation. The school provides a venue where doctoral and postdoctoral students can learn from distinguished professors in information theory, meet with fellow researchers, and form collaborations.

Program and Lectures

The daily schedule will consist of morning and afternoon lectures separated by a lunch break with poster sessions. Students from all research areas are welcome to attend and present their own research via a poster during the school.  The school will host lectures on core areas of information theory and interdisciplinary topics. The following lecturers are confirmed:

  • Helmut Bölcskei (ETH Zurich): The Mathematics of Deep Learning
  • Natasha Devroye (University of Illinois, Chicago): The Interference Channel
  • René Vidal (Johns Hopkins University): Global Optimality in Deep Learning and Beyond
  • Tsachy Weissman (Stanford University): Information Processing under Logarithmic Loss
  • Aylin Yener (Pennsylvania State University): Information-Theoretic Security

Logistics

Applications will be available on March 15 and will be evaluated starting April 1.  Accepted students must register by May 15, 2016.  The registration fee of $200 will include food and 3 nights accommodation in a single-occupancy room.  We suggest that attendees fly into the Raleigh-Durham (RDU) airport located about 30 minutes from the Duke campus. Housing will be available for check-in on the afternoon of June 20th.  The main part of the program will conclude after lunch on June 23rd so that attendees can fly home that evening.

To Apply: click “register” here (the fee will be collected after acceptance)

Administrative Contact: Kathy Peterson, itschool2016@gmail.com

Organizing Committee

Henry Pfister (chair) (Duke University), Dror Baron (North Carolina State University), Matthieu Bloch (Georgia Tech), Rob Calderbank (Duke University), Galen Reeves (Duke University). Advisors: Gerhard Kramer (Technical University of Munich) and Andrea Goldsmith (Stanford)



Can a Field in Which Physicists Think Like Economists Help Us Achieve Universal Knowledge?

Can a Field in Which Physicists Think Like Economists Help Us Achieve Universal Knowledge? by David Auerbach (Slate Magazine)
The Theory of Everything and Then Some: In complexity theory, physicists try to understand economics while sociologists think like biologists. Can they bring us any closer to universal knowledge?

A discussion of complexity and complexity theorist John H. Miller’s new book: A Crude Look at the Whole: The Science of Complex Systems in Business, Life, and Society.


The Hidden Algorithms Underlying Life | Quanta Magazine

Searching for the Algorithms Underlying Life by John Pavlus (Quanta Magazine)
The biological world is computational at its core, argues computer scientist Leslie Valiant.

I did expect something more entertaining from Google when I searched for “what will happen if I squeeze a paper cup full of hot coffee?”


What is Information? by Christoph Adami

What is Information? [1601.06176] (arxiv.org)
Information is a precise concept that can be defined mathematically, but its relationship to what we call "knowledge" is not always made clear. Furthermore, the concepts "entropy" and "information", while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.

A proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
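As a toy illustration of these ideas (the standard Shannon definitions, not code from the paper): entropy measures uncertainty, and information, in the predictive sense Adami argues for, is the reduction in uncertainty that an observation buys you.

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def information_gained(prior, posterior):
    """Information as prediction: entropy removed by an observation."""
    return entropy(prior) - entropy(posterior)

# A fair coin is maximally unpredictable: exactly 1 bit of entropy.
print(entropy([0.5, 0.5]))                                    # 1.0
# Learning the coin is heavily biased makes prediction easier;
# that knowledge is worth about half a bit.
print(round(information_gained([0.5, 0.5], [0.9, 0.1]), 3))   # 0.531
```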

Comments: 19 pages, 2 figures. To appear in Philosophical Transaction of the Royal Society A
Subjects: Adaptation and Self-Organizing Systems (nlin.AO); Information Theory (cs.IT); Biological Physics (physics.bio-ph); Quantitative Methods (q-bio.QM)
Cite as: arXiv:1601.06176 [nlin.AO] (or arXiv:1601.06176v1 [nlin.AO] for this version)

From: Christoph Adami
[v1] Fri, 22 Jan 2016 21:35:44 GMT (151kb,D) [.pdf]

Source: Christoph Adami [1601.06176] What is Information? on arXiv


Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88 | The New York Times

Professor Minsky laid the foundation for the field by demonstrating the possibilities of imparting common-sense reasoning to computers.

Source: Marvin Minsky, Pioneer in Artificial Intelligence, Dies at 88 – The New York Times

Quantum Biological Information Theory by Ivan B. Djordjevic | Springer

Quantum Biological Information Theory (Springer, 2015)

Springer recently announced the publication of the book Quantum Biological Information Theory by Ivan B. Djordjevic, in which I’m sure many readers here will have interest. I hope to have a review of it shortly after I’ve gotten a copy. Until then…

From the publisher’s website:

This book is a self-contained, tutorial-based introduction to quantum information theory and quantum biology. It serves as a single-source reference to the topic for researchers in bioengineering, communications engineering, electrical engineering, applied mathematics, biology, computer science, and physics. The book provides all the essential principles of the quantum biological information theory required to describe the quantum information transfer from DNA to proteins, the sources of genetic noise and genetic errors as well as their effects.

  • Integrates quantum information and quantum biology concepts;
  • Assumes only knowledge of basic concepts of vector algebra at undergraduate level;
  • Provides a thorough introduction to basic concepts of quantum information processing, quantum information theory, and quantum biology;
  • Includes in-depth discussion of the quantum biological channel modelling, quantum biological channel capacity calculation, quantum models of aging, quantum models of evolution, quantum models on tumor and cancer development, quantum modeling of bird navigation compass, quantum aspects of photosynthesis, quantum biological error correction.

Source: Quantum Biological Information Theory | Ivan B. Djordjevic | Springer

I’ll note that it looks like it also assumes some reasonable facility with quantum mechanics in addition to the material listed above.

Springer also has a downloadable copy of the preface and a relatively extensive table of contents for those looking for a preview. Dr. Djordjevic has been added to the ever growing list of researchers doing work at the intersection of information theory and biology.


The Information Theory of Life | Quanta Magazine

The Information Theory of Life (Quanta Magazine)
The Information Theory of Life: The polymath Christoph Adami is investigating life’s origins by reimagining living things as self-perpetuating information strings.


Why math? JHU mathematician on teaching, theory, and the value of math in a modern world | Hub

Why math? JHU mathematician on teaching, theory, and the value of math in a modern world (The Hub)

Great to see this interview with my friend and mathematician Richard Brown from Johns Hopkins University.  Psst: He’s got an interesting little blog, or you can follow some of his work on Facebook and Twitter.

Click through for the full interview: Q+A with Richard Brown, director of undergraduate studies in Johns Hopkins University’s Department of Mathematics



3 Rules of Academic Blogging

3 Rules of Academic Blogging (The Chronicle of Higher Education)
Not only is the form alive and well, but one of its most vibrant subsections is in academe.

1. Pick the right platform.
2. Write whatever you want.
3. Write for the sake of writing.
