Paper is one of the simplest and most essential pieces of human technology. For the past two millennia, the ability to produce it in ever more efficient ways has supported the proliferation of literacy, media, religion, education, commerce, and art; it has formed the foundation of civilizations, promoting revolutions and restoring stability. One has only to look at history’s greatest press run, which produced 6.5 billion copies of Máo zhuxí yulu, Quotations from Chairman Mao Tse-tung (Zedong)―which doesn’t include editions in 37 foreign languages and in braille―to appreciate the range and influence of a single publication, in paper. Or take the fact that one of history’s most revered artists, Leonardo da Vinci, left behind only 15 paintings but 4,000 works on paper. And though the colonies were at the time calling for a boycott of all British goods, the one exception they made speaks to the essentiality of the material; they penned the Declaration of Independence on British paper. Now, amid discussion of “going paperless”―and as speculation about the effects of a digitally dependent society grows rampant―we’ve come to a world-historic juncture. Thousands of years ago, Socrates and Plato warned that written language would be the end of “true knowledge,” replacing the need to exercise memory and think through complex questions. Similar arguments were made about the switch from handwritten to printed books, and today about the role of computer technology. By tracing paper’s evolution from antiquity to the present, with an emphasis on the contributions made in Asia and the Middle East, Mark Kurlansky challenges common assumptions about technology’s influence, affirming that paper is here to stay. Paper will be the commodity history that guides us forward in the twenty-first century and illuminates our times.
“Amerikan Krazy: Life Out of Balance” takes part of its name from the new book <a href="http://boffosockobooks.com/books/authors/henry-james-korn/amerikan-krazy/">“Amerikan Krazy”</a> by <a href="http://www.henryjameskorn.com">Henry James Korn</a>. From 2008 to 2013, Korn worked at the Orange County Great Park. He was responsible for the creation of the Palm Court arts complex and its culture, music, art and history programs.<br /><br /> “The book is very much about total corporate control of public and private space,” Korn said. The story follows a wounded Marine veteran haunted after having missed the chance to assassinate a presidential candidate who later causes massive human suffering and wreaks havoc on America’s wealth and democracy.<br /><br /> It’s a way of understanding what’s happening in politics now, Korn said.<br /><br /> “Because if ever there was a recognition that our public life and politics have gone crazy, it’s this moment.”
If you haven’t managed to make it down, this exhibition is running for another week at BC Space!
Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.
- Jeremy L. England Lab
- Statistical physics of self-replication, Jeremy L. England; J. Chem. Phys. 139, 121923 (2013); doi: 10.1063/1.4818538
- Statistical Physics of Adaptation, Nikolai Perunov, Robert Marsland, and Jeremy England, arXiv, December 8, 2014
- Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences, Gavin E. Crooks, arXiv, February 1, 2008
- Life as a manifestation of the second law of thermodynamics, E.D. Schneider, J.J. Kay, doi:10.1016/0895-7177(94)90188-0, Mathematical and Computer Modelling, Volume 19, Issues 6–8, March–April 1994, Pages 25-48
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience. A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work. The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
A BIRS / Casa Matemática Oaxaca Workshop arriving in Oaxaca, Mexico Sunday, July 31 and departing Friday August 5, 2016
Evolutionary biology is a rapidly changing field, confronted with many societal problems of increasing importance: the impact of global change, emerging epidemics, antibiotic-resistant bacteria… As a consequence, a number of new problems have appeared over the last decade, challenging the existing mathematical models. There is thus a demand in the biology community for new mathematical models allowing a qualitative or quantitative description of complex evolution problems. In particular, in the societal problems mentioned above, evolution often interacts with phenomena of a different nature: interaction with other organisms, spatial dynamics, age structure, invasion processes, time/space heterogeneous environments… The development of mathematical models able to deal with those complex interactions is an ambitious task. Evolutionary biology is interested in the evolution of species. This process is a combination of several phenomena, some occurring at the individual level (e.g. mutations), others at the level of the entire population (competition for resources), which often consists of a very large number of individuals. The presence of very different scales is indeed at the core of theoretical evolutionary biology, and at the origin of many of the difficulties that biologists are facing. The development of new mathematical models thus requires the joint work of three different communities of researchers: specialists in partial differential equations, specialists in probability theory, and theoretical biologists. The goal of this workshop is to gather researchers from each of these communities who are currently working on closely related problems. These communities usually have few interactions, and this meeting will give them the opportunity to discuss and work on a few biological themes that are especially challenging mathematically and that play a crucial role in biological applications.
The role of a spatial structure in models for evolution: The introduction of a spatial structure in evolutionary biology models is often challenging. It is however well known that local adaptation is frequent in nature: field data show that the phenotypes of a given species change considerably across its range. The spatial dynamics of a population can also have a deep impact on its evolution. Assessing e.g. the impact of global changes on species requires the development of robust mathematical models for spatially structured populations.
The first type of model used by theoretical biologists for this type of problem is the IBM (Individual-Based Model), which describes the evolution of a finite number of individuals, each characterized by a position and a phenotype. The mathematical analysis of IBMs in spatially homogeneous situations has provided several methods that have been successful in the theoretical biology community (see the theory of Adaptive Dynamics). By contrast, very few results exist so far on the qualitative properties of such models for spatially structured populations.
The second class of mathematical approaches for this type of problem is based on “infinite-dimensional” reaction-diffusion equations: the population is structured by a continuous phenotypic trait that affects its ability to disperse (diffusion) or to reproduce (reaction). This type of model can be obtained as a large-population limit of an IBM. The main difficulty with these models (in the simpler case of asexual populations) is the term modeling competition for resources, which appears as a nonlocal competition term. This term prevents the use of classical reaction-diffusion tools such as the comparison principle and sliding methods. Recently, promising progress has been made based on tools from elliptic equations and/or Hamilton-Jacobi equations. The effects of small populations cannot, however, be captured by such models. The extension of these models and methods to include such effects will be discussed during the workshop.
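As a purely illustrative sketch of this class of models, here is a minimal explicit finite-difference simulation in Python. The growth function a(x), the parameter values, and the discretization are my own assumptions, not taken from the workshop description; the point is only to show how the nonlocal competition term (the total population mass) enters the reaction term.

```python
import numpy as np

def evolve(n, x, dx, dt, steps, sigma=1e-3, c=1.0):
    """Explicit Euler scheme for  dn/dt = sigma * n_xx + n * (a(x) - c * N),
    where N is the total population mass: the NONLOCAL competition term
    that defeats the comparison principle mentioned in the text."""
    a = 1.0 - x**2                      # hypothetical trait-dependent growth rate
    for _ in range(steps):
        lap = (np.roll(n, 1) - 2*n + np.roll(n, -1)) / dx**2  # discrete Laplacian
        total = n.sum() * dx            # nonlocal term: competition with everyone
        n = n + dt * (sigma * lap + n * (a - c * total))
        n = np.maximum(n, 0.0)          # keep the density nonnegative
    return n

x = np.linspace(-1.0, 1.0, 201)
dx = x[1] - x[0]
n0 = np.exp(-50 * (x - 0.5)**2)         # population initially centered off-optimum
n = evolve(n0, x, dx, dt=1e-4, steps=50_000)
# the bulk of the density drifts toward the optimal trait x = 0 over time
print(x[np.argmax(n)])
```

Because the competition term depends on the integral of the solution rather than on its pointwise values, two solutions cannot be compared monotonically, which is exactly why the comparison principle fails for this equation.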
Eco-evolution models for sexual populations: An essential question, already posed by Darwin and Fisher, that remains unanswered (although it continues to intrigue evolutionary biologists) is: “Why is sexual reproduction maintained?” Indeed, this mode of reproduction is very costly, since it requires the production of a large number of gametes, mating, and the choice of a compatible partner. During meiosis, half of the genetic information is lost. Moreover, males have to be fed, and during mating individuals are easy prey for predators. A partial answer is that recombination plays a main role by better eliminating deleterious mutations and by increasing diversity. Nevertheless, this theory is not completely satisfying, and much research is devoted to understanding the evolution of sexual populations and to comparing asexual and sexual reproduction. Several models exist to describe the influence of sexual reproduction on evolving species. The difficulty, compared to asexual populations, is that a detailed description of the genetic basis of phenotypes is required, and in particular must include recombination. For sexual populations, recombination plays a main role, and it is essential to understand it. All models require strong biological simplifications; the development of relevant mathematical methods for such mechanisms thus requires the joint work of mathematicians and biologists. This workshop will be an opportunity to set up such collaborations.
The first type of model considers a small number of diploid loci (typically one locus and two alleles), while the rest of the genome is considered fixed. One can then define the fitness of every combination of alleles. While allowing the modeling of specific sexual effects (such as dominant/recessive alleles), this approach neglects the rest of the genome (and it is known that phenotypes are typically influenced by a large number of loci). An opposite approach is to consider a large number of loci, each having a small and additive impact on the considered phenotype. This approach neglects many microscopic phenomena (epistasis, dominant/recessive alleles…), but allows the derivation of a deterministic model, called the infinitesimal model, in the limit of a large population. The construction of a good mathematical framework for intermediate situations would be an important step forward.
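A tiny simulation can make the infinitesimal model concrete. The assumption it encodes is exactly the one described above: each offspring trait is the midparent average plus Gaussian segregation noise of fixed variance, regardless of how many loci underlie the trait. The population size and parameter values below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def infinitesimal_generation(traits, seg_var=0.5):
    """One generation of random mating under the infinitesimal model:
    offspring trait = midparent value + Gaussian segregation noise of
    FIXED variance, independent of the number of underlying loci."""
    n = len(traits)
    mothers = rng.choice(traits, size=n)
    fathers = rng.choice(traits, size=n)
    return (mothers + fathers) / 2 + rng.normal(0.0, np.sqrt(seg_var), size=n)

traits = rng.normal(0.0, 4.0, size=100_000)   # start far from equilibrium
for _ in range(30):
    traits = infinitesimal_generation(traits)
print(traits.var())
```

Without selection the trait variance V obeys V' = V/2 + V_seg per generation, so it converges geometrically to 2·V_seg (here 1.0): the model's deterministic, tractable behavior is what makes it attractive as a large-population limit.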
The evolution of recombination and sex is very sensitive to the interaction between several evolutionary forces (selection, migration, genetic drift…). Modeling these interactions is particularly challenging, and our understanding of the evolution of recombination is often limited by strong assumptions regarding demography, the relative strength of these different evolutionary forces, the lack of spatial structure… The development of a more general theoretical framework based on new mathematical developments would be particularly valuable.
Another problem, which has received little attention so far and is worth addressing, is the modeling of genetic material exchange in asexual populations. These phenomena are frequent in micro-organisms: horizontal gene transfer in bacteria, reassortment or recombination in viruses. They share some features with sexual reproduction. It would be interesting to see whether their effects can be treated as a perturbation of existing asexual models. This would be particularly interesting in spatially structured populations (e.g. viral epidemics), since the mathematical analysis of spatially structured asexual populations is improving rapidly.
Modeling in evolutionary epidemiology: Mathematical epidemiology has been developing for more than a century. Yet the integration of population-genetics phenomena into epidemiology is relatively recent. Microbial pathogens (bacteria and viruses) are particularly interesting organisms because their short generation times and large mutation rates allow them to adapt relatively fast to changing environments. As a consequence, ecological (demography) and evolutionary (population genetics) processes often occur at the same pace. This raises many interesting problems.
A first challenge is the modeling of the spatial dynamics of an epidemic. A parasite can evolve during an epidemic in a new host population, either to adapt to a heterogeneous environment, or because it will itself modify the environment as it invades. The applications of such studies are numerous: antibiotic management, agriculture… An aspect of this problem to which our workshop can bring a significant contribution (thanks to the diversity of its participants) is the evolution of pathogen diversity. During the large expansion produced by an epidemic, there is a loss of diversity in the invading parasites, since most pathogens originate from a few parents. The development of mathematical models for those phenomena is challenging: only a small number of pathogens are present ahead of the epidemic front, while the number of parasites rapidly becomes very large after infection. The interaction between a stochastic micro scale and a deterministic macro scale is apparent here, and deserves a rigorous mathematical analysis.
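The loss of diversity at an expansion front can be illustrated with a deliberately simple "serial founder" toy model (my construction, not taken from the workshop abstract): each newly colonized deme is seeded by a handful of founders from the previous front deme, so heterozygosity is eroded by repeated bottlenecks even though the population behind the front is large.

```python
import numpy as np

rng = np.random.default_rng(1)

def front_heterozygosity(p0=0.5, founders=10, capacity=1000, demes=50):
    """Heterozygosity 2p(1-p) of one neutral biallelic marker, tracked
    along a chain of founder events at the expansion front."""
    p, het = p0, []
    for _ in range(demes):
        p = rng.binomial(founders, p) / founders     # founder bottleneck
        p = rng.binomial(capacity, p) / capacity     # regrowth (mild extra drift)
        het.append(2 * p * (1 - p))
    return het

# average over replicate expansions; expected diversity drops by roughly a
# factor of (1 - 1/founders) per colonization step
mean_het = np.mean([front_heterozygosity() for _ in range(500)], axis=0)
print(round(float(mean_het[0]), 3), round(float(mean_het[-1]), 3))
```

This is exactly the stochastic-micro/deterministic-macro tension described above: the few founders at the front behave stochastically, while the large deme behind them is essentially deterministic.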
Another interesting phenomenon is the effect of a sudden change of environment on a population of pathogens. Examples of such situations are the antibiotic treatment of an infected patient, or the transmission of a parasite to a new host species (the transmission of avian influenza to human beings, for instance). Related experiments are relatively easy to perform, and are called evolutionary rescue experiments. So far, this question has received limited attention from the mathematical community. The key is to estimate the probability that a mutant well adapted to the new environment existed in the original population, or will appear soon after the environmental change. Interactions between biologists who specialize in those questions and mathematicians should lead to new mathematical problems.
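The probability described in the last sentences can be sketched with a standard back-of-the-envelope calculation from population genetics (the 2s establishment probability is a branching-process heuristic, and the parameter values below are hypothetical, not taken from the workshop abstract):

```python
import math

def rescue_probability(N0, u, s_benefit, decay_rate):
    """P(rescue) ~ 1 - exp(-E[number of establishing rescue mutants]).

    Roughly N0 / decay_rate wild-type individuals are ever alive during
    the population's decline; each produces a rescue mutant with
    probability u, and a mutant with growth advantage s establishes with
    probability ~2s (the classic branching-process heuristic)."""
    expected_rescuers = (N0 / decay_rate) * u * (2 * s_benefit)
    return 1.0 - math.exp(-expected_rescuers)

# hypothetical numbers: a bacterial population of 1e8 hit by an antibiotic
print(rescue_probability(N0=1e8, u=1e-9, s_benefit=0.05, decay_rate=0.3))
```

Even this crude estimate shows the structure of the question: rescue hinges on the product of population size, mutation supply, and establishment probability, each of which a more careful stochastic model would refine.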
Inspiration for artificial biologically inspired computing is often drawn from neural systems. This article shows how to analyze neural systems using information theory with the aim of obtaining constraints that help to identify the algorithms run by neural systems and the information they represent. Algorithms and representations identified this way may then guide the design of biologically inspired computing systems. The material covered includes the necessary introduction to information theory and to the estimation of information-theoretic quantities from neural recordings. We then show how to analyze the information encoded in a system about its environment, and also discuss recent methodological developments on the question of how much information each agent carries about the environment either uniquely or redundantly or synergistically together with others. Last, we introduce the framework of local information dynamics, where information processing is partitioned into component processes of information storage, transfer, and modification – locally in space and time. We close by discussing example applications of these measures to neural data and other complex systems.
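As a small, concrete illustration of the kind of quantity discussed above, here is a plug-in estimate of the mutual information between a discrete stimulus and simulated Poisson spike counts. The data are synthetic, and the naive plug-in estimator is biased upward for small samples; the bias corrections of the sort the article covers are omitted here.

```python
import numpy as np

def mutual_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples."""
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    joint = np.zeros((len(s_vals), len(r_vals)))
    for i, j in zip(s_idx, r_idx):
        joint[i, j] += 1.0
    joint /= joint.sum()                     # empirical joint distribution
    ps = joint.sum(axis=1, keepdims=True)    # marginal over stimuli
    pr = joint.sum(axis=0, keepdims=True)    # marginal over responses
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

rng = np.random.default_rng(0)
stim = rng.integers(0, 2, size=20_000)                 # two stimulus classes
resp = rng.poisson(np.where(stim == 0, 2.0, 6.0))      # stimulus-dependent counts
print(mutual_information(stim, resp))                  # positive: R carries info about S
```

With the two firing-rate distributions well separated, the estimate approaches the one bit of stimulus entropy; with a response independent of the stimulus it falls to near zero.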
This is the draft version of a textbook which aims to introduce the quantum-information-science viewpoint on condensed matter physics to graduate students in physics (or interested researchers). We keep the writing self-contained, requiring minimal background in quantum information science. Basic knowledge of undergraduate quantum physics and condensed matter physics is assumed. We start slowly from the basic ideas in quantum information theory, but aim to eventually bring readers to the frontiers of research in condensed matter physics, including topological phases of matter, tensor networks, and symmetry-protected topological phases.
Running a brain-twisting thought experiment for real shows that information is a physical thing – so can we now harness the most elusive entity in the cosmos?
This is a nice little overview article of some of the history of thermodynamics relating to information in physics and includes some recent physics advances as well. There are a few references to applications in biology at the micro level as well.
- Second Law of Thermodynamics with Discrete Quantum Feedback Control by Takahiro Sagawa and Masahito Ueda; Phys. Rev. Lett. 100, 080403 – Published 26 February 2008
- Work and information processing in a solvable model of Maxwell’s demon by Dibyendu Mandal and Christopher Jarzynski; PNAS vol. 109 no. 29, July 17, 2012
- Thermodynamic Costs of Information Processing in Sensory Adaptation by Pablo Sartori, Léo Granger, Chiu Fan Lee, and Jordan M. Horowitz; PLOS December 11, 2014 http://dx.doi.org.sci-hub.cc/10.1371/journal.pcbi.1003974
- Intermittent transcription dynamics for the rapid production of long transcripts of high fidelity by Depken M, Parrondo JM, Grill SW; Cell Rep. 2013 Oct 31;5(2):521-30. doi: 10.1016/j.celrep.2013.09.007
- The stepping motor protein as a feedback control ratchet by Martin Bier; BioSystems 88 (2007) 301–307
In his 2010 book, Life Ascending: The Ten Great Inventions of Evolution, Nick Lane, a biochemist at University College London, explores with eloquence and clarity the big questions of life: how it began, why we age and die, and why we have sex. Lane has been steadily constructing an alternative view of evolution to the one in which genes explain it all. He argues that some of the major events in evolutionary history, including the origin of life itself, are best understood by considering where the energy comes from and how it is used. Lane describes these ideas in his 2015 book, The Vital Question: Why Is Life the Way It Is?. Recently Bill Gates called it “an amazing inquiry into the origins of life,” adding that Lane “is one of those original thinkers who make you say: More people should know about this guy’s work.” Nautilus caught up with Lane in his laboratory in London and asked him about his ideas on aging, sex, and death.
Biochemist Nick Lane explains the elements of life, sex, and aging in an engaging popular science interview.
- The Vital Question: Energy, Evolution, and the Origins of Complex Life
- Life Ascending: The Ten Great Inventions of Evolution
- Power, Sex, Suicide: Mitochondria and the Meaning of Life
- Oxygen: The Molecule That Made the World
…I rejoice that a major new database was launched today. It’s not in my area, so I won’t be using it, but I am nevertheless very excited that it exists. It is called the L-functions and modular forms database. The thinking behind the site is that lots of number theorists have privately done lots of difficult calculations concerning L-functions, modular forms, and related objects. Presumably up to now there has been a great deal of duplication, because by no means all these calculations make it into papers, and even if they do it may be hard to find the right paper. But now there is a big database of these objects, with a large amount of information about each one, as well as a great big graph of connections between them. I will be very curious to know whether it speeds up research in number theory: I hope it will become a completely standard tool in the area and inspire people in other areas to create databases of their own.
–Tim Gowers
Tom M. Apostol, professor of mathematics, emeritus, at the California Institute of Technology, passed away on May 8, 2016. He was 92.
My proverbial mathematical great-grandfather passed away yesterday.
As many know, for over a decade I’ve been studying a variety of areas of advanced abstract mathematics with Michael Miller. Mike Miller received his Ph.D. in 1974 (UCLA) under the supervision of Basil Gordon, who in turn received his Ph.D. in 1956 (Caltech) under the supervision of Tom M. Apostol.
Incidentally, going directly back three more generations is Markov, before that Chebyshev, and two generations before that, Lobachevsky.
Sadly, I never got to have Tom as a teacher directly myself, though I did get to meet him several times in (what mathematicians might call) social situations. I did have the advantage of delving into his two volumes of Calculus as well as referring to his book on Analytic Number Theory. If it’s been a while since you’ve looked at calculus, I highly recommend an evening or two by the fire with a glass of wine while you revel in Calculus, Vol 1 or Calculus, Vol 2.
It’s useful to take a moment to remember our intellectual antecedents, so in honor of Tom’s passing, I recommend the bookmarked very short obituary (I’m sure more will follow), this obituary of Basil, and this issue of the Notices of the AMS celebrating Basil as well. I also came across a copy of Fascinating Mathematical People, which has a great section on Tom and incidentally includes some rare younger photos of Sol Golomb, who suddenly passed away last Sunday. (It’s obviously been a tough week for me and for math in Southern California.)
We don’t yet know quite what a physics of biology will consist of. But we won’t understand life without it.
This is an awesome little article with some interesting thoughts and philosophy on the current state of physics within biology and other related areas of study. It’s also got some snippets of history which aren’t frequently discussed in longer-form texts.
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.
Sci Hub has been in the news quite a bit over the past half a year and the bookmarked article here gives some interesting statistics. I’ll preface some of the following editorial critique with the fact that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.
From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci Hub. Neither did it link out (or provide a full quote) to Alicia Wise’s Twitter post(s) nor link to her rebuttal list of 20 ways to access their content freely or inexpensively. Of course both of these are editorial related, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.
Sadly, Elsevier’s list of 20 ways of free/inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups of people using Sci Hub, unless they fraudulently claim to be part of a class they’re not, and is that morally any better than the original theft? It’s almost assuredly never used by patients, who seem to be covered under one of the options, as the option to do so is painfully undiscoverable behind their typical $30/paper paywalls. Their patchwork hodgepodge of free access is difficult enough to discern on its own, and one must keep in mind that this is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).
Consider this experiment, which could be a good follow-up to the article: is it easier to find and download a paper by title/author/DOI via Sci Hub (a minute) versus through any of the other publishers’ platforms with a university subscription (several minutes) or without a subscription (an hour or more, up to days)? Just consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci Hub versus the days and/or weeks it would take to jump through the multiple hoops to first discover, read about, and then gain access to and download them from over 14 providers (and this presumes the others provide some type of “access” like Elsevier).
Those who lived through the Napster revolution in music will realize that the dead simplicity of that system is primarily what undermined the music business, compared to the ecosystem that exists now with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to get completely killed, they’re going to need to create the iTunes of academia. I suspect they’ll have internal bean-counters watching the percentage of the total (now apparently 5%) and will probably only do something once it passes a much larger threshold, though I imagine that they’re really hoping the number stays stable, which would signal that they needn’t be concerned. They’re far more likely to continue to maintain their status quo practices.
Some of this ease-of-access argument is truly borne out by the statistics of open access papers which are downloaded by Sci Hub–it’s simply easier to both find and download them that way compared to traditional methods; there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?
“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone
Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci Hub. God forbid some enterprising hacker creates a LibX community version for Sci Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX which make their content easy to access? If we consider the analogy of academic publishing to the introduction of machine guns in World War I, why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?
My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown in the article:
— Alicia Wise (@wisealic) March 14, 2016
— Alicia Wise (@wisealic) March 14, 2016
She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers to which one would have to subscribe to have access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor its competitors make their material as easy to find and access as the New York Times does. Nor do they discount access in an attempt to find the subscription price their users consider financially acceptable. Case in point: while I often read the New York Times, I rarely go over their monthly limit of articles enough to need any type of paid subscription. Solely because they made me an interesting offer to subscribe for 8 weeks for 99 cents, I took them up on it and renewed that deal for another subsequent 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no fewer than 5 different offers at ever-decreasing price points–including the 99 cents for 8 weeks which I had been getting!!–to try to keep my subscription. Neither Elsevier nor any of its competitors has ever tried (much less so hard) to earn my business. (I’ll further posit that this is because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to the textbook business pressuring professors on textbook adoption rather than selling directly to the end consumer–the student–which I’ve written about before.)
(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writings, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and inability to repeat experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?
Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data from the survey will be used. There’s always the possibility that logged-in users who indicate they’re circumventing copyright are opening themselves up to litigation.
I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting to do such meta-analysis to guide the purchase of potential intellectual property for patent trolling as well.
Finally, for those who haven’t done it (ever or recently), I’ll recommend that it’s certainly well worth their time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of their academic terms to train library users on research tools and methods. You’ll save yourself a huge amount of time.
“The notion that counting more shapes in the sky will reveal more details of the Big Bang is implied in a central principle of quantum physics known as “unitarity.” Unitarity dictates that the probabilities of all possible quantum states of the universe must add up to one, now and forever; thus, information, which is stored in quantum states, can never be lost — only scrambled. This means that all information about the birth of the cosmos remains encoded in its present state, and the more precisely cosmologists know the latter, the more they can learn about the former.”
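The “probabilities add up to one” statement in the quote can be written compactly in standard quantum-mechanics notation (my addition, not from the quoted article): unitary evolution preserves the norm of the state, hence total probability.

```latex
% A state |\psi\rangle = \sum_i c_i |i\rangle carries total probability
\sum_i |c_i|^2 = \langle \psi | \psi \rangle = 1 .
% Time evolution by a unitary U, with U^\dagger U = I, preserves this:
\langle U\psi \,|\, U\psi \rangle
  = \langle \psi |\, U^\dagger U \,| \psi \rangle
  = \langle \psi | \psi \rangle = 1 ,
% so probability (information) is never lost, only redistributed ("scrambled").
```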