As physicists extend the 19th-century laws of thermodynamics to the quantum realm, they’re rewriting the relationships among energy, entropy and information.


The ALife conferences have been the major meeting of the artificial life research community since 1987. For its 15th edition in 2016, the conference was held in Latin America for the first time, in the Mayan Riviera, Mexico, from July 4-8. The special theme of the conference: How can the synthetic study of living systems contribute to societies: scientifically, technically, and culturally? The goal of the conference theme is to better understand societies with the purpose of using this understanding for a more efficient management and development of social systems.


Recent advances suggest that the concept of information might hold the key to unravelling the mystery of life's nature and origin. Fresh insights from a broad and authoritative range of articulate and respected experts focus on the transition from matter to life, and hence reconcile the deep conceptual schism between the way we describe physical and biological systems. A unique cross-disciplinary perspective, drawing on expertise from philosophy, biology, chemistry, physics, and cognitive and social sciences, provides a new way to look at the deepest questions of our existence. This book addresses the role of information in life, and how it can make a difference to what we know about the world. Students, researchers, and all those interested in what life is and how it began will gain insights into the nature of life and its origins that touch on nearly every domain of science. Hardcover: 514 pages; ISBN-10: 1107150531; ISBN-13: 978-1107150539.


This book considers a relatively new metric in complex systems, transfer entropy, derived from a series of measurements, usually a time series. After a qualitative introduction and a chapter that explains the key ideas from statistics required to understand the text, the authors then present information theory and transfer entropy in depth. A key feature of the approach is the authors' work to show the relationship between information flow and complexity. The later chapters demonstrate information transfer in canonical systems, and applications, for example in neuroscience and in finance. The book will be of value to advanced undergraduate and graduate students and researchers in the areas of computer science, neuroscience, physics, and engineering. ISBN: 978-3-319-43221-2 (Print), 978-3-319-43222-9 (Online)

Want to read; h/t to Joseph Lizier.
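Transfer entropy is straightforward to estimate for small discrete alphabets. As a rough illustration (my own sketch, not code from the book), here is a plug-in estimator with history length 1, applied to a pair of binary series in which X noisily copies Y with a one-step delay:

```python
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy T(Y -> X) in bits,
    for discrete series, using history length 1."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))   # (x_next, x_prev, y_prev)
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (xn, xp, yp), c in triples.items():
        p_joint = c / n                              # p(x_next, x_prev, y_prev)
        p_cond_xy = c / pairs_xy[(xp, yp)]           # p(x_next | x_prev, y_prev)
        p_cond_x = pairs_xx[(xn, xp)] / singles[xp]  # p(x_next | x_prev)
        te += p_joint * log2(p_cond_xy / p_cond_x)
    return te

random.seed(0)
y = [random.randint(0, 1) for _ in range(10000)]
# x copies y with one step of delay, flipping the bit 10% of the time
x = [0] + [yi if random.random() < 0.9 else 1 - yi for yi in y[:-1]]
print(transfer_entropy(x, y), transfer_entropy(y, x))
```

The estimate in the driven direction comes out near 1 - H(0.1), about 0.53 bits, while the reverse direction is close to zero.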


I'm giving a talk at the Stanford Complexity Group this Thursday afternoon, April 20th. If you're around, say in Silicon Valley, please drop by! It will be in Clark S361 at 4 pm.

Here's the idea. Everyone likes to say that biology is all about information. There's something true about this: just think about DNA. But what does this insight actually do for us? To figure it out, we need to do some work.

Biology is also about things that can make copies of themselves. So it makes sense to figure out how information theory is connected to the 'replicator equation', a simple model of population dynamics for self-replicating entities. To see the connection, we need to use relative information: the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Then everything pops into sharp focus.

It turns out that free energy, energy in forms that can actually be used rather than just waste heat, is a special case of relative information. Since the decrease of free energy is what drives chemical reactions, biochemistry is founded on relative information. But there's a lot more to it than this! Using relative information we can also see evolution as a learning process, fix the problems with Fisher's fundamental theorem of natural selection, and more.

So that's what I'll talk about! You can see slides of an old version here: http://math.ucr.edu/home/baez/bio_asu/ but my Stanford talk will be videotaped and it'll eventually be here: https://www.youtube.com/user/StanfordComplexity You can already see lots of cool talks at this location!

#biology

Wondering if there’s a way I can manufacture a reason to head to Northern California this week…
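To get a concrete feel for the Lyapunov role of relative information, here is a toy numerical sketch (mine, not from the talk): under the replicator equation with constant fitnesses, the Kullback–Leibler divergence D(q||p) from a distribution q concentrated on the fittest type decreases as the population p evolves.

```python
from math import log

def kl(q, p):
    """Relative information (Kullback-Leibler divergence) D(q || p), in nats."""
    return sum(qi * log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

def replicator_step(p, f, dt=0.01):
    """One Euler step of the replicator equation dp_i/dt = p_i (f_i - <f>)."""
    mean_f = sum(pi * fi for pi, fi in zip(p, f))
    return [pi + dt * pi * (fi - mean_f) for pi, fi in zip(p, f)]

f = [1.0, 2.0, 3.0]          # constant fitnesses; type 3 is fittest
p = [0.5, 0.3, 0.2]          # initial population distribution
q = [0.0, 0.0, 1.0]          # distribution concentrated on the fittest type
d0 = kl(q, p)
for _ in range(1000):
    p = replicator_step(p, f)
d1 = kl(q, p)
print(d0, d1)  # relative information w.r.t. the fittest type decreases
```

Here D(q||p) is just log(1/p_3), and since the fittest type's share grows under the dynamics, the divergence falls monotonically.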


Pachter, a computational biologist and Caltech alumnus, returns to the Institute to study the role and function of RNA.

*Lior Pachter (BS ’94) is Caltech’s new Bren Professor of Computational Biology. Recently, he was elected a fellow of the International Society for Computational Biology, one of the highest honors in the field. We sat down with him to discuss the emerging field of applying computational methods to biology problems, the transition from mathematics to biology, and his return to Pasadena.*

Computational biology is the art of developing and applying computational methods to answer questions in biology, such as studying how proteins fold, identifying genes that are associated with diseases, or inferring human population histories from genetic data. I have interests in both the development of computational methods and in answering specific biology questions, primarily related to the function of RNA, a molecule central to the function of cells. RNA molecules transmit information through their roles as products of DNA transcription and as the precursors to translation to protein; they also act as enzymes catalyzing biochemical reactions. I am interested in understanding these functions of RNA through tools that involve the combination of computational methods with sequencing methods that together allow for high-resolution probing of RNA activity and structure in cells.

During my PhD studies at MIT, I took a course in computational biology. In the course of working on a final project for the class, I got connected to the Human Genome Project—a large-scale endeavor to identify the full DNA sequence of a human genome—and I found the biology and associated math questions very interesting. This led me to change my intended direction of research from algebraic combinatorics to computational biology, and my interests expanded from math to statistics, computer science, and genomics.

It’s not very common. However, many prominent genomics biologists have backgrounds in mathematics, computer science, or statistics. For example, one of my mentors in graduate school was Eric Lander, the director of the Broad Institute of MIT and Harvard, who received a PhD in mathematics and then transitioned to working in biology. His transition, like mine years later, was sparked by the possibilities and challenges of utilizing genome sequencing to understand biology.

While genome sequencing has obviously been useful in revealing the sequences that are involved in coding various aspects of the molecular biology of the cell, it has had a secondary impact that is less obvious at first glance. The low cost and high throughput (the ability to process large volumes of material) of genome sequencing allowed for a more “big-data” approach to biology, so that experiments that previously could only be applied to individual genes could suddenly be applied in parallel to all of the genes in the genome. The design and analysis of such experiments demand much more sophisticated mathematics and statistics than had previously been needed in biology.

A result of the scale of these new experiments is the emergence of very large data sets in biology whose interpretation demands the application of state-of-the-art computer science methods. The problems require interdisciplinary dexterity and involve not only management of large data sets but also the development of novel abstract frameworks for understanding their structure. For example, there’s a new technique called RNA-seq, developed by biologists including Barbara Wold [Caltech’s Bren Professor of Molecular Biology], which involves measuring transcription—the process of copying segments of DNA into RNA—in cells. The RNA-seq technique consists of transforming RNA molecules into DNA sequences that allow the researchers to identify and count the original RNA molecules. The development of this technique required not only novel biochemistry and molecular biology, but also new definitions and ideas for how to think about transcriptomes, which are the sets of all the RNA molecules in a cell. I work on improvements to the assay, as well as the development of the associated statistics, computer science, and mathematics.

I was born in Israel and moved to South Africa when I was two. I lived there until moving to Palo Alto, California, at 15. After high school, I studied mathematics at Caltech and pursued my PhD in applied mathematics at MIT. I spent time at Berkeley as a postdoc before becoming professor of mathematics, molecular and cell biology, and computer science, and I held the Raymond and Beverly Sackler Chair in Computational Biology. I joined the Caltech faculty in early 2017.

It’s a great pleasure. As an undergrad, I made very strong connections with very special people who just had a pure love of science. I’ve always missed the unique culture and atmosphere at Caltech and, returning now as a professor, I can feel the spirit of the Institute—an intense love of science emanating from individuals that is unlike anywhere else. It’s a homecoming of sorts.

Source: A Conversation with Lior Pachter (BS ’94) | Caltech


The interplay between structural connections and emerging information flow in the human brain remains an open research problem. A recent study observed global patterns of directional information flow in empirical data using the measure of transfer entropy. For higher frequency bands, the overall direction of information flow was from posterior to anterior regions, whereas an anterior-to-posterior pattern was observed in lower frequency bands. In this study, we applied a simple Susceptible-Infected-Susceptible (SIS) epidemic spreading model on the human connectome with the aim of revealing the topological properties of the structural network that give rise to these global patterns. We found that direct structural connections induced higher transfer entropy between two brain regions and that transfer entropy decreased with increasing distance between nodes (in terms of hops in the structural network). Applying the SIS model, we were able to confirm the empirically observed opposite information flow patterns, and posterior hubs in the structural network seem to play a dominant role in the network dynamics. For small time scales, when these hubs acted as strong receivers of information, the global pattern of information flow was in the posterior-to-anterior direction, and in the opposite direction when they were strong senders. Our analysis suggests that these global patterns of directional information flow are the result of an unequal spatial distribution of the structural degree between posterior and anterior regions, and their directions seem to be linked to different time scales of the spreading process.
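For readers who want to see the model itself, here is a minimal SIS simulation on a toy hub-plus-ring network (my own illustration; the study uses the empirical human connectome and different parameters):

```python
import random

def sis_step(state, neighbors, beta, mu, rng):
    """One synchronous update of a Susceptible-Infected-Susceptible model.
    state[i] is 1 (infected) or 0 (susceptible)."""
    new = state[:]
    for i, s in enumerate(state):
        if s == 1:
            if rng.random() < mu:          # recovery with probability mu
                new[i] = 0
        else:
            # independent infection attempt from each infected neighbor
            for j in neighbors[i]:
                if state[j] == 1 and rng.random() < beta:
                    new[i] = 1
                    break
    return new

# toy network: a hub (node 0) connected to all others, plus a ring
n = 20
neighbors = {i: set() for i in range(n)}
for i in range(1, n):
    neighbors[0].add(i)
    neighbors[i].add(0)
for i in range(n):
    j = (i + 1) % n
    neighbors[i].add(j)
    neighbors[j].add(i)

rng = random.Random(1)
state = [1] * 5 + [0] * (n - 5)            # seed a few infections
prevalence = []
for _ in range(500):
    state = sis_step(state, neighbors, beta=0.3, mu=0.2, rng=rng)
    prevalence.append(sum(state))
print(sum(prevalence[-100:]) / 100)        # endemic prevalence estimate
```

In this toy setting the hub keeps the epidemic endemic; on the connectome, the analogous high-degree posterior hubs are what the study finds dominating the spreading dynamics.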


Epigenetics refers to information transmitted during cell division other than the DNA sequence per se, and it is the language that distinguishes stem cells from somatic cells, one organ from another, and even identical twins from each other. In contrast to the DNA sequence, the epigenome is relatively susceptible to modification by the environment as well as stochastic perturbations over time, adding to phenotypic diversity in the population. Despite its strong ties to the environment, epigenetics has never been well reconciled to evolutionary thinking, and in fact there is now strong evidence against the transmission of so-called “epi-alleles,” i.e. epigenetic modifications that pass through the germline.

However, genetic variants that regulate stochastic fluctuation of gene expression and phenotypes in the offspring appear to be transmitted as an epigenetic or even Lamarckian trait. Furthermore, even the normal process of cellular differentiation from a single cell to a complex organism is not understood well from a mathematical point of view. There is increasingly strong evidence that stem cells are highly heterogeneous and in fact stochasticity is necessary for pluripotency. This process appears to be tightly regulated through the epigenome in development. Moreover, in these biological contexts, “stochasticity” is hardly synonymous with “noise”, which often refers to variation which obscures a “true signal” (e.g., measurement error) or which is structural, as in physics (e.g., quantum noise). In contrast, “stochastic regulation” refers to purposeful, programmed variation; the fluctuations are random but there is no true signal to mask.

This workshop will serve as a forum for scientists and engineers with an interest in computational biology to explore the role of stochasticity in regulation, development and evolution, and its epigenetic basis. Just as thinking about stochasticity was transformative in physics and in some areas of biology, it promises to fundamentally transform modern genetics and help to explain phase transitions such as differentiation and cancer.

This workshop will include a poster session; a request for poster titles will be sent to registered participants in advance of the workshop.

Speaker List:

Adam Arkin (Lawrence Berkeley Laboratory)

Gábor Balázsi (SUNY Stony Brook)

Domitilla Del Vecchio (Massachusetts Institute of Technology)

Michael Elowitz (California Institute of Technology)

Andrew Feinberg (Johns Hopkins University)

Don Geman (Johns Hopkins University)

Anita Göndör (Karolinska Institutet)

John Goutsias (Johns Hopkins University)

Garrett Jenkinson (Johns Hopkins University)

Andre Levchenko (Yale University)

Olgica Milenkovic (University of Illinois)

Johan Paulsson (Harvard University)

Leor Weinberger (University of California, San Francisco)


Open for submission now

Deadline for manuscript submissions: 31 August 2017

A special issue of *Entropy* (ISSN 1099-4300).
## Special Issue Editor

## Special Issue Information


Dear Colleagues,

Whereas Bayesian inference has now achieved mainstream acceptance and is widely used throughout the sciences, associated ideas such as the principle of maximum entropy (implicit in the work of Gibbs, and developed further by Ed Jaynes and others) have not. There are strong arguments that the principle (and variations, such as maximum relative entropy) is of fundamental importance, but the literature also contains many misguided attempts at applying it, leading to much confusion.

This Special Issue will focus on Bayesian inference and MaxEnt. Some open questions that spring to mind are: Which proposed ways of using entropy (and its maximisation) in inference are legitimate, which are not, and why? Where can we obtain constraints on probability assignments, the input needed by the MaxEnt procedure?

More generally, papers exploring any interesting connections between probabilistic inference and information theory will be considered. Papers presenting high quality applications, or discussing computational methods in these areas, are also welcome.

Dr. Brendon J. Brewer

*Guest Editor*

**Submission**

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. *Entropy* is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs).

No papers have been published in this special issue yet.

Source: Entropy | Special Issue : Maximum Entropy and Bayesian Methods
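As a concrete illustration of the MaxEnt procedure at issue, here is a sketch (my own) of Jaynes' classic Brandeis dice example: maximize the entropy of a distribution on faces 1-6 subject to an observed mean of 4.5. The solution is exponential in the face value, with the Lagrange multiplier found by bisection.

```python
from math import exp

def maxent_dist(values, mean_target, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy distribution on `values` with a fixed mean:
    p_i proportional to exp(-lam * v_i), with lam found by bisection."""
    def mean_for(lam):
        w = [exp(-lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    # mean_for is decreasing in lam, so bisect on lam
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [exp(-lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Jaynes' dice example: faces 1..6, observed mean 4.5 instead of 3.5
p = maxent_dist([1, 2, 3, 4, 5, 6], 4.5)
print([round(pi, 4) for pi in p])
mean = sum(v * pi for v, pi in zip(range(1, 7), p))
print(mean)  # ~4.5
```

The constraint (the observed mean) is exactly the kind of input the MaxEnt procedure needs, which is one of the open questions the call raises.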

This book originated from a series of papers which were published in "Die Naturwissenschaften" in 1977/78. Its division into three parts is the reflection of a logical structure, which may be abstracted in the form of three theses:

A. Hypercycles are a principle of natural self-organization allowing an integration and coherent evolution of a set of functionally coupled self-replicative entities.

B. Hypercycles are a novel class of nonlinear reaction networks with unique properties, amenable to a unified mathematical treatment.

C. Hypercycles are able to originate in the mutant distribution of a single Darwinian quasi-species through stabilization of its diverging mutant genes. Once nucleated, hypercycles evolve to higher complexity by a process analogous to gene duplication and specialization.

In order to outline the meaning of the first statement we may refer to another principle of material self-organization, namely to Darwin's principle of natural selection. This principle as we see it today represents the only understood means for creating information, be it the blueprint for a complex living organism which evolved from less complex ancestral forms, or be it a meaningful sequence of letters the selection of which can be simulated by evolutionary model games.


The intimate relation between biology and cognition can be formally examined through statistical models constrained by the asymptotic limit theorems of communication theory, augmented by methods from statistical mechanics and nonequilibrium thermodynamics. Cognition, often involving submodules that act as information sources, is ubiquitous across the living state. Less metabolic free energy is consumed by permitting crosstalk between biological information sources than by isolating them, leading to evolutionary exaptations that assemble shifting, tunable cognitive arrays at multiple scales and levels of organization to meet dynamic patterns of threat and opportunity. Cognition is thus necessary for life, but it is not sufficient: An organism represents a highly patterned outcome of path-dependent, blind variation, selection, interaction, and chance extinction in the context of an adequate flow of free energy and an environment fit for development. Complex, interacting cognitive processes within an organism both record and instantiate those evolutionary and developmental trajectories.


A system responding to a stochastic driving signal can be interpreted as computing, by means of its dynamics, an implicit model of the environmental variables. The system’s state retains information about past environmental fluctuations, and a fraction of this information is predictive of future ones. The remaining nonpredictive information reflects model complexity that does not improve predictive power, and thus represents the ineffectiveness of the model. We expose the fundamental equivalence between this model inefficiency and thermodynamic inefficiency, measured by dissipation. Our results hold arbitrarily far from thermodynamic equilibrium and are applicable to a wide range of systems, including biomolecular machines. They highlight a profound connection between the effective use of information and efficient thermodynamic operation: any system constructed to keep memory about its environment and to operate with maximal energetic efficiency has to be predictive.


Whether by virtue of being prepared in a slowly relaxing, high-free energy initial condition, or because they are constantly dissipating energy absorbed from a strong external drive, many systems subject to thermal fluctuations are not expected to behave in the way they would at thermal equilibrium. Rather, the probability of finding such a system in a given microscopic arrangement may deviate strongly from the Boltzmann distribution, raising the question of whether thermodynamics still has anything to tell us about which arrangements are the most likely to be observed. In this work, we build on past results governing nonequilibrium thermodynamics and define a generalized Helmholtz free energy that exactly delineates the various factors that quantitatively contribute to the relative probabilities of different outcomes in far-from-equilibrium stochastic dynamics. By applying this expression to the analysis of two examples—namely, a particle hopping in an oscillating energy landscape and a population composed of two types of exponentially growing self-replicators—we illustrate a simple relationship between outcome-likelihood and dissipative history. In closing, we discuss the possible relevance of such a thermodynamic principle for our understanding of self-organization in complex systems, paying particular attention to a possible analogy to the way evolutionary adaptations emerge in living things.


Notions like meaning, signal, and intentionality are difficult to relate to a physical world. I study a purely physical definition of "meaningful information", from which these notions can be derived. It is inspired by a model recently illustrated by Kolchinsky and Wolpert, and improves on Dretske's classic work on the relation between knowledge and information. I discuss what makes a physical process into a "signal".


It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history. Two simple, but representative, models of bistable devices are subjected to a more detailed analysis of switching kinetics to yield the relationship between speed and energy dissipation, and to estimate the effects of errors induced by thermal fluctuations.

A classical paper in the history of entropy.
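The bound itself is a one-line calculation: erasing one bit must dissipate at least kT ln 2, about 3 zeptojoules at room temperature.

```python
from math import log

k_B = 1.380649e-23      # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0               # room temperature, K

# Landauer's bound: erasing one bit dissipates at least kT ln 2
e_min = k_B * T * log(2)
print(e_min)            # ~2.87e-21 J per bit

# for comparison: heat to erase a gigabyte at the Landauer limit
print(e_min * 8e9)      # ~2.3e-11 J
```

Real hardware dissipates many orders of magnitude more than this per logical operation, which is why the limit remained of mostly conceptual interest for decades.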


Understanding the emergence and robustness of life requires accounting for both chemical specificity and statistical generality. We argue that the reverse of a common observation—that life requires a source of free energy to persist—provides an appropriate principle to understand the emergence, organization, and persistence of life on earth. Life, and in particular core biochemistry, has many properties of a relaxation channel that was driven into existence by free energy stresses from the earth's geochemistry. Like lightning or convective storms, the carbon, nitrogen, and phosphorus fluxes through core anabolic pathways make sense as the order parameters in a phase transition from an abiotic to a living state of the geosphere. Interpreting core pathways as order parameters would both explain their stability over billions of years, and perhaps predict the uniqueness of specific optimal chemical pathways.

[1]

H. Morowitz and E. Smith, “Energy flow and the organization of life,” *Complexity*, vol. 13, no. 1. Wiley-Blackwell, pp. 51–59, 2007 [Online]. Available: http://dx.doi.org/10.1002/cplx.20191


Driven by technological progress, human life expectancy has increased greatly since the nineteenth century. Demographic evidence has revealed an ongoing reduction in old-age mortality and a rise of the maximum age at death, which may gradually extend human longevity. Together with observations that lifespan in various animal species is flexible and can be increased by genetic or pharmaceutical intervention, these results have led to suggestions that longevity may not be subject to strict, species-specific genetic constraints. Here, by analysing global demographic data, we show that improvements in survival with age tend to decline after age 100, and that the age at death of the world’s oldest person has not increased since the 1990s. Our results strongly suggest that the maximum lifespan of humans is fixed and subject to natural constraints.

[1]

X. Dong, B. Milholland, and J. Vijg, “Evidence for a limit to human lifespan.,” *Nature*, vol. 538, no. 7624, pp. 257–259, Oct. 2016. [PubMed]


Almost 40 years ago, Leonard Hayflick discovered that cultured normal human cells have limited capacity to divide, after which they become senescent — a phenomenon now known as the ‘Hayflick limit’. Hayflick's findings were strongly challenged at the time, and continue to be questioned in a few circles, but his achievements have enabled others to make considerable progress towards understanding and manipulating the molecular mechanisms of ageing.

[1]

J. Shay and W. Wright, “Hayflick, his limit, and cellular ageing.,” *Nat Rev Mol Cell Biol*, vol. 1, no. 1, pp. 72–6, Oct. 2000. [PubMed]


Biomolecular systems like molecular motors or pumps, transcription and translation machinery, and other enzymatic reactions, can be described as Markov processes on a suitable network. We show quite generally that, in a steady state, the dispersion of observables, like the number of consumed or produced molecules or the number of steps of a motor, is constrained by the thermodynamic cost of generating it. An uncertainty ε requires at least a cost of 2k_B T/ε^2 independent of the time required to generate the output.

[1]

A. C. Barato and U. Seifert, “Thermodynamic Uncertainty Relation for Biomolecular Processes,” *Physical Review Letters*, vol. 114, no. 15. American Physical Society (APS), 15-Apr-2015 [Online]. Available: http://dx.doi.org/10.1103/PhysRevLett.114.158101 [Source]
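The relation is easy to verify in the simplest case, a biased random walk with forward rate k+ and backward rate k- (a sketch of my own, not from the paper). The product of the entropy production and the squared relative uncertainty is always at least 2, approaching the bound near equilibrium:

```python
from math import log

def tur_ratio(k_plus, k_minus):
    """For a biased random walk (rates k+ > k-), compare the entropy
    production against the thermodynamic uncertainty bound.
    Returns cost / (2 / eps^2); the TUR says this is >= 1."""
    t = 1000.0
    mean = (k_plus - k_minus) * t          # mean net displacement
    var = (k_plus + k_minus) * t           # variance of net displacement
    eps2 = var / mean**2                   # squared relative uncertainty
    # entropy produced over time t, in units of k_B
    sigma = (k_plus - k_minus) * log(k_plus / k_minus) * t
    return sigma * eps2 / 2.0

for r in [1.01, 2.0, 10.0, 100.0]:
    print(r, tur_ratio(r, 1.0))           # always >= 1; -> 1 as r -> 1
```

Note that the time t cancels: for this model the ratio reduces to ln(r)(r+1)/(r-1)/2 with r = k+/k-, which is at least 1 for all r > 1.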


Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the human “cognitive niche”—tool use and social cooperation—to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems.

[1]

A. D. Wissner-Gross and C. E. Freer, “Causal Entropic Forces,” *Physical Review Letters*, vol. 110, no. 16. American Physical Society (APS), 19-Apr-2013 [Online]. Available: http://dx.doi.org/10.1103/PhysRevLett.110.168702 [Source]


Life was long thought to obey its own set of rules. But as simple systems show signs of lifelike behavior, scientists are arguing about whether this apparent complexity is all a consequence of thermodynamics.

This is a nice little general interest article by Philip Ball that does a relatively good job of covering several of my favorite topics (information theory, biology, complexity) for the layperson. While it stays relatively basic, it links to a handful of really great references, many of which I’ve already read, though several appear to be new to me. [1][2][3][4][5][6][7][8][9][10]

While Ball covers a broad range of topics in his work, he’s certainly one of the best journalists writing in this particular area today. I highly recommend his work to those who find this area interesting.

[1]

E. Mayr, *What Makes Biology Unique?* Cambridge University Press, 2004.

[2]

A. Wissner-Gross and C. Freer, “Causal entropic forces.,” *Phys Rev Lett*, vol. 110, no. 16, p. 168702, Apr. 2013. [PubMed]

[3]

A. Barato and U. Seifert, “Thermodynamic uncertainty relation for biomolecular processes.,” *Phys Rev Lett*, vol. 114, no. 15, p. 158101, Apr. 2015. [PubMed]

[4]

J. Shay and W. Wright, “Hayflick, his limit, and cellular ageing.,” *Nat Rev Mol Cell Biol*, vol. 1, no. 1, pp. 72–6, Oct. 2000. [PubMed]

[5]

X. Dong, B. Milholland, and J. Vijg, “Evidence for a limit to human lifespan,” *Nature*, vol. 538, no. 7624. Springer Nature, pp. 257–259, 05-Oct-2016 [Online]. Available: http://dx.doi.org/10.1038/nature19793

[6]

H. Morowitz and E. Smith, “Energy Flow and the Organization of Life,” *Santa Fe Institute*, 07-Aug-2006. [Online]. Available: http://samoa.santafe.edu/media/workingpapers/06-08-029.pdf. [Accessed: 03-Feb-2017]

[7]

R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” *IBM Journal of Research and Development*, vol. 5, no. 3. IBM, pp. 183–191, Jul-1961 [Online]. Available: http://dx.doi.org/10.1147/rd.53.0183

[8]

C. Rovelli, “Meaning = Information + Evolution,” *arXiv*, Nov. 2006 [Online]. Available: https://arxiv.org/abs/1611.02420

[9]

N. Perunov, R. A. Marsland, and J. L. England, “Statistical Physics of Adaptation,” *Physical Review X*, vol. 6, no. 2. American Physical Society (APS), 16-Jun-2016 [Online]. Available: http://dx.doi.org/10.1103/PhysRevX.6.021036 [Source]

[10]

S. Still, D. A. Sivak, A. J. Bell, and G. E. Crooks, “Thermodynamics of Prediction,” *Physical Review Letters*, vol. 109, no. 12. American Physical Society (APS), 19-Sep-2012 [Online]. Available: http://dx.doi.org/10.1103/PhysRevLett.109.120604 [Source]


We discuss properties of the "beamsplitter addition" operation, which provides a non-standard scaled convolution of random variables supported on the non-negative integers. We give a simple expression for the action of beamsplitter addition using generating functions. We use this to give a self-contained and purely classical proof of a heat equation and de Bruijn identity, satisfied when one of the variables is geometric.


Abstract: Despite the obvious advantage of simple life forms capable of fast replication, different levels of cognitive complexity have been achieved by living systems in terms of their potential to cope with environmental uncertainty. Against the inevitable cost associated with detecting environmental cues and responding to them in adaptive ways, we conjecture that the potential for predicting the environment can overcome the expenses associated with maintaining costly, complex structures. We present a minimal formal model grounded in information theory and selection, in which successive generations of agents are mapped into transmitters and receivers of a coded message. Our agents are guessing machines and their capacity to deal with environments of different complexity defines the conditions to sustain more complex agents.


100 years after Smoluchowski introduced his approach to stochastic processes, they are now at the basis of mathematical and physical modeling in cellular biology: they are used, for example, to analyse and to extract features from large numbers (tens of thousands) of single molecular trajectories, or to study the diffusive motion of molecules, proteins, or receptors. Stochastic modeling is a new step in large data analysis that serves to extract cell biology concepts. We review here Smoluchowski's approach to stochastic processes and provide several applications for coarse-graining diffusion and studying polymer models for understanding nuclear organization, and finally, we discuss the stochastic jump dynamics of telomeres across cell division and stochastic gene regulation.

[1]

D. Holcman and Z. Schuss, “100 years after Smoluchowski: stochastic processes in cell biology,” *arXiv*, 26-Dec-2016. [Online]. Available: https://arxiv.org/abs/1612.08381. [Accessed: 03-Jan-2017]


Paleoclimate records are extremely rich sources of information about the past history of the Earth system. We take an information-theoretic approach to analyzing data from the WAIS Divide ice core, the longest continuous and highest-resolution water isotope record yet recovered from Antarctica. We use weighted permutation entropy to calculate the Shannon entropy rate from these isotope measurements, which are proxies for a number of different climate variables, including the temperature at the time of deposition of the corresponding layer of the core. We find that the rate of information production in these measurements reveals issues with analysis instruments, even when those issues leave no visible traces in the raw data. These entropy calculations also allow us to identify a number of intervals in the data that may be of direct relevance to paleoclimate interpretation, and to form new conjectures about what is happening in those intervals—including periods of abrupt climate change.
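The entropy measure used here, weighted permutation entropy, is straightforward to sketch. Below is a minimal Python version following the standard variance-weighted ordinal-pattern recipe; it is my own illustration, not the authors' analysis code, and the series lengths and parameters are arbitrary.

```python
import math
from collections import defaultdict

import numpy as np

def weighted_permutation_entropy(x, m=3, tau=1):
    """Weighted permutation entropy of a 1-D series.

    Each length-m window contributes its ordinal pattern, weighted by the
    window's variance, so large-amplitude structure counts more than tiny
    fluctuations. Returns entropy in bits, normalized to [0, 1] by log2(m!).
    """
    x = np.asarray(x, dtype=float)
    weights = defaultdict(float)
    total = 0.0
    for i in range(len(x) - (m - 1) * tau):
        window = x[i : i + m * tau : tau]
        pattern = tuple(np.argsort(window))   # ordinal pattern of the window
        w = np.var(window)                    # weight by local variance
        weights[pattern] += w
        total += w
    if total == 0:  # constant series: a single pattern, zero entropy
        return 0.0
    h = -sum((w / total) * math.log2(w / total) for w in weights.values())
    return h / math.log2(math.factorial(m))

rng = np.random.default_rng(1)
h_noise = weighted_permutation_entropy(rng.normal(size=5000))  # near 1
h_sine = weighted_permutation_entropy(np.sin(np.linspace(0, 60, 5000)))  # well below 1
print(h_noise, h_sine)
```

White noise is maximally unpredictable, so all ordinal patterns are about equally weighted and the normalized entropy sits near 1; a smooth sine is dominated by monotone patterns and scores much lower, which is exactly the contrast the ice-core analysis exploits.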

Saw the reference in *Predicting unpredictability: Information theory offers new way to read ice cores* [1]

[1]

“Predicting unpredictability: Information theory offers new way to read ice cores,” *Phys.org*. [Online]. Available: http://phys.org/news/2016-12-unpredictability-theory-ice-cores.html. [Accessed: 12-Dec-2016]


The remarkable progress of quantum information theory (QIT) has allowed the formulation of mathematical theorems for the conditions under which data transmission or data processing occurs with a non-negative entropy gain. However, the relation of these results, formulated in terms of entropy gain in quantum channels, to the temporal evolution of real physical systems is not thoroughly understood. Here we build on the mathematical formalism provided by QIT to formulate the quantum H-theorem in terms of physical observables. We discuss the manifestation of the second law of thermodynamics in quantum physics and uncover special situations where the second law can be violated. We further demonstrate that the typical evolution of energy-isolated quantum systems occurs with non-diminishing entropy. [1]

[1]

G. B. Lesovik, A. V. Lebedev, I. A. Sadovskyy, M. V. Suslov, and V. M. Vinokur, “H-theorem in quantum physics,” *Scientific Reports*, vol. 6. Springer Nature, p. 32815, 12-Sep-2016 [Online]. Available: http://dx.doi.org/10.1038/srep32815


“Quantum-based demons” sound like they'd be at home in 'Stranger Things.'


The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop.

I just found out about this from John Carlos Baez and wish I could go! How have I not heard about it?

## Statistical Physics, Information Processing, and Biology

## Workshop

November 16, 2016 – November 18, 2016

9:00 AM

Noyce Conference Room

Abstract.

This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store, and process information (i.e., perform computation). However, to manipulate information in this way they require a steady flux of free energy from their environments. These two inter-related attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems by gaining deeper insight into the relation between the need for biological systems to process information and the free energy they need to pay for that processing. The goal of this workshop is to address these issues by focusing on a set of three specific questions:

- How has the fraction of free energy flux on earth that is used by biological computation changed with time?
- What is the free energy cost of biological computation / function?
- What is the free energy cost of the evolution of biological computation / function?
In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
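For the second question, physics supplies a hard floor: Landauer's principle says erasing one bit costs at least k_B T ln 2 of free energy. A quick order-of-magnitude sketch (the ATP energy figure is a rough, assumed value used for comparison only):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # physiological temperature, K (~37 degrees C)

# Landauer's principle: erasing one bit costs at least k_B * T * ln 2.
e_bit = k_B * T * math.log(2)
print(f"minimum cost per bit erased: {e_bit:.3e} J")  # ~3.0e-21 J

# Rough order-of-magnitude comparison: hydrolysis of one ATP molecule
# releases on the order of 5e-20 J, enough to "pay" for ~17 Landauer bits.
atp = 5e-20
print(f"Landauer bits per ATP: {atp / e_bit:.1f}")
```

Real biological computation runs orders of magnitude above this bound, which is precisely why the gap between the physical limit and the actual cost is an interesting workshop question.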

Purpose: Research Collaboration

SFI Hosts: David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert


This is the signal for the second.

How can you *not* follow this Twitter account?!

Now I’m waiting for a Shannon bot and a Wiener bot. Maybe a John McCarthy bot would be apropos too?!


Hundreds of researchers in a collaborative project called "It from Qubit" say space and time may spring up from the quantum entanglement of tiny bits of information.


Learn about quantum computation and quantum information in this advanced graduate level course from MIT.

## About this course

Already know something about quantum mechanics, quantum bits and quantum logic gates, but want to design new quantum algorithms, and explore multi-party quantum protocols? This is the course for you!

In this advanced graduate physics course on quantum computation and quantum information, we will cover:

- The formalism of quantum errors (density matrices, operator sum representations)
- Quantum error correction codes (stabilizers, graph states)
- Fault-tolerant quantum computation (normalizers, Clifford group operations, the Gottesman-Knill Theorem)
- Models of quantum computation (teleportation, cluster, measurement-based)
- Quantum Fourier transform-based algorithms (factoring, simulation)
- Quantum communication (noiseless and noisy coding)
- Quantum protocols (games, communication complexity)
Research problem ideas are presented along the journey.
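As a small taste of the first topic (my own NumPy sketch, not course material), here is the operator-sum representation of the bit-flip channel acting on a density matrix:

```python
import numpy as np

# Operator-sum (Kraus) representation of the bit-flip channel with flip
# probability p: rho -> (1 - p) rho + p X rho X.
p = 0.1
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kraus = [np.sqrt(1 - p) * I, np.sqrt(p) * X]

# Completeness: sum_k E_k^dagger E_k = I guarantees trace preservation.
assert np.allclose(sum(E.conj().T @ E for E in kraus), I)

rho = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state |0><0|
rho_out = sum(E @ rho @ E.conj().T for E in kraus)
print(rho_out)  # diag(0.9, 0.1): the qubit flipped with probability p
```

The same pattern (a list of Kraus operators plus the completeness check) generalizes directly to the phase-flip and depolarizing channels covered in the formalism of quantum errors.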

## What you’ll learn

- Formalisms for describing errors in quantum states and systems
- Quantum error correction theory
- Fault-tolerant quantum procedure constructions
- Models of quantum computation beyond gates
- Structures of exponentially-fast quantum algorithms
- Multi-party quantum communication protocols
## Meet the instructor

Isaac Chuang, Professor of Electrical Engineering and Computer Science and Professor of Physics, MIT


Logical probability theory was developed as a quantitative measure based on Boole's logic of subsets, but information theory was developed into a mature theory by Claude Shannon with no such connection to logic. A recent development in logic changes this situation. In category theory, the notion of a subset is dual to the notion of a quotient set or partition, and recently the logic of partitions has been developed in a parallel relationship to the Boolean logic of subsets (subset logic is usually mis-specified as the special case of propositional logic). What, then, is the quantitative measure based on partition logic in the same sense that logical probability theory is based on subset logic? It is a measure of information that is named "logical entropy" in view of that logical basis. This paper develops the notion of logical entropy and the basic notions of the resulting logical information theory. An extensive comparison is then made with the corresponding notions based on Shannon entropy.
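For concreteness, here is a small side-by-side sketch of the two measures from their definitions (my own illustration): logical entropy h(p) = 1 − Σ pᵢ², the probability that two independent draws land in different blocks of the partition, versus Shannon entropy H(p) = −Σ pᵢ log₂ pᵢ.

```python
import math
from fractions import Fraction

def logical_entropy(p):
    """h(p) = 1 - sum p_i^2: probability that two independent draws
    from the distribution fall in *different* blocks of the partition."""
    return 1 - sum(pi**2 for pi in p)

def shannon_entropy(p):
    """H(p) = -sum p_i log2 p_i, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]
print(logical_entropy(p))                       # 5/8
print(shannon_entropy([float(x) for x in p]))   # 1.5 bits
```

Using exact fractions makes the contrast visible: logical entropy is a rational-valued probability of distinction, while Shannon entropy is a logarithmic measure in bits.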

Ellerman is visiting at UC Riverside at the moment. Given the overlap between information theory and category theory, I’m curious whether he’s working with John Carlos Baez, or whether Baez is aware of this work.

Based on a cursory look at his website(s), I’m going to have to start following more of this work.


If you’re not following him *everywhere (?)* yet, start with some of the sites below (or let me know if I’ve missed anything).

His most recent paper on arXiv:

Low Algorithmic Complexity Entropy-deceiving Graphs | .pdf

A common practice in the estimation of the complexity of objects, in particular of graphs, is to rely on graph- and information-theoretic measures. Here, using integer sequences with properties such as Borel normality, we explain how these measures are not independent of the way in which a single object, such as a graph, can be described. From descriptions that can reconstruct the same graph and are therefore essentially translations of the same description, we will see that not only is it necessary to pre-select a feature of interest where there is one when applying a computable measure such as Shannon Entropy, and to make an arbitrary selection where there is not, but that more general properties, such as the causal likeliness of a graph as a measure (as opposed to randomness), can be largely misrepresented by computable measures such as Entropy and Entropy rate. We introduce recursive and non-recursive (uncomputable) graphs and graph constructions based on integer sequences, whose different lossless descriptions have disparate Entropy values, thereby enabling the study and exploration of a measure’s range of applications and demonstrating the weaknesses of computable measures of complexity.

Subjects: Information Theory (cs.IT); Computational Complexity (cs.CC); Combinatorics (math.CO)

Cite as: arXiv:1608.05972 [cs.IT] (or arXiv:1608.05972v4 [cs.IT])
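The abstract's central point, that computable entropy measures depend on the chosen description rather than on the object itself, can be seen in a toy example (mine, not the paper's): two lossless encodings of the same small cycle graph yield very different symbol entropies.

```python
import math
from collections import Counter

def shannon_rate(seq):
    """Empirical Shannon entropy (bits per symbol) of a sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two lossless descriptions of the same cycle graph on 8 vertices:
n = 8
# (1) flattened adjacency matrix: mostly 0s, so low symbol entropy
adj = [1 if abs(i - j) in (1, n - 1) else 0 for i in range(n) for j in range(n)]
# (2) edge list written as vertex labels: each label appears twice,
#     so the symbol distribution is uniform and entropy is maximal
edges = [v for i in range(n) for v in (i, (i + 1) % n)]

h_adj = shannon_rate(adj)      # about 0.81 bits/symbol
h_edges = shannon_rate(edges)  # exactly 3 bits/symbol (uniform over 8 labels)
print(h_adj, h_edges)
```

Both sequences reconstruct the identical graph, yet their per-symbol entropies differ widely, which is the description-dependence the paper formalizes with Borel-normal integer sequences.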

Yesterday he also posted two new introductory videos to his YouTube channel. There’s nothing overly technical here, but they’re nice short productions that introduce some of his work. (I wish more scientists did communication like this.) I’m hoping he’ll post them to his blog and write a bit more there in the future as well.

Relevant literature:

- A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity by Hector Zenil, Fernando Soler-Toscano, Narsis A. Kiani, Santiago Hernández-Orozco, Antonio Rueda-Toicen
- Calculating Kolmogorov Complexity from the Output Frequency Distributions of Small Turing Machines by F. Soler-Toscano, H. Zenil, J.-P. Delahaye and N. Gauvrit; PLoS ONE 9(5): e96223, 2014.
- Numerical Evaluation of Algorithmic Complexity for Short Strings: A Glance into the Innermost Structure of Randomness by Jean-Paul Delahaye, Hector Zenil; Applied Mathematics and Computation 219, pp. 63-77, 2012.

Relevant literature:

Cross-boundary Behavioural Reprogrammability Reveals Evidence of Pervasive Turing Universality by Jürgen Riedel, Hector Zenil

Preprint available at http://arxiv.org/abs/1510.01671

*Ed.: 9/7/16: Updated videos with links to relevant literature*


The book is a collection of papers written by a selection of eminent authors from around the world in honour of Gregory Chaitin's 60th birthday. This is a unique volume including technical contributions, philosophical papers, and essays. Hardcover: 468 pages; Publisher: World Scientific Publishing Company (October 18, 2007); ISBN: 9789812770820

@ChrisAldrich Why is Norbert Wiener the illustration for this?

Maybe I should have used Claude Shannon instead?


Download a pre-publication version of the book which will be published by Cambridge University Press. The book arises from notes of courses taught at the second year graduate level at the University of Minnesota and is suitable to accompany study at that level.


Jeremy England, a 31-year-old physicist at MIT, thinks he has found the underlying physics driving the origin and evolution of life.

References:

- Jeremy L. England Lab
- Talks
- *Statistical physics of self-replication*, Jeremy L. England; J. Chem. Phys. 139, 121923 (2013); doi: 10.1063/1.4818538
- *Statistical Physics of Adaptation*, Nikolai Perunov, Robert Marsland, and Jeremy England; *arXiv*, December 8, 2014
- *Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences*, Gavin E. Crooks; *arXiv*, February 1, 2008
- *Life as a manifestation of the second law of thermodynamics*, E.D. Schneider, J.J. Kay; doi:10.1016/0895-7177(94)90188-0; Mathematical and Computer Modelling, Volume 19, Issues 6–8, March–April 1994, Pages 25-48

`[ hypothesis user = 'chrisaldrich' tags = 'EnglandQM']`


Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience. A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work. The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.


A BIRS / Casa Matemática Oaxaca Workshop arriving in Oaxaca, Mexico Sunday, July 31 and departing Friday August 5, 2016

Evolutionary biology is a rapidly changing field, confronted with many societal problems of increasing importance: the impact of global changes, emerging epidemics, antibiotic-resistant bacteria… As a consequence, a number of new problems have appeared over the last decade, challenging the existing mathematical models. There is thus a demand in the biology community for new mathematical models allowing a qualitative or quantitative description of complex evolution problems. In particular, in the societal problems mentioned above, evolution is often interacting with phenomena of a different nature: interaction with other organisms, spatial dynamics, age structure, invasion processes, time/space heterogeneous environments… The development of mathematical models able to deal with those complex interactions is an ambitious task. Evolutionary biology is interested in the evolution of species. This process is a combination of several phenomena, some occurring at the individual level (e.g. mutations), others at the level of the entire population (competition for resources), often consisting of a very large number of individuals. The presence of very different scales is indeed at the core of theoretical evolutionary biology, and at the origin of many of the difficulties that biologists are facing. The development of new mathematical models thus requires the joint work of three different communities of researchers: specialists in partial differential equations, specialists in probability theory, and theoretical biologists. The goal of this workshop is to gather researchers from each of these communities who are currently working on closely related problems. Those communities usually have few interactions, and this meeting will give them the opportunity to discuss and work around a few biological themes that are especially challenging mathematically and play a crucial role in biological applications.

The role of a spatial structure in models for evolution: The introduction of a spatial structure in evolutionary biology models is often challenging. It is however well known that local adaptation is frequent in nature: field data show that the phenotypes of a given species change considerably across its range. The spatial dynamics of a population can also have a deep impact on its evolution. Assessing e.g. the impact of global changes on species requires the development of robust mathematical models for spatially structured populations.

The first type of model used by theoretical biologists for this kind of problem is the IBM (Individual-Based Model), which describes the evolution of a finite number of individuals, each characterized by a position and a phenotype. The mathematical analysis of IBMs in spatially homogeneous situations has provided several methods that have been successful in the theoretical biology community (see the theory of Adaptive Dynamics). By contrast, very few results exist so far on the qualitative properties of such models for spatially structured populations.

The second class of mathematical approaches for this type of problem is based on "infinite-dimensional" reaction-diffusion equations: the population is structured by a continuous phenotypic trait that affects its ability to disperse (diffusion) or to reproduce (reaction). This type of model can be obtained as a large-population limit of IBMs. The main difficulty with these models (in the simpler case of asexual populations) is the term modeling competition for resources, which appears as a non-local competition term. This term prevents the use of classical reaction-diffusion tools such as the comparison principle and sliding methods. Recently, promising progress has been made, based on tools from elliptic equations and/or Hamilton-Jacobi equations. The effects of small populations cannot, however, be observed in such models. The extension of these models and methods to include these effects will be discussed during the workshop.
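A toy discretization of such a non-local model, ∂ₜu = σ∂ₓₓu + u(a(x) − ∫u dx), makes the structure concrete: the integral term is the non-local competition that rules out comparison-principle arguments. This is my own minimal sketch with illustrative parameters, not a scheme from the workshop.

```python
import numpy as np

# Phenotype-structured population with non-local competition:
#   du/dt = sigma * u_xx + u * (a(x) - integral of u dx)
# All parameter values are illustrative.
nx, L = 200, 4.0
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
sigma, dt = 1e-3, 1e-3
a = 1.0 - 0.25 * x**2          # trait-dependent growth rate, peaked at x = 0

u = np.full(nx, 0.1)           # flat initial population density
for _ in range(20000):
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2  # diffusion in trait
    competition = np.sum(u) * dx                            # non-local term
    u = u + dt * (sigma * lap + u * (a - competition))
    u = np.maximum(u, 0.0)     # keep the density non-negative

peak = x[np.argmax(u)]
print(peak)  # the population concentrates near the optimal trait x = 0
```

With small trait diffusion the density concentrates around the fitness optimum, the concentration phenomenon that the Hamilton-Jacobi approach captures in the vanishing-diffusion limit.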

Eco-evolution models for sexual populations: An essential question, already posed by Darwin and Fisher, which remains unanswered (although it continues to intrigue evolutionary biologists), is: "Why is sexual reproduction maintained?" Indeed, this mode of reproduction is very costly, since it implies a large number of gametes, mating, and the choice of a compatible partner. During the meiosis phase, half of the genetic information is lost. Moreover, males have to be fed, and during sexual mating individuals are easy prey for predators. A partial answer is that recombination plays a major role by eliminating deleterious mutations more effectively and by increasing diversity. Nevertheless, this theory is not completely satisfying, and much research is devoted to understanding the evolution of sexual populations and to comparing asexual and sexual reproduction. Several models exist to describe the influence of sexual reproduction on evolving species. The difficulty, compared to asexual populations, is that a detailed description of the genetic basis of phenotypes is required, in particular one that includes recombination. For sexual populations, recombination plays a main role and is essential to understand. All models require strong biological simplifications; the development of relevant mathematical methods for such mechanisms then requires joint work between mathematicians and biologists. This workshop will be an opportunity to set up such collaborations.

The first type of model considers a small number of diploid loci (typically one locus and two alleles), while the rest of the genome is considered as fixed. One can then define the fitness of every combination of alleles. While allowing the modeling of specific sexual effects (such as dominant/recessive alleles), this approach neglects the rest of the genome (and it is known that phenotypes are typically influenced by a large number of loci). An opposite approach is to consider a large number of loci, each locus having a small and additive impact on the considered phenotype. This approach neglects many microscopic phenomena (epistasis, dominant/recessive alleles…), but allows the derivation of a deterministic model, called the infinitesimal model, in the case of a large population. The construction of a good mathematical framework for intermediate situations would be an important step forward.
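The infinitesimal model is easy to sketch (a hedged toy version with arbitrary parameter values): each offspring trait is the mid-parent value plus an independent Gaussian segregation deviation, and without selection the trait variance settles at twice the segregation variance.

```python
import numpy as np

rng = np.random.default_rng(2)

def next_generation(traits, segregation_sd=1.0):
    """One generation of the infinitesimal model without selection:
    offspring trait = mid-parent value + Gaussian segregation noise,
    with the noise variance independent of the parental traits."""
    n = len(traits)
    mothers = rng.choice(traits, size=n)
    fathers = rng.choice(traits, size=n)
    return (mothers + fathers) / 2 + rng.normal(0, segregation_sd, size=n)

pop = rng.normal(0, 2.0, size=10_000)   # initial trait variance 4
for _ in range(20):
    pop = next_generation(pop)

# Variance recursion V' = V/2 + V_seg has fixed point V* = 2 * V_seg = 2.
v = np.var(pop)
print(v)
```

The one-line recursion V' = V/2 + V_seg is the whole deterministic content of the neutral model; selection and spatial structure enter by biasing which parents are drawn.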

The evolution of recombination and sex is very sensitive to the interaction between several evolutionary forces (selection, migration, genetic drift…). Modeling these interactions is particularly challenging, and our understanding of the evolution of recombination is often limited by strong assumptions regarding demography, the relative strength of these different evolutionary forces, the lack of spatial structure… The development of a more general theoretical framework based on new mathematical developments would be particularly valuable.

Another problem, which has received little attention so far and is worth addressing, is the modeling of genetic-material exchanges in asexual populations. These phenomena are frequent in micro-organisms: horizontal gene transfer in bacteria, reassortment or recombination in viruses. They share some features with sexual reproduction. It would be interesting to see whether their effects can be treated as a perturbation of existing asexual models. This would be particularly interesting in spatially structured populations (e.g. viral epidemics), since the mathematical analysis of spatially structured asexual populations is improving rapidly.

Modeling in evolutionary epidemiology: Mathematical epidemiology has been developing for more than a century. Yet the integration of population-genetics phenomena into epidemiology is relatively recent. Microbial pathogens (bacteria and viruses) are particularly interesting organisms because their short generation times and large mutation rates allow them to adapt relatively fast to changing environments. As a consequence, ecological (demographic) and evolutionary (population-genetic) processes often occur at the same pace. This raises many interesting problems.

A first challenge is the modeling of the spatial dynamics of an epidemic. A parasite can evolve during an epidemic in a new host population, either to adapt to a heterogeneous environment or because it will itself modify the environment as it invades. The applications of such studies are numerous: antibiotic management, agriculture… An aspect of this problem to which our workshop can bring a significant contribution (thanks to the diversity of its participants) is the evolution of pathogen diversity. During the large expansion produced by an epidemic, there is a loss of diversity in the invading parasites, since most pathogens originate from a few parents. The development of mathematical models for these phenomena is challenging: only a small number of pathogens are present ahead of the epidemic front, while the number of parasites rapidly becomes very large after the infection. The interaction between a stochastic micro scale and a deterministic macro scale is apparent here, and deserves a rigorous mathematical analysis.

Another interesting phenomenon is the effect of a sudden change of environment on a population of pathogens. Examples of such situations are the antibiotic treatment of an infected patient, or the transmission of a parasite to a new host species (the transmission of avian influenza to human beings, for instance). Related experiments are relatively easy to perform, and are called evolutionary rescue experiments. So far, this question has received limited attention from the mathematical community. The key is to estimate the probability that a mutant well adapted to the new environment existed in the original population, or will appear soon after the environmental change. Interactions between biologists specializing in those questions and mathematicians should lead to new mathematical problems.
