👓 The role of information theory in chemistry | Chemistry World

Read The role of information theory in chemistry by Philip Ball (Chemistry World)
Is chemistry an information science after all?

Discussion of some potentially interesting directions for applying information theory to chemistry (and biology).

In the 1990s, Nobel laureate Jean-Marie Lehn argued that the principles of spontaneous self-assembly and self-organisation, which he had helped to elucidate in supramolecular chemistry, could give rise to a science of ‘informed matter’ beyond the molecule.

Syndicated copies to:

👓 Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells | Complexity Explorer | Santa Fe Institute

Read Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells (Complexity Explorer | Santa Fe Institute)

About the Course:

Probability and statistics have long helped scientists make sense of data about the natural world — to find meaningful signals in the noise. But classical statistics prove a little threadbare in today’s landscape of large datasets, which are driving new insights in disciplines ranging from biology to ecology to economics. It's as true in biology, with the advent of genome sequencing, as it is in astronomy, with telescope surveys charting the entire sky.

The data have changed. Maybe it's time our data analysis tools did, too.
During this three-month online course, starting June 11th, instructors Hector Zenil and Narsis Kiani will introduce students to concepts from the exciting new field of Algorithmic Information Dynamics to search for solutions to fundamental questions about causality — that is, why a particular set of circumstances leads to a particular outcome.

Algorithmic Information Dynamics (or Algorithmic Dynamics for short) is a new type of discrete calculus, based on computer programming, that studies causation by generating mechanistic models, with the aim of finding first principles of physical phenomena and building up the next generation of machine learning.

The course covers key aspects from graph theory and network science, information theory, dynamical systems and algorithmic complexity. It will venture into ongoing research in fundamental science and its applications to behavioral, evolutionary and molecular biology.
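Algorithmic (Kolmogorov) complexity, one of the course's core topics, is uncomputable in general, so in practice it is approximated. As a minimal sketch of one common proxy — compressed length as an upper bound — here is an illustration in Python (my own example, not the instructors' Block Decomposition Method):

```python
import random
import zlib

def compression_complexity(s: str) -> int:
    """Approximate algorithmic complexity by compressed length (an upper-bound proxy)."""
    return len(zlib.compress(s.encode()))

random.seed(0)
regular = "01" * 500                                       # highly patterned string
noisy = "".join(random.choice("01") for _ in range(1000))  # pseudo-random string

# The patterned string admits a short description, so it compresses far better.
print(compression_complexity(regular) < compression_complexity(noisy))  # True
```

A truly random string has no description much shorter than itself, which is why compressors make a serviceable, if crude, stand-in for the uncomputable ideal.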

Prerequisites:
Students should have basic knowledge of college-level math or physics, though optional sessions will help students with the more technical concepts. Basic computer programming skills are also desirable, though not required. The course does not require students to adopt any particular programming language, but the Wolfram Language will be used most heavily, and the instructors will share a lot of code written in it that students will be able to use, study and adapt for their own purposes.

Course Outline:

  • The course will begin with a conceptual overview of the field.
  • Then it will review foundational theories like basic concepts of statistics and probability, notions of computability and algorithmic complexity, and brief introductions to graph theory and dynamical systems.
  • Finally, the course explores new measures and tools related to reprogramming artificial and biological systems. It will showcase the tools and framework in applications to systems biology, genetic networks and cognition by way of behavioral sequences.
  • Students will be able to apply the tools to their own data and problems. The instructors will explain in detail how to do this, and will provide all the tools and code to do so.

The course runs 11 June through 03 September 2018.

Tuition is $50, which is required for access to the course material during the course and a certificate at the end. The course is free to watch, but if no fee is paid, materials will not be available until the course closes. Donations are highly encouraged and appreciated in support of SFI's Complexity Explorer so it can continue offering new courses.

In addition to all course materials tuition includes:

  • Six-month access to the Wolfram|One platform (potentially renewable for another six months), worth $150 to $300.
  • Free digital copy of the course textbook to be published by Cambridge University Press.
  • Several gifts will be given away to the top students finishing the course; check the FAQ page for more details.

Authors of the best final projects will be invited to expand their results and submit them to the journal Complex Systems, the first journal in the field, founded by Stephen Wolfram in 1987.

About the Instructor(s):

Hector Zenil has a PhD in Computer Science from the University of Lille 1 and a PhD in Philosophy and Epistemology from the Pantheon-Sorbonne University of Paris. He co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. He is also the head of the Algorithmic Nature Group at LABoRES, the Paris-based lab that started the Online Algorithmic Complexity Calculator and the Human Randomness Perception and Generation Project. He was a Research Associate at the Behavioural and Evolutionary Theory Lab in the Department of Computer Science at the University of Sheffield in the UK before joining the Department of Computer Science, University of Oxford as a faculty member and senior researcher.

Narsis Kiani has a PhD in Mathematics and has been a postdoctoral researcher at Dresden University of Technology and at the University of Heidelberg in Germany. She has been a VINNOVA Marie Curie Fellow and Assistant Professor in Sweden. She co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. Narsis is also a member of the Algorithmic Nature Group, LABoRES.

Hector and Narsis are the leaders of the Algorithmic Dynamics Lab at the Unit of Computational Medicine at Karolinska Institute.

TA:
Alyssa Adams has a PhD in Physics from Arizona State University and studies what makes living systems different from non-living ones. She currently works at Veda Data Solutions as a data scientist and researcher in social complex systems that are represented by large datasets. She completed an internship at Microsoft Research, Cambridge, UK studying machine learning agents in Minecraft, which is an excellent arena for simple and advanced tasks related to living and social activity. Alyssa is also a member of the Algorithmic Nature Group, LABoRES.

The development of the course and material offered has been supported by: 

  • The Foundational Questions Institute (FQXi)
  • Wolfram Research
  • John Templeton Foundation
  • Santa Fe Institute
  • Swedish Research Council (Vetenskapsrådet)
  • Algorithmic Nature Group, LABoRES for the Natural and Digital Sciences
  • Living Systems Lab, King Abdullah University of Science and Technology.
  • Department of Computer Science, Oxford University
  • Cambridge University Press
  • London Mathematical Society
  • Springer Verlag
  • ItBit for the Natural and Computational Sciences and, of course,
  • the Algorithmic Dynamics lab, Unit of Computational Medicine, SciLifeLab, Center for Molecular Medicine, The Karolinska Institute


Course dates: 11 Jun 2018 9pm PDT to 03 Sep 2018 10pm PDT


Syllabus

  1. A Computational Approach to Causality
  2. A Brief Introduction to Graph Theory and Biological Networks
  3. Elements of Information Theory and Computability
  4. Randomness and Algorithmic Complexity
  5. Dynamical Systems as Models of the World
  6. Practice, Technical Skills and Selected Topics
  7. Algorithmic Information Dynamics and Reprogrammability
  8. Applications to Behavioural, Evolutionary and Molecular Biology


Another interesting course from the SFI. Looks like an interesting way to spend the summer.


Following Michael Levin

Followed Michael Levin (ase.tufts.edu)

Investigating information storage and processing in biological systems

We work on novel ways to understand and control complex pattern formation. We use techniques of molecular genetics, biophysics, and computational modeling to address large-scale control of growth and form. We work in whole frogs and flatworms, and sometimes zebrafish and human tissues in culture. Our projects span regeneration, embryogenesis, cancer, and learning plasticity – all examples of how cellular networks process information. In all of these efforts, our goal is not only to understand the molecular mechanisms necessary for morphogenesis, but also to uncover and exploit the cooperative signaling dynamics that enable complex bodies to build and remodel themselves toward a correct structure. Our major goal is to understand how individual cell behaviors are orchestrated towards appropriate large-scale outcomes despite unpredictable environmental perturbations.


👓 How Many Genes Do Cells Need? Maybe Almost All of Them | Quanta Magazine

Read How Many Genes Do Cells Need? Maybe Almost All of Them (Quanta Magazine)
An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.

There could be some interesting data to play with here if available.

I also can’t help but wonder about applying some of Stuart Kauffman’s ideas to something like this. In particular, this sounds very reminiscent of his analogy of what happens when one strings thread randomly among a pile of buttons and the resulting complexity.
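Kauffman's buttons-and-threads image is essentially the Erdős–Rényi random-graph phase transition: once the number of threads exceeds about half the number of buttons, a giant connected cluster abruptly appears. A quick sketch of the experiment (my own illustrative code, using a simple union-find):

```python
import random
from collections import Counter

def largest_cluster(n_buttons: int, n_threads: int, seed: int = 0) -> int:
    """Tie threads between random button pairs; return the size of the largest cluster."""
    random.seed(seed)
    parent = list(range(n_buttons))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(n_threads):
        a, b = random.randrange(n_buttons), random.randrange(n_buttons)
        parent[find(a)] = find(b)

    return max(Counter(find(i) for i in range(n_buttons)).values())

# Below the threshold clusters stay small; above it most buttons join one cluster.
print(largest_cluster(1000, 250), largest_cluster(1000, 750))
```

Lifting one button below the threshold drags up a handful of others; above it, lifting one button brings most of the pile with it — Kauffman's point about sudden emergent connectivity.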


👓 Living Bits: Information and the Origin of Life | PBS

Read Living Bits: Information and the Origin of Life (PBS)

Highlights, Quotes, & Marginalia

our existence can succinctly be described as “information that can replicate itself,” the immediate follow-up question is, “Where did this information come from?”

from an information perspective, only the first step in life is difficult. The rest is just a matter of time.

Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival.
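The excerpt's argument can be demonstrated with a toy simulation (my own illustrative sketch, not from the article): genomes are bit strings, the "environment" is a fixed target string, and fitness counts correctly "predicted" bits. Under mutation and fitness-proportional selection, the information the population stores about its environment climbs well above the random baseline of 50%:

```python
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS, MU = 64, 100, 200, 0.01
ENV = [random.randint(0, 1) for _ in range(GENOME_LEN)]  # fixed "facts" about the world

def fitness(genome: list) -> int:
    """Number of environment bits the genome 'predicts' correctly."""
    return sum(g == e for g, e in zip(genome, ENV))

def random_pop() -> list:
    return [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

def evolve(pop: list) -> list:
    for _ in range(GENERATIONS):
        # Selection: parents drawn with probability proportional to fitness.
        parents = random.choices(pop, weights=[fitness(g) for g in pop], k=POP_SIZE)
        # Mutation: each bit flips independently with probability MU.
        pop = [[1 - b if random.random() < MU else b for b in g] for g in parents]
    return pop

mean_fit = lambda pop: sum(map(fitness, pop)) / len(pop)
start = random_pop()
print(f"matching bits: random ~{mean_fit(start):.1f}/64, evolved ~{mean_fit(evolve(start)):.1f}/64")
```

The mechanism matches the quoted reasoning: mutations that lose predictive bits are purged by selection, while mutations that gain them spread, so information about the environment accumulates on average.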

There are some plants with huge amounts of DNA compared to their “peers”; perhaps these would make interesting test cases for experimentation along these lines?


🔖 [1801.06022] Reconstruction Codes for DNA Sequences with Uniform Tandem-Duplication Errors | arXiv

Bookmarked Reconstruction Codes for DNA Sequences with Uniform Tandem-Duplication Errors by Yonatan Yehezkeally and Moshe Schwartz (arxiv.org)
DNA as a data storage medium has several advantages, including far greater data density compared to electronic media. We propose that schemes for data storage in the DNA of living organisms may benefit from studying the reconstruction problem, which is applicable whenever multiple reads of noisy data are available. This strategy is uniquely suited to the medium, which inherently replicates stored data in multiple distinct ways, caused by mutations. We consider noise introduced solely by uniform tandem-duplication, and utilize the relation to constant-weight integer codes in the Manhattan metric. By bounding the intersection of the cross-polytope with hyperplanes, we prove the existence of reconstruction codes with greater capacity than known error-correcting codes, which we can determine analytically for any set of parameters.
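To make the noise model concrete: a uniform tandem duplication of length k takes a substring of length k and inserts a copy of it immediately after itself, so every duplication grows the sequence by exactly k symbols. A hypothetical sketch of the channel and the multiple-reads setting (illustrative only, not the authors' construction):

```python
import random

def tandem_duplicate(seq: str, k: int, pos: int) -> str:
    """Insert a copy of seq[pos:pos+k] immediately after it (one tandem duplication)."""
    assert 0 <= pos <= len(seq) - k
    return seq[:pos + k] + seq[pos:pos + k] + seq[pos + k:]

def noisy_reads(seq: str, k: int, n_dups: int, n_reads: int, seed: int = 0) -> list:
    """Generate independent reads, each corrupted by n_dups uniform k-duplications."""
    random.seed(seed)
    reads = []
    for _ in range(n_reads):
        s = seq
        for _ in range(n_dups):
            s = tandem_duplicate(s, k, random.randrange(len(s) - k + 1))
        reads.append(s)
    return reads

print(tandem_duplicate("ACGTAC", 2, 1))  # "CG" duplicated -> "ACGCGTAC"
```

The reconstruction problem the paper studies asks how many such independent noisy reads suffice to recover the original sequence exactly.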

📖 Read pages 19-52 of The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane

📖 Read Chapter 1: What is Life? pages 19-52 in The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane (W.W. Norton, ISBN: 978-0393088816)

Lane lays out a “brief” history of the 4 billion years of life on Earth. He discusses isotopic fractionation and other evidence that essentially shows a bottleneck between bacteria and archaea (prokaryotes) on the one hand and eukaryotes on the other, the latter of which must all have had a single common ancestor based on the genetic profiles we currently see. He suggests that while we should see even more diversity of complex life, we do not, and he hints at the end of the chapter that the reason is energy.

In general, it’s much easier to follow than I anticipated it might be. His writing style is lucid and fluid and he has some lovely prose not often seen in books of this sort. It’s quite a pleasure to read. Additionally he’s doing a very solid job of building an argument in small steps.

I’m watching closely how he’s repeatedly using the word information in his descriptions, and it seems to be a much more universal and colloquial version than the more technical version, but something interesting may come out of it from my philosophical leanings. I can’t wait to get further into the book to see how things develop.

book cover of Nick Lane's The Vital Question
The Vital Question: Energy, Evolution and the Origins of Complex Life by Nick Lane

📗 Started reading The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane

📗 Started reading pages 1-18, Introduction: Why is Life the Way it is?, in The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane

A quick, but interesting peek into where he intends to go. He lays out some quick background here in the opening. He’s generally a very lucid writer so far. Can’t wait to get in further.

Some may feel like some of the terminology is a hurdle in the opening, so I hope he circles around to define some of his terms a bit better for the audience I suspect he’s trying to reach.

book cover of Nick Lane's The Vital Question
The Vital Question: Energy, Evolution and the Origins of Complex Life by Nick Lane

📕 Read pages 381-461 of Origin by Dan Brown

📕 Read pages 381-461 to finish reading Origin: A Novel by Dan Brown

This last section got pretty heavy into evolution and touched on ideas from information theory applied to biology and complexity, but never mentioned them by name. Surprisingly, he mentioned Jeremy England by name! He nibbled around the edges of the field to tie up the plot, but there are some reasonable philosophical questions hiding here at the end of the book that I’ll have to pull into a lengthier review.

🔖 Self-Organized Resonance during Search of a Diverse Chemical Space

Bookmarked Self-Organized Resonance during Search of a Diverse Chemical Space (Physical Review Letters)
ABSTRACT Recent studies of active matter have stimulated interest in the driven self-assembly of complex structures. Phenomenological modeling of particular examples has yielded insight, but general thermodynamic principles unifying the rich diversity of behaviors observed have been elusive. Here, we study the stochastic search of a toy chemical space by a collection of reacting Brownian particles subject to periodic forcing. We observe the emergence of an adaptive resonance in the system matched to the drive frequency, and show that the increased work absorption by these resonant structures is key to their stabilization. Our findings are consistent with a recently proposed thermodynamic mechanism for far-from-equilibrium self-organization.

Suggested by First Support for a Physics Theory of Life in Quanta Magazine.


🔖 Spontaneous fine-tuning to environment in many-species chemical reaction networks | PNAS

Bookmarked Spontaneous fine-tuning to environment in many-species chemical reaction networks (Proceedings of the National Academy of Sciences)
Significance A qualitatively more diverse range of possible behaviors emerge in many-particle systems once external drives are allowed to push the system far from equilibrium; nonetheless, general thermodynamic principles governing nonequilibrium pattern formation and self-assembly have remained elusive, despite intense interest from researchers across disciplines. Here, we use the example of a randomly wired driven chemical reaction network to identify a key thermodynamic feature of a complex, driven system that characterizes the “specialness” of its dynamical attractor behavior. We show that the network’s fixed points are biased toward the extremization of external forcing, causing them to become kinetically stabilized in rare corners of chemical space that are either atypically weakly or strongly coupled to external environmental drives. Abstract A chemical mixture that continually absorbs work from its environment may exhibit steady-state chemical concentrations that deviate from their equilibrium values. Such behavior is particularly interesting in a scenario where the environmental work sources are relatively difficult to access, so that only the proper orchestration of many distinct catalytic actors can power the dissipative flux required to maintain a stable, far-from-equilibrium steady state. In this article, we study the dynamics of an in silico chemical network with random connectivity in an environment that makes strong thermodynamic forcing available only to rare combinations of chemical concentrations. We find that the long-time dynamics of such systems are biased toward states that exhibit a fine-tuned extremization of environmental forcing.

Suggested by First Support for a Physics Theory of Life in Quanta Magazine.


👓 First Support for a Physics Theory of Life | Quanta Magazine

Read First Support for a Physics Theory of Life by Natalie Wolchover (Quanta Magazine)
Take chemistry, add energy, get life. The first tests of Jeremy England’s provocative origin-of-life hypothesis are in, and they appear to show how order can arise from nothing.

Interesting article with some great references I’ll need to delve into and read.


The situation changed in the late 1990s, when the physicists Gavin Crooks and Chris Jarzynski derived “fluctuation theorems” that can be used to quantify how much more often certain physical processes happen than reverse processes. These theorems allow researchers to study how systems evolve — even far from equilibrium.
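For reference, the Crooks fluctuation theorem mentioned here relates the work distribution of a driving protocol to that of its time reverse, with β = 1/k_BT and ΔF the free-energy difference between the endpoint equilibrium states:

```latex
% Crooks fluctuation theorem: forward vs. time-reversed work distributions
\frac{P_F(+W)}{P_R(-W)} = e^{\beta\,(W - \Delta F)}
```

Averaging e^{-βW} over the forward distribution recovers the Jarzynski equality, ⟨e^{-βW}⟩ = e^{-βΔF}, which is what lets these results be applied arbitrarily far from equilibrium.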

I want to take a look at these papers, as well as the ones the article is directly about.


Any claims that it has to do with biology or the origins of life, he added, are “pure and shameless speculations.”

Some truly harsh words from his former supervisor? Wow!


maybe there’s more that you can get for free

Most of what’s here in this article (and likely in the underlying papers) sounds to me to have been heavily influenced by the writings of W. Loewenstein and S. Kauffman. They’ve laid out some models/ideas that need more rigorous testing and work, and this seems like a reasonable start to the process. The “get for free” phrase itself is very S. Kauffman in my mind. I’m curious how many times it appears in his work?


📺 A Universal Theory of Life: Math, Art & Information by Sara Walker

Watched A Universal Theory of Life: Math, Art & Information from TEDxASU
Dr. Walker introduces the concept of information, then proposes that information may be a necessity for biological complexity in this thought-provoking talk on the origins of life. Sara is a theoretical physicist and astrobiologist, researching the origins and nature of life. She is particularly interested in addressing the question of whether or not “other laws of physics” might govern life, as first posed by Erwin Schrödinger in his famous book What is Life?. She is currently an Assistant Professor in the School of Earth and Space Exploration and Beyond Center for Fundamental Concepts in Science at Arizona State University. She is also a Fellow of the ASU-Santa Fe Institute Center for Biosocial Complex Systems, Founder of the astrobiology-themed social website SAGANet.org, and a member of the Board of Directors of Blue Marble Space. She is active in public engagement in science, with recent appearances on “Through the Wormhole” and NPR’s Science Friday.

Admittedly, she only had a few short minutes, but it would have been nice if she’d started out with a precise definition of information. I suspect the majority of her audience didn’t know the definition with which she’s working and it would have helped focus the talk.

Her description of Spiegelman’s Monster was relatively interesting and not often seen in much of the literature that covers these areas.

I wouldn’t rate this very highly as a TED talk, as it wasn’t as condensed and simplistic as most, nor was it as hyper-focused, but then again condensing this area into 11 minutes is far from a simple task. I do love that she’s excited enough about the topic that she almost sounds a little out of breath towards the end.

There’s an excellent Eddington quote I’ve mentioned before that would have been apropos to have opened up her presentation that might have brought things into higher relief given her talk title:

Suppose that we were asked to arrange the following in two categories–

distance, mass, electric force, entropy, beauty, melody.

I think there are the strongest grounds for placing entropy alongside beauty and melody and not with the first three.

Sir Arthur Stanley Eddington, OM, FRS (1882-1944), a British astronomer, physicist, and mathematician
in The Nature of the Physical World, 1927



🔖 Can entropy be defined for and the Second Law applied to the entire universe? by Arieh Ben-Naim | Arxiv

Bookmarked Can entropy be defined for and the Second Law applied to the entire universe? (arXiv)
This article provides answers to the two questions posed in the title. It is argued that, contrary to many statements made in the literature, neither entropy nor the Second Law may be used for the entire universe. The origin of this misuse of entropy and the Second Law may be traced back to Clausius himself. More recent (erroneous) justifications are also discussed.