🔖 The Demon in the Machine by Paul Davies | Allen Lane (2018)

Bookmarked The Demon in the Machine by Paul Davies (Allen Lane)

How does life create order from chaos? And just what is life, anyway? Leading physicist Paul Davies argues that to find the answers, we must first answer a deeper question: 'What is information?' To understand the origins and nature of life, Davies proposes a radical vision of biology which sees the underpinnings of life as similar to circuits and electronics, arguing that life as we know it should really be considered a phenomenon of information storage. In an extraordinary deep dive into the real mechanics of what we take for granted, Davies reveals how biological processes, from photosynthesis to birds' navigation abilities, rely on quantum mechanics, and explores whether quantum physics could prove to be the secret key to all life on Earth. Lively and accessible, The Demon in the Machine boils down intricate interdisciplinary developments to take readers on an eye-opening journey towards the ultimate goal of science: unifying all theories of the living and the non-living, so that humanity can at last understand its place in the universe.

book cover of The Demon in the Machine by Paul Davies

Found via review.

👓 ‘I predict a great revolution’: inside the struggle to define life | the Guardian

Read 'I predict a great revolution': inside the struggle to define life by Ian Sample (the Guardian)
Paul Davies thinks combining physics and biology will reveal a pattern of information management
hat tip: Philip Ball

Reply to Andy Gonzalez about NIMBIOS Workshop

Replied to a tweet by Andy Gonzalez (Twitter)
Andrew Eckford et al. hosted a related conference a few months prior at BIRS which also has some great videos:

Biological and Bio-Inspired Information Theory (14w5170)

Perhaps it’s time for a follow-up conference?

🔖 Theory of Self-Reproducing Automata by John von Neumann, Arthur W. Burks (Editor) | 9780252727337

Bookmarked Theory of Self-Reproducing Automata by John von Neumann (University of Illinois Press)
Waiting for the price of some of these to drop.

Digital copy available on Archive.org.

🔖 Network medicine: a network-based approach to human disease | Albert-László Barabási, Natali Gulbahce & Joseph Loscalzo | Nature Reviews Genetics

Bookmarked Network medicine: a network-based approach to human disease by Albert-László Barabási, Natali Gulbahce & Joseph Loscalzo (Nature Reviews Genetics | volume 12, pages 56–68 (2011))

Abstract
Given the functional interdependencies between the molecular components in a human cell, a disease is rarely a consequence of an abnormality in a single gene, but reflects the perturbations of the complex intracellular and intercellular network that links tissue and organ systems. The emerging tools of network medicine offer a platform to explore systematically not only the molecular complexity of a particular disease, leading to the identification of disease modules and pathways, but also the molecular relationships among apparently distinct (patho)phenotypes. Advances in this direction are essential for identifying new disease genes, for uncovering the biological significance of disease-associated mutations identified by genome-wide association studies and full-genome sequencing, and for identifying drug targets and biomarkers for complex diseases.

Key points

  • A disease phenotype is rarely a consequence of an abnormality in a single effector gene product, but reflects various pathobiological processes that interact in a complex network.
  • Here we present an overview of the organizing principles that govern cellular networks and the implications of these principles for understanding disease. Network-based approaches have potential biological and clinical applications, from the identification of disease genes to better drug targets.
  • Whereas essential genes tend to be associated with hubs, or highly connected proteins, disease genes tend to segregate at the network's functional periphery, avoiding hubs.
  • Disease genes have a high propensity to interact with each other, forming disease modules. The identification of these disease modules can help us to identify disease pathways and predict other disease genes.
  • The highly interconnected nature of the interactome means that, at the molecular level, it is difficult to consider diseases as being independent of one another. The mapping of network-based dependencies between pathophenotypes has culminated in the concept of the diseasome, which represents disease maps whose nodes are diseases and whose links represent various molecular relationships between the disease-associated cellular components.
  • Diseases linked at the molecular level tend to show detectable comorbidity.
  • Network medicine has important applications to drug design, leading to the emergence of network pharmacology, and also in disease classification.
h/t Disconnected, fragmented, or united? a trans-disciplinary review of network science by César A. Hidalgo (Applied Network Science | SpringerLink)
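
The diseasome described in the key points above is easy to prototype. Below is a minimal sketch in Python (using networkx, with invented disease and gene names) that builds a disease-disease network by linking any two diseases that share an associated gene; real versions of this map are built from curated gene-disease association data.

```python
# A minimal sketch of the "diseasome" idea: build a disease-disease graph in
# which two diseases are linked when they share an associated gene. All disease
# and gene names here are hypothetical placeholders.
import networkx as nx
from itertools import combinations

# Hypothetical disease -> associated-gene mapping, standing in for curated
# association data.
associations = {
    "disease_A": {"gene_1", "gene_2", "gene_3"},
    "disease_B": {"gene_3", "gene_4"},
    "disease_C": {"gene_5"},
}

diseasome = nx.Graph()
for (d1, g1), (d2, g2) in combinations(associations.items(), 2):
    shared = g1 & g2
    if shared:
        # Link diseases sharing at least one gene; the edge weight records how
        # many genes they share.
        diseasome.add_edge(d1, d2, shared_genes=len(shared))

print(diseasome.edges(data=True))
# [('disease_A', 'disease_B', {'shared_genes': 1})]
```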

🔖 A Dynamic Network Approach for the Study of Human Phenotypes | PLOS Computational Biology

Bookmarked A Dynamic Network Approach for the Study of Human Phenotypes by César A. Hidalgo, Nicholas Blumm, Albert-László Barabási, Nicholas A. Christakis (PLOS Computational Biology)
Author Summary: To help the understanding of physiological failures, diseases are defined as specific sets of phenotypes affecting one or several physiological systems. Yet, the complexity of biological systems implies that our working definitions of diseases are careful discretizations of a complex phenotypic space. To reconcile the discrete nature of diseases with the complexity of biological organisms, we need to understand how diseases are connected, as connections between these different discrete categories can be informative about the mechanisms causing physiological failures. Here we introduce the Phenotypic Disease Network (PDN) as a map summarizing phenotypic connections between diseases and show that diseases progress preferentially along the links of this map. Furthermore, we show that this progression is different for patients with different genders and racial backgrounds and that patients affected by diseases that are connected to many other diseases in the PDN tend to die sooner than those affected by less connected diseases. Additionally, we have created a queryable online database (http://hudine.neu.edu/) of the 18 different datasets generated from the more than 31 million patients in this study. The disease associations can be explored online or downloaded in bulk.
h/t Disconnected, fragmented, or united? a trans-disciplinary review of network science by César A. Hidalgo (Applied Network Science | SpringerLink)
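
For a sense of what sits underneath a Phenotypic Disease Network, here is a rough sketch of one comorbidity measure commonly used in this line of work, the relative risk of co-occurrence, computed over a handful of invented patient records; the actual study draws on more than 31 million patients.

```python
# Sketch of a relative-risk comorbidity measure: RR_ij = C_ij * N / (P_i * P_j),
# where C_ij counts patients with both diseases, P_i and P_j count patients with
# each disease, and N is the total number of patients. Records below are invented.
from collections import Counter
from itertools import combinations

# Hypothetical patient records: patient id -> set of diagnosis codes.
records = {
    "p1": {"250", "401"},   # e.g. diabetes + hypertension
    "p2": {"250", "401"},
    "p3": {"401"},
    "p4": {"493"},
}

N = len(records)
prevalence = Counter(code for codes in records.values() for code in codes)
co_occurrence = Counter(
    pair for codes in records.values() for pair in combinations(sorted(codes), 2)
)

for (i, j), c_ij in co_occurrence.items():
    rr = c_ij * N / (prevalence[i] * prevalence[j])
    print(f"RR({i}, {j}) = {rr:.2f}")  # RR > 1 suggests co-occurrence beyond chance
```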

🔖 Proceedings for ALIFE 2018: The 2018 Conference on Artificial Life

Bookmarked Proceedings ALIFE 2018: The 2018 Conference on Artificial Life by Takashi Ikegami, Nathaniel Virgo, Olaf Witkowski, Mizuki Oka, Reiji Suzuki, Hiroyuki Iizuka (eds.) (MIT Press Journals)
This volume presents the proceedings of ALIFE 2018, the 2018 Conference on Artificial Life, held July 23rd-27th in Tokyo, Japan (http://2018.alife.org). The ALIFE and ECAL conferences have been the major meetings of the artificial life (ALife) research community since 1987 and 1991, respectively. As a hybrid of the European Conference on Artificial Life (ECAL) and the International Conference on the Synthesis and Simulation of Living Systems (ALIFE), the 2018 Conference on Artificial Life (ALIFE 2018) took place outside both Europe and the US, in Tokyo, Japan.

🔖 CNS*2018 Workshop on Methods of Information Theory in Computational Neuroscience

Read Information Theory in Computational Neuroscience Workshop (CNS*2018) by Joseph Lizier (lizier.me)
Methods originally developed in Information Theory have found wide applicability in computational neuroscience. Beyond these original methods there is a need to develop novel tools and approaches that are driven by problems arising in neuroscience. A number of researchers in computational/systems neuroscience and in information/communication theory are investigating problems of information representation and processing. While the goals are often the same, these researchers bring different perspectives and points of view to a common set of neuroscience problems. Often they participate in different fora and their interaction is limited. The goal of the workshop is to bring some of these researchers together to discuss challenges posed by neuroscience and to exchange ideas and present their latest work. The workshop is targeted towards computational and systems neuroscientists with interest in methods of information theory as well as information/communication theorists with interest in neuroscience.
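
As a concrete, if toy, example of the kind of method the workshop covers, the sketch below computes a naive plug-in estimate of the mutual information between a binary stimulus and a spike count from made-up trial data; real analyses have to contend with estimator bias and far richer response spaces.

```python
# Naive "plug-in" estimate of the mutual information between a binary stimulus
# and a spike count, computed from the joint histogram of invented trial data.
# Plug-in estimates are biased for small samples; this is purely illustrative.
import math
from collections import Counter

stimulus =    [0, 0, 0, 0, 1, 1, 1, 1]
spike_count = [0, 1, 0, 1, 2, 3, 2, 3]   # hypothetical responses per trial

n = len(stimulus)
joint = Counter(zip(stimulus, spike_count))
p_s = Counter(stimulus)
p_r = Counter(spike_count)

mi = sum(
    (c / n) * math.log2((c / n) / ((p_s[s] / n) * (p_r[r] / n)))
    for (s, r), c in joint.items()
)
print(f"I(S;R) = {mi:.2f} bits")  # 1.00 here: the response fully identifies the stimulus
```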

👓 The role of information theory in chemistry | Chemistry World

Read The role of information theory in chemistry by Philip Ball (Chemistry World)
Is chemistry an information science after all?
Discussion of some potential interesting directions for application of information theory to chemistry (and biology).

In the 1990s, Nobel laureate Jean-Marie Lehn argued that the principles of spontaneous self-assembly and self-organisation, which he had helped to elucidate in supramolecular chemistry, could give rise to a science of ‘informed matter’ beyond the molecule.

👓 Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells | Complexity Explorer | Santa Fe Institute

Read Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells (Complexity Explorer | Santa Fe Institute)

About the Course:

Probability and statistics have long helped scientists make sense of data about the natural world — to find meaningful signals in the noise. But classical statistics prove a little threadbare in today’s landscape of large datasets, which are driving new insights in disciplines ranging from biology to ecology to economics. It's as true in biology, with the advent of genome sequencing, as it is in astronomy, with telescope surveys charting the entire sky.

The data have changed. Maybe it's time our data analysis tools did, too.
During this three-month online course, starting June 11th, instructors Hector Zenil and Narsis Kiani will introduce students to concepts from the exciting new field of Algorithmic Information Dynamics to search for solutions to fundamental questions about causality — that is, why a particular set of circumstances leads to a particular outcome.

Algorithmic Information Dynamics (or Algorithmic Dynamics for short) is a new type of discrete calculus based on computer programming that studies causation by generating mechanistic models, with the aim of finding first principles of physical phenomena and building up the next generation of machine learning.
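
For intuition about the "algorithmic" part, the toy sketch below uses lossless compression as a crude proxy for algorithmic (Kolmogorov) complexity. This is not the Coding Theorem / Block Decomposition machinery the course itself teaches, just an illustration that structured strings admit much shorter descriptions than random ones.

```python
# Crude illustration of the intuition behind algorithmic complexity: a structured
# string compresses far better than a random one. Compressed length is only a
# rough upper-bound proxy for Kolmogorov complexity, and is NOT the method used
# in the course itself.
import random
import zlib

random.seed(0)
structured = "01" * 500                                          # highly regular
random_str = "".join(random.choice("01") for _ in range(1000))   # algorithmically rich

for label, s in [("structured", structured), ("random", random_str)]:
    compressed = len(zlib.compress(s.encode()))
    print(f"{label}: {len(s)} chars -> {compressed} bytes compressed")
```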

The course covers key aspects from graph theory and network science, information theory, dynamical systems and algorithmic complexity. It will venture into ongoing research in fundamental science and its applications to behavioral, evolutionary and molecular biology.

Prerequisites:
Students should have basic knowledge of college-level math or physics, though optional sessions will help students with more technical concepts. Basic computer programming skills are also desirable, though not required. The course does not require students to adopt any particular programming language; the Wolfram Language will mostly be used, and the instructors will share a lot of code written in this language that students will be able to use, study, and adapt for their own purposes.

Course Outline:

  • The course will begin with a conceptual overview of the field.
  • Then it will review foundational theories like basic concepts of statistics and probability, notions of computability and algorithmic complexity, and brief introductions to graph theory and dynamical systems.
  • Finally, the course explores new measures and tools related to reprogramming artificial and biological systems. It will showcase the tools and framework in applications to systems biology, genetic networks and cognition by way of behavioral sequences.
  • Students will be able to apply the tools to their own data and problems. The instructors will explain in detail how to do this, and will provide all the tools and code to do so.

The course runs 11 June through 03 September 2018.

Tuition is $50, which is required for access to the course material during the course and for a certificate at the end. The course is free to watch, but if no fee is paid the materials will not be available until the course closes. Donations are highly encouraged and appreciated in support of SFI's Complexity Explorer so it can continue offering new courses.

In addition to all course materials tuition includes:

  • Six-month access to the Wolfram|One platform (potentially renewable for another six months), worth 150 to 300 USD.
  • Free digital copy of the course textbook to be published by Cambridge University Press.
  • Several gifts will be given away to the top students finishing the course; check the FAQ page for more details.

Students with the best final projects will be invited to expand their results and submit them to the journal Complex Systems, the first journal in the field, founded by Stephen Wolfram in 1987.

About the Instructor(s):

Hector Zenil has a PhD in Computer Science from the University of Lille 1 and a PhD in Philosophy and Epistemology from the Pantheon-Sorbonne University of Paris. He co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. He is also the head of the Algorithmic Nature Group at LABoRES, the Paris-based lab that started the Online Algorithmic Complexity Calculator and the Human Randomness Perception and Generation Project. Previously, he was a Research Associate at the Behavioural and Evolutionary Theory Lab at the Department of Computer Science at the University of Sheffield in the UK before joining the Department of Computer Science, University of Oxford as a faculty member and senior researcher.

Narsis Kiani has a PhD in Mathematics and has been a postdoctoral researcher at Dresden University of Technology and at the University of Heidelberg in Germany. She has been a VINNOVA Marie Curie Fellow and Assistant Professor in Sweden. She co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. Narsis is also a member of the Algorithmic Nature Group, LABoRES.

Hector and Narsis are the leaders of the Algorithmic Dynamics Lab at the Unit of Computational Medicine at Karolinska Institute.

TA:
Alyssa Adams has a PhD in Physics from Arizona State University and studies what makes living systems different from non-living ones. She currently works at Veda Data Solutions as a data scientist and researcher in social complex systems that are represented by large datasets. She completed an internship at Microsoft Research, Cambridge, UK studying machine learning agents in Minecraft, which is an excellent arena for simple and advanced tasks related to living and social activity. Alyssa is also a member of the Algorithmic Nature Group, LABoRES.

The development of the course and material offered has been supported by: 

  • The Foundational Questions Institute (FQXi)
  • Wolfram Research
  • John Templeton Foundation
  • Santa Fe Institute
  • Swedish Research Council (Vetenskapsrådet)
  • Algorithmic Nature Group, LABoRES for the Natural and Digital Sciences
  • Living Systems Lab, King Abdullah University of Science and Technology.
  • Department of Computer Science, Oxford University
  • Cambridge University Press
  • London Mathematical Society
  • Springer Verlag
  • ItBit for the Natural and Computational Sciences and, of course,
  • the Algorithmic Dynamics lab, Unit of Computational Medicine, SciLifeLab, Center for Molecular Medicine, The Karolinska Institute


Course dates: 11 Jun 2018 9pm PDT to 03 Sep 2018 10pm PDT


Syllabus

  1. A Computational Approach to Causality
  2. A Brief Introduction to Graph Theory and Biological Networks
  3. Elements of Information Theory and Computability
  4. Randomness and Algorithmic Complexity
  5. Dynamical Systems as Models of the World
  6. Practice, Technical Skills and Selected Topics
  7. Algorithmic Information Dynamics and Reprogrammability
  8. Applications to Behavioural, Evolutionary and Molecular Biology

FAQ

Another interesting course from the SFI. Looks like a great way to spend the summer.

Following Michael Levin

Followed Michael Levin (ase.tufts.edu)

Investigating information storage and processing in biological systems

We work on novel ways to understand and control complex pattern formation. We use techniques of molecular genetics, biophysics, and computational modeling to address large-scale control of growth and form. We work in whole frogs and flatworms, and sometimes zebrafish and human tissues in culture. Our projects span regeneration, embryogenesis, cancer, and learning plasticity – all examples of how cellular networks process information. In all of these efforts, our goal is not only to understand the molecular mechanisms necessary for morphogenesis, but also to uncover and exploit the cooperative signaling dynamics that enable complex bodies to build and remodel themselves toward a correct structure. Our major goal is to understand how individual cell behaviors are orchestrated towards appropriate large-scale outcomes despite unpredictable environmental perturbations.

👓 How Many Genes Do Cells Need? Maybe Almost All of Them | Quanta Magazine

Read How Many Genes Do Cells Need? Maybe Almost All of Them (Quanta Magazine)
An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.
There could be some interesting data to play with here if available.

I also can’t help but wonder about applying some of Stuart Kauffman’s ideas to something like this. In particular, this sounds very reminiscent of his analogy of what happens when one strings threads randomly among a pile of buttons and the complexity that results.
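
For anyone who hasn't met the buttons-and-threads analogy, it is easy to simulate: keep tying random threads between buttons and the largest connected cluster suddenly balloons once the thread-to-button ratio passes about one half. A quick sketch (using networkx, with arbitrary sizes):

```python
# Simulation of Kauffman's buttons-and-threads picture: add random threads
# (edges) between buttons (nodes) and watch the largest connected cluster jump
# in size once the thread-to-button ratio passes roughly 0.5.
import random
import networkx as nx

random.seed(1)
n_buttons = 1000
g = nx.empty_graph(n_buttons)

for step in range(1, n_buttons + 1):
    g.add_edge(random.randrange(n_buttons), random.randrange(n_buttons))
    if step % 200 == 0:
        giant = max(nx.connected_components(g), key=len)
        print(f"threads/buttons = {step / n_buttons:.1f}, "
              f"largest cluster = {len(giant) / n_buttons:.0%}")
```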

👓 Living Bits: Information and the Origin of Life | PBS

Highlights, Quotes, & Marginalia

our existence can succinctly be described as “information that can replicate itself,” the immediate follow-up question is, “Where did this information come from?”

from an information perspective, only the first step in life is difficult. The rest is just a matter of time.

Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival.

There are some plants with huge amounts of DNA compared to their “peers”; perhaps these would be interesting test cases for experimenting with this idea?
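
The information-increase argument in the highlighted passage is also simple enough to play with in a toy simulation. In the sketch below (all parameters arbitrary), genomes are bit strings, the "environment" is a hidden target string, fitness is the number of bits predicted correctly, and mutation plus selection reliably pushes the population's average information about the environment upward.

```python
# Toy sandbox for the information-increase argument: genomes are bit strings,
# the environment is a hidden target, and fitness counts correctly "predicted"
# bits. Under mutation and selection the average tends to rise.
import random

random.seed(42)
L, POP, GENERATIONS, MU = 32, 100, 60, 0.02
target = [random.randint(0, 1) for _ in range(L)]                     # the "environment"
population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]

def fitness(genome):
    # Bits matching the target stand in for information about the environment.
    return sum(g == t for g, t in zip(genome, target))

for gen in range(GENERATIONS):
    # Select parents in proportion to fitness, then copy each with mutation.
    parents = random.choices(population, weights=[fitness(g) for g in population], k=POP)
    population = [
        [1 - b if random.random() < MU else b for b in parent]
        for parent in parents
    ]
    if gen % 20 == 0:
        avg = sum(fitness(g) for g in population) / POP
        print(f"generation {gen}: average correct bits = {avg:.1f} / {L}")
```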

🔖 [1801.06022] Reconstruction Codes for DNA Sequences with Uniform Tandem-Duplication Errors | arXiv

Bookmarked Reconstruction Codes for DNA Sequences with Uniform Tandem-Duplication Errors by Yonatan Yehezkeally and Moshe Schwartz (arxiv.org)
DNA as a data storage medium has several advantages, including far greater data density compared to electronic media. We propose that schemes for data storage in the DNA of living organisms may benefit from studying the reconstruction problem, which is applicable whenever multiple reads of noisy data are available. This strategy is uniquely suited to the medium, which inherently replicates stored data in multiple distinct ways, caused by mutations. We consider noise introduced solely by uniform tandem-duplication, and utilize the relation to constant-weight integer codes in the Manhattan metric. By bounding the intersection of the cross-polytope with hyperplanes, we prove the existence of reconstruction codes with greater capacity than known error-correcting codes, which we can determine analytically for any set of parameters.

📖 Read pages 19-52 of The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane

📖 Read Chapter 1: What is Life? pages 19-52 in The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane (W.W. Norton, ISBN: 978-0393088816)

Lane lays out a “brief” history of the 4 billion years of life on Earth. He discusses isotopic fractionation and other evidence that essentially shows a bottleneck between bacteria and archaea (procaryotes) on the one hand and eucaryotes on the other, the latter of which must all have had a single common ancestor based on the genetic profiles we currently see. He suggests that while we should see even more diversity of complex life, we do not, and he hints at the end of the chapter that the reason is energy.

In general, it’s much easier to follow than I anticipated it might be. His writing style is lucid and fluid and he has some lovely prose not often seen in books of this sort. It’s quite a pleasure to read. Additionally he’s doing a very solid job of building an argument in small steps.

I’m watching closely how he’s repeatedly using the word information in his descriptions, and it seems to be a much more universal and colloquial usage than the more technical one, but something interesting may come out of it given my philosophical leanings. I can’t wait to get further into the book to see how things develop.

book cover of Nick Lane's The Vital Question
The Vital Question: Energy, Evolution and the Origins of Complex Life by Nick Lane