👓 The role of information theory in chemistry | Chemistry World

Read The role of information theory in chemistry by Philip Ball (Chemistry World)
Is chemistry an information science after all?

Discussion of some potentially interesting directions for applying information theory to chemistry (and biology).

In the 1990s, Nobel laureate Jean-Marie Lehn argued that the principles of spontaneous self-assembly and self-organisation, which he had helped to elucidate in supramolecular chemistry, could give rise to a science of ‘informed matter’ beyond the molecule.


👓 Andrew Jordan reviews Peter Woit’s Quantum Theory, Groups and Representations and finds much to admire. | Inference

Read Woit’s Way by Andrew Jordan (Inference: International Review of Science)
Andrew Jordan reviews Peter Woit's Quantum Theory, Groups and Representations and finds much to admire.

For the tourists, I’ve noted before that Peter maintains a free copy of his new textbook on his website.

I also don’t think I’ve ever come across the journal Inference before, but it looks quite nice in terms of content and editorial quality.


👓 Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells | Complexity Explorer | Santa Fe Institute

Read Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells (Complexity Explorer | Santa Fe Institute)

About the Course:

Probability and statistics have long helped scientists make sense of data about the natural world — to find meaningful signals in the noise. But classical statistics prove a little threadbare in today’s landscape of large datasets, which are driving new insights in disciplines ranging from biology to ecology to economics. It's as true in biology, with the advent of genome sequencing, as it is in astronomy, with telescope surveys charting the entire sky.

The data have changed. Maybe it's time our data analysis tools did, too.
During this three-month online course, starting June 11th, instructors Hector Zenil and Narsis Kiani will introduce students to concepts from the exciting new field of Algorithmic Information Dynamics in the search for solutions to fundamental questions about causality: that is, why a particular set of circumstances leads to a particular outcome.

Algorithmic Information Dynamics (or Algorithmic Dynamics for short) is a new type of discrete calculus, based on computer programming, that studies causation by generating mechanistic models, with the aim of finding first principles of physical phenomena and building the next generation of machine learning.

The course covers key aspects from graph theory and network science, information theory, dynamical systems and algorithmic complexity. It will venture into ongoing research in fundamental science and its applications to behavioral, evolutionary and molecular biology.

Prerequisites:
Students should have basic knowledge of college-level math or physics, though optional sessions will help students with the more technical concepts. Basic computer programming skills are also desirable, though not required. The course does not require students to adopt any particular programming language, but the Wolfram Language will be used most often, and the instructors will share a good deal of code written in it that students will be able to use, study, and adapt for their own purposes.

Course Outline:

  • The course will begin with a conceptual overview of the field.
  • Then it will review foundational theories like basic concepts of statistics and probability, notions of computability and algorithmic complexity, and brief introductions to graph theory and dynamical systems.
  • Finally, it will explore new measures and tools related to reprogramming artificial and biological systems. It will showcase the tools and framework in applications to systems biology, genetic networks and cognition by way of behavioral sequences.
  • Students will be able to apply the tools to their own data and problems. The instructors will explain in detail how to do this and will provide all the tools and code needed to do so.

The course runs 11 June through 03 September 2018.

Tuition is $50, which is required to access the course materials during the course and to receive a certificate at the end. The course is free to watch, but if no fee is paid the materials will not be available until the course closes. Donations are highly encouraged and appreciated in support of SFI's Complexity Explorer so it can continue offering new courses.

In addition to all course materials, tuition includes:

  • Six months of access to the Wolfram|One platform (potentially renewable for another six), worth $150 to $300.
  • A free digital copy of the course textbook, to be published by Cambridge University Press.
  • Several gifts will be given away to the top students finishing the course; check the FAQ page for more details.

Students with the best final projects will be invited to expand their results and submit them to the journal Complex Systems, the first journal in the field, founded by Stephen Wolfram in 1987.

About the Instructor(s):

Hector Zenil has a PhD in Computer Science from the University of Lille 1 and a PhD in Philosophy and Epistemology from the Pantheon-Sorbonne University of Paris. He co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. He is also the head of the Algorithmic Nature Group at LABoRES, the Paris-based lab that started the Online Algorithmic Complexity Calculator and the Human Randomness Perception and Generation Project. Previously, he was a Research Associate at the Behavioural and Evolutionary Theory Lab at the Department of Computer Science at the University of Sheffield in the UK before joining the Department of Computer Science, University of Oxford as a faculty member and senior researcher.

Narsis Kiani has a PhD in Mathematics and has been a postdoctoral researcher at Dresden University of Technology and at the University of Heidelberg in Germany. She has been a VINNOVA Marie Curie Fellow and Assistant Professor in Sweden. She co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. Narsis is also a member of the Algorithmic Nature Group, LABoRES.

Hector and Narsis are the leaders of the Algorithmic Dynamics Lab at the Unit of Computational Medicine at Karolinska Institute.

TA:
Alyssa Adams has a PhD in Physics from Arizona State University and studies what makes living systems different from non-living ones. She currently works at Veda Data Solutions as a data scientist and researcher studying complex social systems represented by large datasets. She completed an internship at Microsoft Research, Cambridge, UK, studying machine-learning agents in Minecraft, an excellent arena for simple and advanced tasks related to living and social activity. Alyssa is also a member of the Algorithmic Nature Group, LABoRES.

The development of the course and material offered has been supported by: 

  • The Foundational Questions Institute (FQXi)
  • Wolfram Research
  • John Templeton Foundation
  • Santa Fe Institute
  • Swedish Research Council (Vetenskapsrådet)
  • Algorithmic Nature Group, LABoRES for the Natural and Digital Sciences
  • Living Systems Lab, King Abdullah University of Science and Technology.
  • Department of Computer Science, Oxford University
  • Cambridge University Press
  • London Mathematical Society
  • Springer Verlag
  • ItBit for the Natural and Computational Sciences and, of course,
  • the Algorithmic Dynamics lab, Unit of Computational Medicine, SciLifeLab, Center for Molecular Medicine, The Karolinska Institute

Introductory videos on the course page: "Class Introduction" and "How to Use Complexity Explorer."

Course dates: 11 Jun 2018 9pm PDT to 03 Sep 2018 10pm PDT


Syllabus

  1. A Computational Approach to Causality
  2. A Brief Introduction to Graph Theory and Biological Networks
  3. Elements of Information Theory and Computability
  4. Randomness and Algorithmic Complexity
  5. Dynamical Systems as Models of the World
  6. Practice, Technical Skills and Selected Topics
  7. Algorithmic Information Dynamics and Reprogrammability
  8. Applications to Behavioural, Evolutionary and Molecular Biology


Another interesting course from the SFI, and it looks like a good way to spend the summer.
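
The course's own complexity estimators aren't something I can reproduce here, but a crude, classical stand-in for algorithmic complexity is lossless compression length, which upper-bounds how short a description of a string can be. A minimal Python sketch of that idea (my own toy example, not the course's tooling):

```python
import random
import zlib

def compression_complexity(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a rough upper bound
    on its algorithmic (Kolmogorov) complexity, up to additive constants."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly regular string compresses far better than a pseudo-random one.
regular = "01" * 500
random.seed(42)
noisy = "".join(random.choice("01") for _ in range(1000))

print(compression_complexity(regular))  # small: the pattern is easy to describe
print(compression_complexity(noisy))    # larger: little structure to exploit
```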


👓 Hiding Information in Plain Text | Spectrum IEEE

Read Hiding Information in Plain Text (IEEE Spectrum: Technology, Engineering, and Science News)
Subtle changes to letter shapes can embed messages

An interesting piece to be sure, though I’ve thought about doing this sort of steganography before. In particular, I recall conversations with Sol Golomb about similar techniques. I’m sure there’s got to be prior art for this sort of thing as well.
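
The article's trick of subtly perturbing letter shapes is well beyond a quick demo, but a much cruder cousin of text steganography is easy to sketch: hiding a payload as zero-width Unicode characters interleaved with an innocuous cover text. A toy Python illustration of my own, not the authors' method:

```python
# Hide a message's bits as zero-width Unicode characters inside a cover text.
ZERO = "\u200b"  # zero-width space      -> bit 0
ONE = "\u200c"   # zero-width non-joiner -> bit 1

def embed(cover: str, secret: str) -> str:
    bits = "".join(f"{b:08b}" for b in secret.encode("utf-8"))
    hidden = "".join(ONE if bit == "1" else ZERO for bit in bits)
    # Tuck the invisible payload in after the first word of the cover text.
    head, _, tail = cover.partition(" ")
    return head + hidden + " " + tail

def extract(stego: str) -> str:
    bits = "".join("1" if ch == ONE else "0" for ch in stego if ch in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

stego = embed("Nothing to see here, just ordinary text.", "meet at noon")
print(extract(stego))  # -> "meet at noon"
```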


👓 Does Donald Trump write his own tweets? Sometimes | The Boston Globe

Read Does Donald Trump write his own tweets? Sometimes (The Boston Globe)
It’s not always Trump tapping out a tweet, even when it sounds like his voice.

I wonder how sophisticated the applied information theory behind the Twitter bot described here really is.
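
The Globe piece doesn't spell out how the bot works, so this is pure speculation on my part, but one standard information-theoretic flavor of authorship attribution is a naive Bayes log-likelihood ratio over word frequencies from texts of known origin. A toy sketch with made-up training snippets:

```python
from collections import Counter
import math

def build_model(texts):
    """Word counts, total word count, and vocabulary size for one author."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return counts, sum(counts.values()), len(counts) + 1

def score(text, model_a, model_b):
    """Positive favors author A, negative favors author B (naive Bayes)."""
    (ca, ta, va), (cb, tb, vb) = model_a, model_b
    s = 0.0
    for w in text.lower().split():
        pa = (ca[w] + 1) / (ta + va)   # Laplace smoothing
        pb = (cb[w] + 1) / (tb + vb)
        s += math.log(pa) - math.log(pb)
    return s

# Hypothetical training data: tweets believed to be written by the candidate
# himself versus tweets believed to be written by staff.
self_model = build_model(["sad! total witch hunt", "fake news media, sad!"])
staff_model = build_model(["join us tonight in ohio", "watch the full interview here"])
print(score("this witch hunt is sad!", self_model, staff_model) > 0)  # likely "himself"
```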


Following Michael Levin

Followed Michael Levin (ase.tufts.edu)

Investigating information storage and processing in biological systems

We work on novel ways to understand and control complex pattern formation. We use techniques of molecular genetics, biophysics, and computational modeling to address large-scale control of growth and form. We work in whole frogs and flatworms, and sometimes zebrafish and human tissues in culture. Our projects span regeneration, embryogenesis, cancer, and learning plasticity – all examples of how cellular networks process information. In all of these efforts, our goal is not only to understand the molecular mechanisms necessary for morphogenesis, but also to uncover and exploit the cooperative signaling dynamics that enable complex bodies to build and remodel themselves toward a correct structure. Our major goal is to understand how individual cell behaviors are orchestrated towards appropriate large-scale outcomes despite unpredictable environmental perturbations.


👓 How Many Genes Do Cells Need? Maybe Almost All of Them | Quanta Magazine

Read How Many Genes Do Cells Need? Maybe Almost All of Them (Quanta Magazine)
An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.

There could be some interesting data to play with here if available.

I also can’t help but wonder about applying some of Stuart Kauffman’s ideas to something like this. In particular, this sounds very reminiscent of his analogy about what happens when one randomly strings threads among a pile of buttons and the complexity that results.
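
Kauffman's buttons-and-threads picture is essentially random-graph percolation: once the number of threads passes roughly half the number of buttons, a giant connected cluster suddenly appears. A quick self-contained sketch of that threshold (my own toy, not Kauffman's):

```python
import random
from collections import Counter

def largest_cluster(n_buttons: int, n_threads: int, seed: int = 0) -> int:
    """Tie n_threads random threads between n_buttons buttons and return the
    size of the largest connected cluster, using a simple union-find."""
    rng = random.Random(seed)
    parent = list(range(n_buttons))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(n_threads):
        a, b = rng.randrange(n_buttons), rng.randrange(n_buttons)
        parent[find(a)] = find(b)

    return max(Counter(find(i) for i in range(n_buttons)).values())

# The largest cluster jumps from a handful of buttons to most of the pile
# as the thread count crosses roughly n/2.
for threads in (100, 400, 500, 600, 1000):
    print(threads, largest_cluster(1000, threads))
```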


👓 Voting me, voting you: Eurovision | The Economist (Espresso)

Read Voting me, voting you: Eurovision (Economist Espresso)
The competition, whose finals play out tonight, is as famed for its politics as its cheesy…

I often read the Economist’s Espresso daily round-up, but don’t explicitly post that I do. I’m making an exception in this case because I find the voting partnerships mentioned here quite interesting. It might be worth delving into some of the underlying voting statistics for potential application to other real-life examples. I’m also enamored of the nice visualization they provide, and I wonder what the overlap between this data and other aspects of world politics looks like.


🔖 The Theory of Quantum Information by John Watrous

Bookmarked The Theory of Quantum Information by John Watrous (cs.uwaterloo.ca)

To be published by Cambridge University Press in April 2018.

Upon publication this book will be available for purchase through Cambridge University Press and other standard distribution channels. Please see the publisher's web page to pre-order the book or to obtain further details on its publication date.

A draft, pre-publication copy of the book can be found below. This draft copy is made available for personal use only and must not be sold or redistributed.

This largely self-contained book on the theory of quantum information focuses on precise mathematical formulations and proofs of fundamental facts that form the foundation of the subject. It is intended for graduate students and researchers in mathematics, computer science, and theoretical physics seeking to develop a thorough understanding of key results, proof techniques, and methodologies that are relevant to a wide range of research topics within the theory of quantum information and computation. The book is accessible to readers with an understanding of basic mathematics, including linear algebra, mathematical analysis, and probability theory. An introductory chapter summarizes these necessary mathematical prerequisites, and starting from this foundation, the book includes clear and complete proofs of all results it presents. Each subsequent chapter includes challenging exercises intended to help readers to develop their own skills for discovering proofs concerning the theory of quantum information.

h/t to @michael_nielsen via Nuzzel


👓 Large Cache of Texts May Offer Insight Into One of Africa’s Oldest Written Languages | Smithsonian Magazine

Read Large Cache of Texts May Offer Insight Into One of Africa's Oldest Written Languages (Smithsonian)
Archaeologists in Sudan have uncovered the largest assemblage of Meroitic inscriptions to date

This is a cool discovery, in great part because the documentation was rich enough to suggest further locations to check for more archaeological finds. This might also be something to which one could apply some linguistic analysis and information theory in an attempt to better pull apart the language and grammar.
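
As a first pass at the kind of analysis I have in mind, one could compute symbol and symbol-pair entropies over a transliterated corpus, which already says something about how much structure the writing system carries. A sketch on a throwaway placeholder string (no actual Meroitic data here):

```python
from collections import Counter
from math import log2

def entropy(symbols) -> float:
    """Shannon entropy, in bits per symbol, of an iterable of symbols."""
    counts = Counter(symbols)
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

# Placeholder text standing in for a transliterated inscription corpus.
corpus = "placeholder text standing in for a transliterated inscription corpus"

h1 = entropy(corpus)                   # single-character entropy
h2 = entropy(zip(corpus, corpus[1:]))  # character-pair entropy
print(f"H(char)           = {h1:.2f} bits")
print(f"H(pair)           = {h2:.2f} bits")
print(f"H(next | current) = {h2 - h1:.2f} bits")  # conditional entropy
```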

h/t to @ArtsJournalNews, bookmarked on April 17, 2018 at 08:16AM


🔖 Special Issue : Information Dynamics in Brain and Physiological Networks | Entropy

Bookmarked Special Issue "Information Dynamics in Brain and Physiological Networks" (mdpi.com)

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory".

Deadline for manuscript submissions: 30 December 2018

It is, nowadays, widely acknowledged that the brain and several other organ systems, including the cardiovascular, respiratory, and muscular systems, among others, exhibit complex dynamic behaviors that result from the combined effects of multiple regulatory mechanisms, coupling effects and feedback interactions, acting in both space and time.

The field of information theory is becoming more and more relevant for the theoretical description and quantitative assessment of the dynamics of the brain and physiological networks, defining concepts, such as those of information generation, storage, transfer, and modification. These concepts are quantified by several information measures (e.g., approximate entropy, conditional entropy, multiscale entropy, transfer entropy, redundancy and synergy, and many others), which are being increasingly used to investigate how physiological dynamics arise from the activity and connectivity of different structural units, and evolve across a variety of physiological states and pathological conditions.

This Special Issue focuses on blending theoretical developments in the new emerging field of information dynamics with innovative applications targeted to the analysis of complex brain and physiological networks in health and disease. To favor this multidisciplinary view, contributions are welcome from different fields, ranging from mathematics and physics to biomedical engineering, neuroscience, and physiology.

Prof. Dr. Luca Faes
Prof. Dr. Alberto Porta
Prof. Dr. Sebastiano Stramaglia
Guest Editors
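
Transfer entropy, one of the measures named in the call above, is simple enough to sketch for discrete data. Here's a minimal plug-in estimator with history length 1, my own toy version rather than anything from the special issue:

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y), in bits, for two
    discrete sequences of equal length, using history length 1."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))     # (y_next, y_now, x_now)
    n = len(triples)
    joint = Counter(triples)
    y_next_now = Counter((yn, yc) for yn, yc, _ in triples)
    y_now_x_now = Counter((yc, xc) for _, yc, xc in triples)
    y_now = Counter(yc for _, yc, _ in triples)

    te = 0.0
    for (yn, yc, xc), c in joint.items():
        p_full = c / y_now_x_now[(yc, xc)]          # p(y_next | y_now, x_now)
        p_self = y_next_now[(yn, yc)] / y_now[yc]   # p(y_next | y_now)
        te += (c / n) * log2(p_full / p_self)
    return te

# Toy example: y copies x with a one-step lag, so information flows x -> y.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
y = [0] + x[:-1]
print(transfer_entropy(x, y))  # substantially positive: x drives y
print(transfer_entropy(y, x))  # near zero: y adds little about x's future
```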

👓 Living Bits: Information and the Origin of Life | PBS

Read Living Bits: Information and the Origin of Life (PBS)

Highlights, Quotes, & Marginalia

our existence can succinctly be described as “information that can replicate itself,” the immediate follow-up question is, “Where did this information come from?”

from an information perspective, only the first step in life is difficult. The rest is just a matter of time.

Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival.

There are some plants with huge amounts of DNA compared to their “peers”; perhaps these would make interesting test cases for experiments along these lines?
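
The argument in that last highlight is easy to caricature in code: give each genome a fitness equal to the number of environmental "facts" it predicts correctly, add mutation and selection, and the mean information held by the population climbs. A toy simulation of my own, not the article author's actual models:

```python
import random

random.seed(1)
ENV = [random.randint(0, 1) for _ in range(50)]   # the "facts" to be predicted
POP, GENS, MUT = 200, 60, 0.02

def fitness(genome):
    """Bits of the environment the genome 'knows' (predicts correctly)."""
    return sum(g == e for g, e in zip(genome, ENV))

population = [[random.randint(0, 1) for _ in ENV] for _ in range(POP)]
for gen in range(GENS):
    # Fitness-proportional selection followed by per-bit mutation.
    weights = [fitness(g) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=POP)
    population = [[1 - b if random.random() < MUT else b for b in p] for p in parents]
    if gen % 10 == 0:
        mean = sum(fitness(g) for g in population) / POP
        print(f"generation {gen:3d}: mean correct bits = {mean:.1f}")
# The mean number of correct bits climbs well above the random baseline of ~25.
```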


🔖 Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory

Bookmarked Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory by Jun Kitazono, Ryota Kanai, Masafumi Oizumi (MDPI)
The ability to integrate information in the brain is considered to be an essential property for cognition and consciousness. Integrated Information Theory (IIT) hypothesizes that the amount of integrated information (Φ) in the brain is related to the level of consciousness. IIT proposes that, to quantify information integration in a system as a whole, integrated information should be measured across the partition of the system at which information loss caused by partitioning is minimized, called the Minimum Information Partition (MIP). The computational cost for exhaustively searching for the MIP grows exponentially with system size, making it difficult to apply IIT to real neural data. It has been previously shown that, if a measure of Φ satisfies a mathematical property, submodularity, the MIP can be found in a polynomial order by an optimization algorithm. However, although the first version of Φ is submodular, the later versions are not. In this study, we empirically explore to what extent the algorithm can be applied to the non-submodular measures of Φ by evaluating the accuracy of the algorithm in simulated data and real neural data. We find that the algorithm identifies the MIP in a nearly perfect manner even for the non-submodular measures. Our results show that the algorithm allows us to measure Φ in large systems within a practical amount of time.
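
To make the combinatorial problem concrete, here is what the exhaustive search the paper is trying to avoid looks like for bipartitions, with a stand-in loss function rather than any of the actual Φ measures; the point of the submodularity result is precisely that one doesn't have to do this:

```python
from itertools import combinations

def min_information_partition(elements, loss):
    """Exhaustively search bipartitions of `elements` and return the one
    minimizing loss(part_a, part_b). Exponential in len(elements), which is
    exactly the cost the paper's polynomial-time algorithm avoids."""
    elements = list(elements)
    best = None
    # Enumerate nonempty subsets up to half the size (each bipartition is
    # seen at least once).
    for r in range(1, len(elements) // 2 + 1):
        for subset in combinations(elements, r):
            part_a = set(subset)
            part_b = set(elements) - part_a
            value = loss(part_a, part_b)
            if best is None or value < best[0]:
                best = (value, part_a, part_b)
    return best

# Toy loss: pretend nodes 0-1 and 2-3 are tightly coupled pairs, so cutting
# between the pairs loses the least "integrated information".
coupling = {frozenset({0, 1}): 5.0, frozenset({2, 3}): 5.0}
def loss(a, b):
    return sum(w for pair, w in coupling.items() if pair & a and pair & b)

print(min_information_partition(range(4), loss))
# -> (0.0, {0, 1}, {2, 3})
```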

h/t Christoph Adami, Erik Hoel, and @kanair

👓 Stephen Hawking, Who Examined the Universe and Explained Black Holes, Dies at 76 | The New York Times

Read Stephen Hawking, Who Examined the Universe and Explained Black Holes, Dies at 76 by Dennis Overbye (nytimes.com)
A physicist and best-selling author, Dr. Hawking did not allow his physical limitations to hinder his quest to answer “the big question: Where did the universe come from?”

Some sad news after getting back from Algebraic Geometry class tonight. RIP Stephen Hawking.
