From the celebrated neurobiologist and primatologist, a landmark, genre-defining examination of human behavior, both good and bad, and an answer to the question: Why do we do the things we do?
Sapolsky's storytelling concept is delightful but it also has a powerful intrinsic logic: he starts by looking at the factors that bear on a person's reaction in the precise moment a behavior occurs, and then hops back in time from there, in stages, ultimately ending up at the deep history of our species and its evolutionary legacy.
And so the first category of explanation is the neurobiological one. A behavior occurs--whether an example of humans at our best, worst, or somewhere in between. What went on in a person's brain a second before the behavior happened? Then Sapolsky pulls out to a slightly larger field of vision, a little earlier in time: What sight, sound, or smell caused the nervous system to produce that behavior? And then, what hormones acted hours to days earlier to change how responsive that individual is to the stimuli that triggered the nervous system? By now he has increased our field of vision so that we are thinking about neurobiology and the sensory world of our environment and endocrinology in trying to explain what happened.
Sapolsky keeps going: How was that behavior influenced by structural changes in the nervous system over the preceding months, by that person's adolescence, childhood, fetal life, and then back to his or her genetic makeup? Finally, he expands the view to encompass factors larger than one individual. How did culture shape that individual's group, what ecological factors millennia old formed that culture? And on and on, back to evolutionary factors millions of years old.
The result is one of the most dazzling tours d'horizon of the science of human behavior ever attempted, a majestic synthesis that harvests cutting-edge research across a range of disciplines to provide a subtle and nuanced perspective on why we ultimately do the things we do...for good and for ill. Sapolsky builds on this understanding to wrestle with some of our deepest and thorniest questions relating to tribalism and xenophobia, hierarchy and competition, morality and free will, and war and peace. Wise, humane, often very funny, Behave is a towering achievement, powerfully humanizing, and downright heroic in its own right.
I just learned that a second nervous system has now been fully mapped (along with C. elegans) and it too is a small world. All hail the tadpole larva of a sea squirt, and its marvelously tiny connectome of 177 neurons: https://t.co/kO3tlEVq5x — Steven Strogatz (@stevenstrogatz) June 5, 2018
About the Course:
Probability and statistics have long helped scientists make sense of data about the natural world — to find meaningful signals in the noise. But classical statistics prove a little threadbare in today’s landscape of large datasets, which are driving new insights in disciplines ranging from biology to ecology to economics. It's as true in biology, with the advent of genome sequencing, as it is in astronomy, with telescope surveys charting the entire sky.
The data have changed. Maybe it's time our data analysis tools did, too.
During this three-month online course, starting June 11th, instructors Hector Zenil and Narsis Kiani will introduce students to concepts from the exciting new field of Algorithmic Information Dynamics and their use in searching for solutions to fundamental questions about causality — that is, why a particular set of circumstances leads to a particular outcome.
Algorithmic Information Dynamics (or Algorithmic Dynamics for short) is a new type of discrete calculus based on computer programming. It studies causation by generating mechanistic models, with the aim of finding first principles of physical phenomena and building the next generation of machine learning.
The course covers key aspects from graph theory and network science, information theory, dynamical systems and algorithmic complexity. It will venture into ongoing research in fundamental science and its applications to behavioral, evolutionary and molecular biology.
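The course materials themselves aren't reproduced here, but one rough way to build intuition for "algorithmic complexity" is to use compression length as a stand-in. Note this is only an illustration of my own; the field behind the course favors the Coding Theorem and Block Decomposition methods over plain compression:

```python
import random
import zlib

def compression_complexity(s: str) -> int:
    """Length in bytes of the zlib-compressed string: a crude upper
    bound on (an approximation to) its algorithmic complexity."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A repetitive string is algorithmically simple: a short program
# ("print 'ab' 500 times") generates it, and zlib exploits that.
simple = "ab" * 500

# A pseudo-random string of the same length over the same alphabet
# has no comparably short description and compresses far less well.
rng = random.Random(42)
messy = "".join(rng.choice("ab") for _ in range(1000))

assert compression_complexity(simple) < compression_complexity(messy)
```

The same comparison works on any pair of strings, which is why compression-based measures show up so often as quick proxies in this literature.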
Students should have basic knowledge of college-level math or physics, though optional sessions will help students with the more technical concepts. Basic computer programming skills are also desirable, though not required. The course does not require any particular programming language, but the Wolfram Language will be used most heavily, and the instructors will share plenty of code written in it that students can use, study, and adapt for their own purposes.
- The course will begin with a conceptual overview of the field.
- Then it will review foundational theories like basic concepts of statistics and probability, notions of computability and algorithmic complexity, and brief introductions to graph theory and dynamical systems.
- Finally, the course explores new measures and tools related to reprogramming artificial and biological systems. It will showcase the tools and framework in applications to systems biology, genetic networks and cognition by way of behavioral sequences.
- Students will be able to apply the tools to their own data and problems. The instructors will explain in detail how to do this and will provide all the tools and code needed.
The course runs 11 June through 03 September 2018.
Tuition is $50, which is required for access to the course materials during the course and for a certificate at the end. The course is free to watch, but if no fee is paid the materials will not be available until the course closes. Donations are highly encouraged and appreciated to support SFI's Complexity Explorer in continuing to offer new courses.
In addition to all course materials, tuition includes:
- Six-month access to the Wolfram|One platform (potentially renewable for another six months), worth $150 to $300.
- A free digital copy of the course textbook, to be published by Cambridge University Press.
- Several gifts given away to the top students finishing the course; check the FAQ page for more details.
Students with the best final projects will be invited to expand their results and submit them to the journal Complex Systems, the first journal in the field, founded by Stephen Wolfram in 1987.
About the Instructor(s):
Hector Zenil has a PhD in Computer Science from the University of Lille 1 and a PhD in Philosophy and Epistemology from the Pantheon-Sorbonne University of Paris. He co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. He is also the head of the Algorithmic Nature Group at LABoRES, the Paris-based lab that started the Online Algorithmic Complexity Calculator and the Human Randomness Perception and Generation Project. Previously, he was a Research Associate at the Behavioural and Evolutionary Theory Lab at the Department of Computer Science at the University of Sheffield in the UK before joining the Department of Computer Science, University of Oxford as a faculty member and senior researcher.
Narsis Kiani has a PhD in Mathematics and has been a postdoctoral researcher at Dresden University of Technology and at the University of Heidelberg in Germany. She has been a VINNOVA Marie Curie Fellow and Assistant Professor in Sweden. She co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. Narsis is also a member of the Algorithmic Nature Group, LABoRES.
Hector and Narsis are the leaders of the Algorithmic Dynamics Lab at the Unit of Computational Medicine at Karolinska Institute.
Alyssa Adams has a PhD in Physics from Arizona State University and studies what makes living systems different from non-living ones. She currently works at Veda Data Solutions as a data scientist and researcher in social complex systems that are represented by large datasets. She completed an internship at Microsoft Research, Cambridge, UK studying machine learning agents in Minecraft, which is an excellent arena for simple and advanced tasks related to living and social activity. Alyssa is also a member of the Algorithmic Nature Group, LABoRES.
The development of the course and material offered has been supported by:
- The Foundational Questions Institute (FQXi)
- Wolfram Research
- John Templeton Foundation
- Santa Fe Institute
- Swedish Research Council (Vetenskapsrådet)
- Algorithmic Nature Group, LABoRES for the Natural and Digital Sciences
- Living Systems Lab, King Abdullah University of Science and Technology.
- Department of Computer Science, Oxford University
- Cambridge University Press
- London Mathematical Society
- Springer Verlag
- ItBit for the Natural and Computational Sciences and, of course,
- the Algorithmic Dynamics lab, Unit of Computational Medicine, SciLifeLab, Center for Molecular Medicine, The Karolinska Institute
Course dates: 11 Jun 2018 9pm PDT to 03 Sep 2018 10pm PDT
- A Computational Approach to Causality
- A Brief Introduction to Graph Theory and Biological Networks
- Elements of Information Theory and Computability
- Randomness and Algorithmic Complexity
- Dynamical Systems as Models of the World
- Practice, Technical Skills and Selected Topics
- Algorithmic Information Dynamics and Reprogrammability
- Applications to Behavioural, Evolutionary and Molecular Biology
Another interesting course from the SFI, and it looks like a great way to spend the summer.
Investigating information storage and processing in biological systems
We work on novel ways to understand and control complex pattern formation. We use techniques of molecular genetics, biophysics, and computational modeling to address large-scale control of growth and form. We work in whole frogs and flatworms, and sometimes zebrafish and human tissues in culture. Our projects span regeneration, embryogenesis, cancer, and learning plasticity – all examples of how cellular networks process information. In all of these efforts, our goal is not only to understand the molecular mechanisms necessary for morphogenesis, but also to uncover and exploit the cooperative signaling dynamics that enable complex bodies to build and remodel themselves toward a correct structure. Our major goal is to understand how individual cell behaviors are orchestrated towards appropriate large-scale outcomes despite unpredictable environmental perturbations.
An ambitious study in yeast shows that the health of cells depends on the highly intertwined effects of many genes, few of which can be deleted together without consequence.
There could be some interesting data to play with here if available.
I also can’t help but wonder about applying some of Stuart Kauffman’s ideas to something like this. In particular, it sounds very reminiscent of his analogy of stringing thread randomly among a pile of buttons and the complexity that results.
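Kauffman's buttons-and-threads image is essentially the random-graph giant-component transition: keep tying random threads between buttons in a pile, and once there is roughly one thread for every two buttons, lifting a single button suddenly brings up a huge connected cluster. A minimal simulation of my own (all names illustrative):

```python
import random
from collections import Counter

def largest_cluster(n_buttons: int, n_threads: int, seed: int = 0) -> int:
    """Tie `n_threads` random threads between pairs of `n_buttons`
    buttons; return the size of the largest connected cluster,
    tracked with a simple union-find over button indices."""
    rng = random.Random(seed)
    parent = list(range(n_buttons))

    def find(i: int) -> int:
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for _ in range(n_threads):
        a, b = rng.randrange(n_buttons), rng.randrange(n_buttons)
        parent[find(a)] = find(b)  # merge the two clusters

    return max(Counter(find(i) for i in range(n_buttons)).values())

# Below ~0.5 threads per button, clusters stay tiny; above that
# threshold, one giant cluster abruptly spans most of the pile.
assert largest_cluster(1000, 300) < 100
assert largest_cluster(1000, 800) > 400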
Study in bacteria shows how regularly DNA changes and how few of those changes are deadly
This is a rather cool little experiment.
h/t to @moorejh via Twitter:
— Jason H. Moore, PhD (@moorejh) March 16, 2018
Bookmarked on March 16, 2018 at 12:15PM
Highlights, Quotes, & Marginalia
our existence can succinctly be described as “information that can replicate itself,” the immediate follow-up question is, “Where did this information come from?”
from an information perspective, only the first step in life is difficult. The rest is just a matter of time.
Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. The mutation that caused a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information—meaning better prediction—you are likely to use that information to have an edge in survival.
There are some plants with huge amounts of DNA compared to their “peers”–perhaps these would be interesting test cases for potential experimentation of this?
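The highlighted argument is easy to caricature in code. In this toy model (my own sketch, not from the book), "information" is simply the number of genome bits matching a fixed environment string; selection keeps information-gaining mutants and discards information-losing ones, so the average climbs:

```python
import random

def mean_information(n_pop=200, genome_len=50, generations=100,
                     mu=0.01, seed=1):
    """Toy model: each genome is a bit string, and its 'information'
    is the count of bits matching a fixed environment string.
    Truncation selection plus point mutation raises that count."""
    rng = random.Random(seed)
    env = [rng.randint(0, 1) for _ in range(genome_len)]
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(n_pop)]

    def fitness(g):
        return sum(a == b for a, b in zip(g, env))

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        pop = [g[:] for g in pop[:n_pop // 2]] * 2       # fitter half reproduces
        pop = [[bit ^ (rng.random() < mu) for bit in g]  # point mutations
               for g in pop]
    return sum(fitness(g) for g in pop) / n_pop

# A random population matches about half the environment bits by
# chance; after selection it encodes nearly all of them.
assert mean_information(generations=0) < 30
assert mean_information(generations=100) > 40
```

Deleterious (information-losing) mutations arise constantly here too; they just don't persist, which is the book's point about why information increases on average.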
A summer school for advanced undergraduates June 11-22, 2018 @ Princeton University What would it mean to have a physicist’s understanding of life? How do DYNAMICS and the EMERGENCE of ORDER affect biological function? How do organisms process INFORMATION, LEARN, ADAPT, and EVOLVE? See how physics problems emerge from thinking about developing embryos, communicating bacteria, dynamic neural networks, animal behaviors, evolution, and more. Learn how ideas and methods from statistical physics, simulation and data analysis, optics and microscopy connect to diverse biological phenomena. Explore these questions, tools, and concepts in an intense two weeks of lectures, seminars, hands-on exercises, and projects.
The International Conference on Complex Systems is a unique interdisciplinary forum that unifies and bridges the traditional domains of science and a multitude of real world systems. Participants will contribute and be exposed to mind expanding concepts and methods from across the diverse field of complex systems science. The conference will be held July 22-27, 2018, in Cambridge, MA, USA. Special Topic - Artificial Intelligence: This year’s conference will include a day on AI, including its development and potential future. This session will be chaired by Iyad Rahwan of MIT's Media Lab.
A great-looking conference coming up with a strong lineup of people whose work I appreciate. It could certainly use some more balance, however, as it’s almost all white men.
In particular I’d want to see:
Albert-László Barabási (Northeastern University, USA)
Nassim Nicholas Taleb (Real World Risk Institute, USA)
Stuart Kauffman (Institute for Systems Biology, USA)
Simon DeDeo (Carnegie Mellon University, USA)
Stephen Wolfram (Wolfram Research)
César Hidalgo (MIT Media Lab, USA)
Marta González (University of California Berkeley, USA)
Peter Turchin (University of Connecticut, USA)
Mercedes Pascual (University of Chicago, USA) Pending confirmation
Iyad Rahwan (MIT Media Lab, USA)
Sandy Pentland (MIT Media Lab, USA)
Theresa Whelan (U.S. Department of Defense) Pending DOD approval
H. Eugene Stanley (Boston University, USA)
Ricardo Hausmann (Harvard University, USA)
Stephen Grossberg (Boston University, USA)
Daniela Rus (MIT Computer Science & Artificial Intelligence Lab, USA) Pending confirmation
Olaf Sporns (Indiana University Network Science Institute, USA)
Michelle Girvan (University of Maryland, USA) Pending confirmation
Cameron Kerry (MIT Media Lab, USA)
Irving Epstein (Brandeis University, USA)
DNA as a data storage medium has several advantages, including far greater data density compared to electronic media. We propose that schemes for data storage in the DNA of living organisms may benefit from studying the reconstruction problem, which is applicable whenever multiple reads of noisy data are available. This strategy is uniquely suited to the medium, which inherently replicates stored data in multiple distinct ways, caused by mutations. We consider noise introduced solely by uniform tandem-duplication, and utilize the relation to constant-weight integer codes in the Manhattan metric. By bounding the intersection of the cross-polytope with hyperplanes, we prove the existence of reconstruction codes with greater capacity than known error-correcting codes, which we can determine analytically for any set of parameters.
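The abstract's starting premise, that multiple noisy reads of the same stored sequence can be combined into a better estimate, is easy to demonstrate in a much simpler setting than the paper's. The sketch below assumes i.i.d. substitution noise and uses position-wise majority vote; it is not the tandem-duplication channel or the reconstruction codes the authors actually study:

```python
import random
from collections import Counter

def majority_reconstruct(reads):
    """Reconstruct a sequence from equal-length noisy reads by
    position-wise majority vote: for each position, keep the
    most common symbol across all reads."""
    return "".join(Counter(col).most_common(1)[0][0]
                   for col in zip(*reads))

rng = random.Random(0)
original = "".join(rng.choice("ACGT") for _ in range(40))

def noisy_copy(s, p=0.05):
    """Each base is independently resampled with probability p."""
    return "".join(rng.choice("ACGT") if rng.random() < p else c
                   for c in s)

# Fifteen noisy reads are enough for the vote to recover the original.
reads = [noisy_copy(original) for _ in range(15)]
assert majority_reconstruct(reads) == original
```

Any single read here is wrong at a few positions on average, yet the vote across reads recovers the whole sequence, which is the intuition behind exploiting DNA's inherent replication.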
📖 Read Chapter 1: What is Life? pages 19-52 in The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane (W.W. Norton, ISBN: 978-0393088816)
Lane lays out a “brief” history of the 4 billion years of life on Earth. He discusses isotopic fractionation and other evidence that essentially shows a bottleneck between bacteria and archaea (prokaryotes) on the one hand and eukaryotes on the other, the latter of which must all have had a single common ancestor based on the genetic profiles we currently see. He suggests that while we should see even more diversity of complex life, we do not, and he hints at the end of the chapter that the reason is energy.
In general, it’s much easier to follow than I anticipated it might be. His writing style is lucid and fluid and he has some lovely prose not often seen in books of this sort. It’s quite a pleasure to read. Additionally he’s doing a very solid job of building an argument in small steps.
I’m watching closely how he’s repeatedly using the word information in his descriptions, and it seems to be a much more universal and colloquial version than the more technical version, but something interesting may come out of it from my philosophical leanings. I can’t wait to get further into the book to see how things develop.
📗 Started reading pages 1-18, Introduction: Why is Life the Way it is, in The Vital Question: Energy, Evolution, and the Origins of Complex Life by Nick Lane
A quick but interesting peek into where he intends to go. He lays out some background here in the opening. He’s generally been a very lucid writer so far. Can’t wait to get in further.
Some may feel like some of the terminology is a hurdle in the opening, so I hope he circles around to define some of his terms a bit better for the audience I suspect he’s trying to reach.
All living things are made of cells, and all cells are powered by electrochemical charges across thin lipid membranes — the ‘proton motive force.’ We know how these electrical charges are generated by protein machines at virtually atomic resolution, but we know very little about how membrane bioenergetics first arose. By tracking back cellular evolution to the last universal common ancestor and beyond, scientist Nick Lane argues that geologically sustained electrochemical charges across semiconducting barriers were central to both energy flow and the formation of new organic matter — growth — at the very origin of life. Dr. Lane is a professor of evolutionary biochemistry in the Department of Genetics, Evolution and Environment at University College London. His research focuses on how energy flow constrains evolution from the origin of life to the traits of complex multicellular organisms. He is a co-director of the new Centre for Life’s Origins and Evolution (CLOE) at UCL, and author of four celebrated books on life’s origins and evolution. His work has been recognized by the Biochemical Society Award in 2015 and the Royal Society Michael Faraday Prize in 2016.
According to Google Scholar, Turing's paper inventing modern computing is only his _second_ most cited paper pic.twitter.com/T1M4k4dMYK
— michael_nielsen (@michael_nielsen) October 7, 2015
Looks like Alan Turing, like Claude Shannon, was interested in microbiology too! I’ll have to dig into this. [pdf]