🔖 Linking Economic Complexity, Institutions and Income Inequality

Linking Economic Complexity, Institutions and Income Inequality by Dominik Hartmann, Miguel R. Guevara, Cristian Jara-Figueroa, Manuel Aristarán, César A. Hidalgo (arxiv.org)
A country's mix of products predicts its subsequent pattern of diversification and economic growth. But does this product mix also predict income inequality? Here we combine methods from econometrics, network science, and economic complexity to show that countries exporting complex products (as measured by the Economic Complexity Index) have lower levels of income inequality than countries exporting simpler products. Using multivariate regression analysis, we show that economic complexity is a significant and negative predictor of income inequality and that this relationship is robust to controlling for aggregate measures of income, institutions, export concentration, and human capital. Moreover, we introduce a measure that associates a product to a level of income inequality equal to the average GINI of the countries exporting that product (weighted by the share the product represents in that country's export basket). We use this measure together with the network of related products (or product space) to illustrate how the development of new products is associated with changes in income inequality. These findings show that economic complexity captures information about an economy's level of development that is relevant to the ways an economy generates and distributes its income. Moreover, these findings suggest that a country's productive structure may limit its range of income inequality. Finally, we make our results available through an online resource that allows for its users to visualize the structural transformation of over 150 countries and their associated changes in income inequality between 1963 and 2008.
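
The product-level measure described here is, at bottom, a weighted average, which makes it easy to sketch. Below is a minimal toy computation of the idea (made-up numbers and my own variable names; the paper calls the measure the Product Gini Index):

```python
import numpy as np

# Toy data: X[c, p] = exports of product p by country c (hypothetical numbers).
X = np.array([[100.0,  20.0],
              [ 10.0,  90.0],
              [ 50.0,  50.0]])
gini = np.array([0.30, 0.55, 0.45])   # GINI coefficient of each country (toy values)

# Share each product represents in its country's export basket.
shares = X / X.sum(axis=1, keepdims=True)

# Product-level inequality: average GINI of exporting countries, weighted by shares.
pgi = (shares * gini[:, None]).sum(axis=0) / shares.sum(axis=0)
print(pgi)   # one inequality level per product
```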

MIT has a pretty good layperson's overview of this article. The final published version is separately available.

 


Income inequality linked to export “complexity” | MIT News

Income inequality linked to export “complexity” by Larry Hardesty (MIT News)
The mix of products that countries export is a good predictor of income distribution, study finds.


🔖 Want to read: From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett

From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett (W. W. Norton & Company; 1 edition, 496 pages (February 7, 2017))
One of America’s foremost philosophers offers a major new account of the origins of the conscious mind.

How did we come to have minds?

For centuries, this question has intrigued psychologists, physicists, poets, and philosophers, who have wondered how the human mind developed its unrivaled ability to create, imagine, and explain. Disciples of Darwin have long aspired to explain how consciousness, language, and culture could have appeared through natural selection, blazing promising trails that tend, however, to end in confusion and controversy. Even though our understanding of the inner workings of proteins, neurons, and DNA is deeper than ever before, the matter of how our minds came to be has largely remained a mystery.

That is now changing, says Daniel C. Dennett. In From Bacteria to Bach and Back, his most comprehensive exploration of evolutionary thinking yet, he builds on ideas from computer science and biology to show how a comprehending mind could in fact have arisen from a mindless process of natural selection. Part philosophical whodunit, part bold scientific conjecture, this landmark work enlarges themes that have sustained Dennett’s legendary career at the forefront of philosophical thought.

In his inimitable style―laced with wit and arresting thought experiments―Dennett explains that a crucial shift occurred when humans developed the ability to share memes, or ways of doing things not based in genetic instinct. Language, itself composed of memes, turbocharged this interplay. Competition among memes―a form of natural selection―produced thinking tools so well-designed that they gave us the power to design our own memes. The result, a mind that not only perceives and controls but can create and comprehend, was thus largely shaped by the process of cultural evolution.

An agenda-setting book for a new generation of philosophers, scientists, and thinkers, From Bacteria to Bach and Back will delight and entertain anyone eager to make sense of how the mind works and how it came about.

4 color, 18 black-and-white illustrations


🔖 The Hypercycle: A Principle of Natural Self-Organization | Springer

The Hypercycle: A Principle of Natural Self-Organization by Manfred Eigen and Peter Schuster (Springer, 1979)
This book originated from a series of papers which were published in "Die Naturwissenschaften" in 1977/78. Its division into three parts is the reflection of a logical structure, which may be abstracted in the form of three theses:

A. Hypercycles are a principle of natural self-organization allowing an integration and coherent evolution of a set of functionally coupled self-replicative entities.

B. Hypercycles are a novel class of nonlinear reaction networks with unique properties, amenable to a unified mathematical treatment.

C. Hypercycles are able to originate in the mutant distribution of a single Darwinian quasi-species through stabilization of its diverging mutant genes. Once nucleated, hypercycles evolve to higher complexity by a process analogous to gene duplication and specialization.

In order to outline the meaning of the first statement we may refer to another principle of material self-organization, namely to Darwin's principle of natural selection. This principle, as we see it today, represents the only understood means for creating information, be it the blueprint for a complex living organism which evolved from less complex ancestral forms, or be it a meaningful sequence of letters the selection of which can be simulated by evolutionary model games.
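
Thesis B is concrete enough to write down. The elementary hypercycle is the replicator system dx_i/dt = x_i(k_i x_{i-1} − Φ), with indices taken cyclically and Φ the mean production rate that keeps the concentrations summing to one. A minimal numerical sketch (the rate constants are arbitrary, not values from the book):

```python
import numpy as np

def hypercycle_step(x, k, dt=0.01):
    """One Euler step of the elementary hypercycle equation.
    x: concentrations on the simplex; k: catalytic rate constants.
    Species i is replicated at rate k_i * x_i * x_{i-1} (indices cyclic)."""
    growth = k * x * np.roll(x, 1)   # k_i * x_i * x_{i-1}
    phi = growth.sum()               # mean production keeps sum(x) near 1
    return x + dt * (growth - phi * x)

k = np.array([1.0, 1.5, 0.8, 1.2])          # arbitrary rate constants
x = np.full(4, 0.25) + np.array([0.01, -0.01, 0.0, 0.0])  # perturbed start
for _ in range(20_000):
    x = hypercycle_step(x, k)
print(x, x.sum())   # all four members coexist; total concentration stays ~1
```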

Part A in .pdf format.


🔖 Energy flow and the organization of life | Complexity

Energy flow and the organization of life by Harold Morowitz and Eric Smith (Complexity, September 2007)
Understanding the emergence and robustness of life requires accounting for both chemical specificity and statistical generality. We argue that the reverse of a common observation—that life requires a source of free energy to persist—provides an appropriate principle to understand the emergence, organization, and persistence of life on earth. Life, and in particular core biochemistry, has many properties of a relaxation channel that was driven into existence by free energy stresses from the earth's geochemistry. Like lightning or convective storms, the carbon, nitrogen, and phosphorus fluxes through core anabolic pathways make sense as the order parameters in a phase transition from an abiotic to a living state of the geosphere. Interpreting core pathways as order parameters would both explain their stability over billions of years, and perhaps predict the uniqueness of specific optimal chemical pathways.

Download .pdf copy

[1]
H. Morowitz and E. Smith, “Energy flow and the organization of life,” Complexity, vol. 13, no. 1. Wiley-Blackwell, pp. 51–59, 2007 [Online]. Available: http://dx.doi.org/10.1002/cplx.20191

🔖 How Life (and Death) Spring From Disorder | Quanta Magazine

How Life (and Death) Spring From Disorder by Philip Ball (Quanta Magazine)
Life was long thought to obey its own set of rules. But as simple systems show signs of lifelike behavior, scientists are arguing about whether this apparent complexity is all a consequence of thermodynamics.

This is a nice little general interest article by Philip Ball that does a relatively good job of covering several of my favorite topics (information theory, biology, complexity) for the layperson. While it stays relatively basic, it links to a handful of really great references, many of which I’ve already read, though several appear to be new to me. [1][2][3][4][5][6][7][8][9][10]

While Ball has a broad area of interests and coverage in his work, he’s certainly one of the best journalists working in this subarea of interests today. I highly recommend his work to those who find this area interesting.

References

[1]
E. Mayr, What Makes Biology Unique? Cambridge University Press, 2004.
[2]
A. Wissner-Gross and C. Freer, "Causal entropic forces," Phys Rev Lett, vol. 110, no. 16, p. 168702, Apr. 2013. [PubMed]
[3]
A. Barato and U. Seifert, "Thermodynamic uncertainty relation for biomolecular processes," Phys Rev Lett, vol. 114, no. 15, p. 158101, Apr. 2015. [PubMed]
[4]
J. Shay and W. Wright, "Hayflick, his limit, and cellular ageing," Nat Rev Mol Cell Biol, vol. 1, no. 1, pp. 72–6, Oct. 2000. [PubMed]
[5]
X. Dong, B. Milholland, and J. Vijg, “Evidence for a limit to human lifespan,” Nature, vol. 538, no. 7624. Springer Nature, pp. 257–259, 05-Oct-2016 [Online]. Available: http://dx.doi.org/10.1038/nature19793
[6]
H. Morowitz and E. Smith, “Energy Flow and the Organization of Life,” Santa Fe Institute, 07-Aug-2006. [Online]. Available: http://samoa.santafe.edu/media/workingpapers/06-08-029.pdf. [Accessed: 03-Feb-2017]
[7]
R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM Journal of Research and Development, vol. 5, no. 3. IBM, pp. 183–191, Jul-1961 [Online]. Available: http://dx.doi.org/10.1147/rd.53.0183
[8]
C. Rovelli, "Meaning = Information + Evolution," arXiv, Nov. 2016 [Online]. Available: https://arxiv.org/abs/1611.02420
[9]
N. Perunov, R. A. Marsland, and J. L. England, “Statistical Physics of Adaptation,” Physical Review X, vol. 6, no. 2. American Physical Society (APS), 16-Jun-2016 [Online]. Available: http://dx.doi.org/10.1103/PhysRevX.6.021036 [Source]
[10]
S. Still, D. A. Sivak, A. J. Bell, and G. E. Crooks, “Thermodynamics of Prediction,” Physical Review Letters, vol. 109, no. 12. American Physical Society (APS), 19-Sep-2012 [Online]. Available: http://dx.doi.org/10.1103/PhysRevLett.109.120604 [Source]

🎧 Entanglement | Invisibilia (NPR)

Entanglement by Lulu Miller and Alix Spiegel (Invisibilia | NPR.org)
In Entanglement, you'll meet a woman with Mirror Touch Synesthesia who can physically feel what she sees others feeling. And an exploration of the ways in which all of us are connected — more literally than you might realize. The hour will start with physics and end with a conversation with comedian Maria Bamford and her mother. They discuss what it's like to be entangled through impersonation.

I can think of a few specific quirks I've got that touch tangentially on mirror synesthesia. This story and some of the research behind it are truly fascinating. Particularly interesting are the ideas of the contagion of emotion. It would be interesting to take some complexity and network theory and add some mathematical models to see how this might look. In particular, the recent political protests in the U.S. might make great models. This also makes me wonder where Donald Trump sits on this emotional empathy spectrum, if at all.
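
To make that musing slightly more concrete, here's the sort of toy model I have in mind: a simple DeGroot-style averaging of an "emotion" value over a random network. Everything here (the network, the update rule, the parameters) is a hypothetical illustration of my own, not anything from the episode:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
ties = rng.random((n, n)) < 0.03            # random social ties
adj = np.triu(ties, 1)
adj = (adj | adj.T).astype(float)           # symmetric adjacency, no self-loops

emotion = rng.random(n)                     # each person's emotional state in [0, 1]
alpha = 0.3                                 # susceptibility to one's neighbors

for _ in range(50):
    deg = adj.sum(axis=1).clip(min=1)       # avoid dividing by zero for isolates
    neighbor_mean = adj @ emotion / deg     # average emotion among one's neighbors
    emotion = (1 - alpha) * emotion + alpha * neighbor_mean

print(emotion.std())   # the spread shrinks: the network pulls emotions together
```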

One of the more interesting take-aways: the thoughts and emotions of those around you can affect you far more than you imagine.

Four episodes in and this podcast is still impossibly awesome. I don't know that I've had so many thought-changing ideas since I read David Christian's book Maps of Time: An Introduction to Big History. [1] The sad problem is that I'm listening to them at a far faster pace than they could ever continue to produce them.

References

[1]
D. Christian, Maps of Time: An Introduction to Big History. Univ of California Press, 2004.

🎧 How to Become Batman | Invisibilia (NPR)

How to Become Batman by Lulu Miller and Alix Spiegel (Invisibilia | NPR.org)
In "How to Become Batman," Alix and Lulu examine the surprising effect that our expectations can have on the people around us. You'll hear how people's expectations can influence how well a rat runs a maze. Plus, the story of a man who is blind and says expectations have helped him see. Yes. See. This journey is not without skeptics.

Expectations are much more important than we think.

Is it possible that this podcast is getting more interesting as it continues along?! In three episodes, I’ve gone from fan to fanboy.


NIMBioS Tutorial: Uncertainty Quantification for Biological Models

NIMBioS Tutorial: Uncertainty Quantification for Biological Models (nimbios.org)
NIMBioS will host a Tutorial on Uncertainty Quantification for Biological Models

Uncertainty Quantification for Biological Models

Meeting dates: June 26-28, 2017
Location: NIMBioS at the University of Tennessee, Knoxville

Organizers:
Marisa Eisenberg, School of Public Health, Univ. of Michigan
Ben Fitzpatrick, Mathematics, Loyola Marymount Univ.
James Hyman, Mathematics, Tulane Univ.
Ralph Smith, Mathematics, North Carolina State Univ.
Clayton Webster, Computational and Applied Mathematics (CAM), Oak Ridge National Laboratory; Mathematics, Univ. of Tennessee

Objectives:
Mathematical modeling and computer simulations are widely used to predict the behavior of complex biological phenomena. However, increased computational resources have allowed scientists to ask a deeper question, namely, “how do the uncertainties ubiquitous in all modeling efforts affect the output of such predictive simulations?” Examples include both epistemic (lack of knowledge) and aleatoric (intrinsic variability) uncertainties and encompass uncertainty coming from inaccurate physical measurements, bias in mathematical descriptions, as well as errors coming from numerical approximations of computational simulations. Research in uncertainty quantification (UQ) aims to address these challenges; it is essential for dealing with realistic experimental data and for assessing the reliability of predictions based on numerical simulations.

Uncertainty quantification (UQ) uses quantitative methods to characterize and reduce uncertainties in mathematical models, and techniques from sampling, numerical approximations, and sensitivity analysis can help to apportion the uncertainty from models to different variables. Critical to achieving validated predictive computations, both forward and inverse UQ analyses have become essential modeling components for a wide range of scientific applications. Techniques from these fields are rapidly evolving to keep pace with the increasing emphasis on models that require quantified uncertainties for large-scale applications. This tutorial will focus on the application of these methods and techniques to mathematical models in the life sciences and will provide researchers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties and perform sensitivity analysis for simulation models. Concepts to be covered may include: probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, adaptive surrogate model construction, high-dimensional approximation, random sampling and sparse grids, as well as local and global sensitivity analysis.
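
As a small taste of what "propagation of uncertainties" looks like in practice, here's a minimal Monte Carlo sketch of my own (not tutorial material): push an uncertain growth rate through a logistic model and read off a predictive interval.

```python
import numpy as np

def logistic(t, r, K=1000.0, x0=10.0):
    """Closed-form logistic growth: population at time t."""
    return K / (1 + (K / x0 - 1) * np.exp(-r * t))

rng = np.random.default_rng(0)
# Epistemic uncertainty in the growth rate r, modeled as a lognormal (toy choice).
r_samples = rng.lognormal(mean=np.log(0.5), sigma=0.2, size=10_000)

pop = logistic(t=10.0, r=r_samples)        # propagate samples through the model
lo, hi = np.percentile(pop, [2.5, 97.5])
print(f"population at t=10: mean={pop.mean():.0f}, 95% interval=({lo:.0f}, {hi:.0f})")
```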

This tutorial is intended for graduate students, postdocs and researchers in mathematics, statistics, computer science and biology. A basic knowledge of probability, linear algebra, and differential equations is assumed.

Descriptive Flyer

Application deadline: March 1, 2017
To apply, you must complete an application on our online registration system:

  1. Click here to access the system
  2. Login or register
  3. Complete your user profile (if you haven’t already)
  4. Find this tutorial event under Current Events Open for Application and click on Apply

Participation in NIMBioS tutorials is by application only. Individuals with a strong interest in the topic are encouraged to apply, and successful applicants will be notified within two weeks after the application deadline. If needed, financial support for travel, meals, and lodging is available for tutorial attendees.

Summary Report. TBA

Live Stream. The Tutorial will be streamed live. Note that NIMBioS Tutorials involve open discussion and not necessarily a succession of talks. In addition, the schedule as posted may change during the Workshop. To view the live stream, visit http://www.nimbios.org/videos/livestream. A live chat of the event will take place via Twitter using the hashtag #uncertaintyTT. The Twitter feed will be displayed to the right of the live stream. We encourage you to post questions/comments and engage in discussion with respect to our Social Media Guidelines.


Source: NIMBioS Tutorial: Uncertainty Quantification for Biological Models


Mathematical Model Reveals the Patterns of How Innovations Arise

Mathematicians have discovered how the universal patterns behind innovation arise by Emerging Technology from the arXiv (MIT Technology Review)
A mathematical model could lead to a new approach to the study of what is possible, and how it follows from what already exists.

Innovation is one of the driving forces in our world. The constant creation of new ideas and their transformation into technologies and products forms a powerful cornerstone for 21st century society. Indeed, many universities and institutes, along with regions such as Silicon Valley, cultivate this process.

And yet the process of innovation is something of a mystery. A wide range of researchers have studied it, ranging from economists and anthropologists to evolutionary biologists and engineers. Their goal is to understand how innovation happens and the factors that drive it so that they can optimize conditions for future innovation.

This approach has had limited success, however. The rate at which innovations appear and disappear has been carefully measured. It follows a set of well-characterized patterns that scientists observe in many different circumstances. And yet, nobody has been able to explain how this pattern arises or why it governs innovation.

Today, all that changes thanks to the work of Vittorio Loreto at Sapienza University of Rome in Italy and a few pals, who have created the first mathematical model that accurately reproduces the patterns that innovations follow. The work opens the way to a new approach to the study of innovation, of what is possible and how this follows from what already exists.

The notion that innovation arises from the interplay between the actual and the possible was first formalized by the complexity theorist Stuart Kauffman. In 2002, Kauffman introduced the idea of the “adjacent possible” as a way of thinking about biological evolution.
I know he discusses some of this in At Home in the Universe.

The adjacent possible is all those things—ideas, words, songs, molecules, genomes, technologies and so on—that are one step away from what actually exists. It connects the actual realization of a particular phenomenon and the space of unexplored possibilities.

But this idea is hard to model for an important reason. The space of unexplored possibilities includes all kinds of things that are easily imagined and expected but it also includes things that are entirely unexpected and hard to imagine. And while the former is tricky to model, the latter has appeared close to impossible.

What’s more, each innovation changes the landscape of future possibilities. So at every instant, the space of unexplored possibilities—the adjacent possible—is changing.

“Though the creative power of the adjacent possible is widely appreciated at an anecdotal level, its importance in the scientific literature is, in our opinion, underestimated,” say Loreto and co.

Nevertheless, even with all this complexity, innovation seems to follow predictable and easily measured patterns that have become known as “laws” because of their ubiquity. One of these is Heaps’ law, which states that the number of new things increases at a rate that is sublinear. In other words, it is governed by a power law of the form V(n) = kn^β, where β is between 0 and 1.

Words are often thought of as a kind of innovation, and language is constantly evolving as new words appear and old words die out.

This evolution follows Heaps’ law. Given a corpus of words of size n, the number of distinct words V(n) is proportional to n raised to the β power. In collections of real words, β turns out to be between 0.4 and 0.6.

Another well-known statistical pattern in innovation is Zipf’s law, which describes how the frequency of an innovation is related to its popularity. For example, in a corpus of words, the most frequent word occurs about twice as often as the second most frequent word, three times as frequently as the third most frequent word, and so on. In English, the most frequent word is “the” which accounts for about 7 percent of all words, followed by “of” which accounts for about 3.5 percent of all words, followed by “and,” and so on.

This frequency distribution is Zipf’s law and it crops up in a wide range of circumstances, such as the way edits appear on Wikipedia, how we listen to new songs online, and so on.
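
Both laws are easy to check on any sizable text. A minimal sketch (the corpus filename is a placeholder for whatever plain-text file you have on hand):

```python
import numpy as np
from collections import Counter

words = open("corpus.txt").read().lower().split()   # placeholder corpus file

# Heaps' law: count distinct words V(n) as the corpus grows.
seen, V = set(), []
for w in words:
    seen.add(w)
    V.append(len(seen))
n = np.arange(1, len(words) + 1)
beta = np.polyfit(np.log(n[100:]), np.log(V[100:]), 1)[0]  # slope on log-log axes
print(f"Heaps exponent beta ~ {beta:.2f}")   # typically 0.4-0.6 for real text

# Zipf's law: frequency of the r-th most common word falls off roughly as 1/r.
freqs = np.array(sorted(Counter(words).values(), reverse=True))
print(freqs[:3] / freqs[0])                  # roughly [1, 1/2, 1/3]
```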

These patterns are empirical laws—we know of them because we can measure them. But just why the patterns take this form is unclear. And while mathematicians can model innovation by simply plugging the observed numbers into equations, they would much rather have a model which produces these numbers from first principles.

Enter Loreto and his pals (one of whom is the Cornell University mathematician Steve Strogatz). These guys create a model that explains these patterns for the first time.

They begin with a well-known mathematical sandbox called Polya’s urn. It starts with an urn filled with balls of different colors. A ball is withdrawn at random, inspected, and placed back in the urn with a number of other balls of the same color, thereby increasing the likelihood that this color will be selected in future.

This is a model that mathematicians use to explore rich-get-richer effects and the emergence of power laws. So it is a good starting point for a model of innovation. However, it does not naturally produce the sublinear growth that Heaps’ law predicts.

That’s because the Polya urn model allows for all the expected consequences of innovation (of discovering a certain color) but does not account for all the unexpected consequences of how an innovation influences the adjacent possible.

The upshot of the whole thing:
So Loreto, Strogatz, and co have modified Polya’s urn model to account for the possibility that discovering a new color in the urn can trigger entirely unexpected consequences. They call this model “Polya’s urn with innovation triggering.”

The exercise starts with an urn filled with colored balls. A ball is withdrawn at random, examined, and replaced in the urn.

If this color has been seen before, a number of other balls of the same color are also placed in the urn. But if the color is new—it has never been seen before in this exercise—then a number of balls of entirely new colors are added to the urn.
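
That verbal description translates almost line for line into code. A minimal simulation sketch (the parameter names rho and nu, and their default values, are my own choices rather than necessarily the paper's):

```python
import random
from itertools import count

def urn_with_innovation(steps, rho=4, nu=3, seed=0):
    """Polya's urn with innovation triggering, per the description above.
    rho: extra copies of the drawn color put back each draw;
    nu: number of brand-new colors added the first time a color is drawn."""
    random.seed(seed)
    fresh = count()                     # an endless supply of never-seen colors
    urn = [next(fresh)]                 # start with a single ball
    seen, distinct = set(), []
    for _ in range(steps):
        ball = random.choice(urn)       # draw, inspect, leave it in the urn
        urn.extend([ball] * rho)        # reinforce the drawn color
        if ball not in seen:            # a color never drawn before...
            seen.add(ball)
            urn.extend(next(fresh) for _ in range(nu))  # ...triggers novelty
        distinct.append(len(seen))
    return distinct

D = urn_with_innovation(50_000)
print(D[-1])   # distinct colors grow sublinearly with draws, Heaps-law style
```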

Loreto and co then calculate how the number of new colors picked from the urn, and their frequency distribution, changes over time. The result is that the model reproduces Heaps’ and Zipf’s laws as they appear in the real world—a mathematical first. “The model of Polya’s urn with innovation triggering presents for the first time a satisfactory first-principle based way of reproducing empirical observations,” say Loreto and co.

The team has also shown that its model predicts how innovations appear in the real world. The model accurately predicts how edit events occur on Wikipedia pages, the emergence of tags in social annotation systems, the sequence of words in texts, and how humans discover new songs in online music catalogues.

Interestingly, these systems involve two different forms of discovery. On the one hand, there are things that already exist but are new to the individual who finds them, such as online songs; and on the other are things that never existed before and are entirely new to the world, such as edits on Wikipedia.

Loreto and co call the former novelties—they are new to an individual—and the latter innovations—they are new to the world.

Curiously, the same model accounts for both phenomena. It seems that the pattern behind the way we discover novelties—new songs, books, etc.—is the same as the pattern behind the way innovations emerge from the adjacent possible.

That raises some interesting questions, not least of which is why this should be. But it also opens an entirely new way to think about innovation and the triggering events that lead to new things. “These results provide a starting point for a deeper understanding of the adjacent possible and the different nature of triggering events that are likely to be important in the investigation of biological, linguistic, cultural, and technological evolution,” say Loreto and co.

We’ll look forward to seeing how the study of innovation evolves into the adjacent possible as a result of this work.

Ref: arxiv.org/abs/1701.00994: Dynamics on Expanding Spaces: Modeling the Emergence of Novelties

Source: Mathematical Model Reveals the Patterns of How Innovations Arise


🔖 Information theory, predictability, and the emergence of complex life

Information theory, predictability, and the emergence of complex life by Luís F. Seoane and Ricard Solé (arxiv.org)
Abstract: Despite the obvious advantage of simple life forms capable of fast replication, different levels of cognitive complexity have been achieved by living systems in terms of their potential to cope with environmental uncertainty. Against the inevitable cost associated to detecting environmental cues and responding to them in adaptive ways, we conjecture that the potential for predicting the environment can overcome the expenses associated to maintaining costly, complex structures. We present a minimal formal model grounded in information theory and selection, in which successive generations of agents are mapped into transmitters and receivers of a coded message. Our agents are guessing machines and their capacity to deal with environments of different complexity defines the conditions to sustain more complex agents.

Statistical Physics, Information Processing, and Biology Workshop at Santa Fe Institute

I just found out about this from John Carlos Baez and wish I could go! How had I not heard about it sooner?

Statistical Physics, Information Processing, and Biology

Workshop

November 16, 2016 – November 18, 2016
9:00 AM
Noyce Conference Room

Abstract.
This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However, to manipulate information in this way they require a steady flux of free energy from their environments. These two interrelated attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems, by gaining deeper insight into the relation between the need for biological systems to process information and the free energy they need to pay for that processing.

The goal of this workshop is to address these issues by focusing on a set of three specific questions:

  1. How has the fraction of free energy flux on Earth that is used by biological computation changed with time?
  2. What is the free energy cost of biological computation / function?
  3. What is the free energy cost of the evolution of biological computation / function?

In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
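
The second question already has a well-known physical floor: Landauer's principle puts the minimum free energy cost of erasing one bit at kT ln 2. A quick back-of-the-envelope calculation, just to anchor the scale being asked about:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 310.0                   # roughly physiological temperature, K

landauer_J_per_bit = k_B * T * math.log(2)
print(f"{landauer_J_per_bit:.2e} J per bit erased")   # ~3e-21 J

# For scale: hydrolyzing one ATP molecule releases roughly 5e-20 J,
# about an order of magnitude above the Landauer floor for one bit.
atp_J = 5e-20
print(f"one ATP ~ {atp_J / landauer_J_per_bit:.0f}x the Landauer limit")
```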

Purpose: Research Collaboration
SFI Hosts: David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert


Hector Zenil

A new paper (arXiv) and some videos on entropy and algorithmic complexity

I’ve run across some of his work before, but I ran into some new material by Hector Zenil that will likely interest those following information theory, complexity, and computer science here. I hadn’t previously noticed that he refers to himself on his website as an “information theoretic biologist” — everyone should have that as a title, shouldn’t they? As a result, I’ve also added him to the growing list of ITBio Researchers.

If you’re not following him everywhere (?) yet, start with some of the sites below (or let me know if I’ve missed anything).

Hector Zenil:

His most recent paper on arXiv:
Low Algorithmic Complexity Entropy-deceiving Graphs | .pdf

A common practice in the estimation of the complexity of objects, in particular of graphs, is to rely on graph- and information-theoretic measures. Here, using integer sequences with properties such as Borel normality, we explain how these measures are not independent of the way in which a single object, such as a graph, can be described. From descriptions that can reconstruct the same graph and are therefore essentially translations of the same description, we will see that not only is it necessary to pre-select a feature of interest where there is one when applying a computable measure such as Shannon Entropy, and to make an arbitrary selection where there is not, but that more general properties, such as the causal likeliness of a graph as a measure (opposed to randomness), can be largely misrepresented by computable measures such as Entropy and Entropy rate. We introduce recursive and non-recursive (uncomputable) graphs and graph constructions based on integer sequences, whose different lossless descriptions have disparate Entropy values, thereby enabling the study and exploration of a measure’s range of applications and demonstrating the weaknesses of computable measures of complexity.

Subjects: Information Theory (cs.IT); Computational Complexity (cs.CC); Combinatorics (math.CO)
Cite as: arXiv:1608.05972 [cs.IT] (or arXiv:1608.05972v4 [cs.IT])
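
To see the abstract's central point in miniature (that Shannon entropy is a property of a chosen description rather than of the object itself), compare two lossless descriptions of the same small graph. The toy below is my own construction, not the paper's:

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Empirical Shannon entropy (bits/symbol) of a sequence's symbol frequencies."""
    counts, n = Counter(s), len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A star graph on 8 nodes, described two equivalent (lossless) ways.
edges = [(0, i) for i in range(1, 8)]

# Description 1: flattened adjacency-matrix bits.
adj_bits = "".join(
    "1" if (i, j) in edges or (j, i) in edges else "0"
    for i in range(8) for j in range(8)
)

# Description 2: the edge list written out as characters.
edge_str = ";".join(f"{i},{j}" for i, j in edges)

print(shannon_entropy(adj_bits))  # entropy of description 1
print(shannon_entropy(edge_str))  # a different value, same graph
```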

YouTube

Yesterday he also posted two new introductory videos to his YouTube channel. There’s nothing overly technical here, but they’re nice short productions that introduce some of his work. (I wish more scientists did communication like this.) I’m hoping he’ll post them to his blog and write a bit more there in the future as well.

Universal Measures of Complexity

Relevant literature:

Reprogrammable World

Relevant literature:

Cross-boundary Behavioural Reprogrammability Reveals Evidence of Pervasive Turing Universality by Jürgen Riedel, Hector Zenil
Preprint available at http://arxiv.org/abs/1510.01671

Ed.: 9/7/16: Updated videos with links to relevant literature


Randomness And Complexity, from Leibniz To Chaitin | World Scientific Publishing

Randomness And Complexity, from Leibniz To Chaitin by Cristian S. Calude (editor) (amzn.to)
The book is a collection of papers written by a selection of eminent authors from around the world in honour of Gregory Chaitin’s 60th birthday. This is a unique volume including technical contributions, philosophical papers and essays. Hardcover: 468 pages; Publisher: World Scientific Publishing Company (October 18, 2007); ISBN: 9789812770820

Network Science by Albert-László Barabási

Network Science by Albert-László Barabási (Cambridge University Press)

I ran across a link to this textbook by way of a standing Google alert, and was excited to check it out. I was immediately disappointed to think that I would have to wait another month and change for the physical textbook to be released, but made my pre-order directly. Then with a bit of digging around, I realized that individual chapters are available immediately to quench my thirst until the physical text is printed next month.

The power of network science, the beauty of network visualization.

Network Science, a textbook for network science, is freely available under the Creative Commons licence. Follow its development on Facebook, Twitter or by signing up to our mailing list, so that we can notify you of new chapters and developments.

The book is the result of a collaboration between a number of individuals, shaping everything, from content (Albert-László Barabási), to visualizations and interactive tools (Gabriele Musella, Mauro Martino, Nicole Samay, Kim Albrecht), simulations and data analysis (Márton Pósfai). The printed version of the book will be published by Cambridge University Press in 2016. In the coming months the website will be expanded with an interactive version of the text, datasets, and slides to teach the material.

Book Contents

Personal Introduction
1. Introduction
2. Graph Theory
3. Random Networks
4. The Scale-Free Property
5. The Barabási-Albert Model
6. Evolving Networks
7. Degree Correlations
8. Network Robustness
9. Communities
10. Spreading Phenomena
Usage & Acknowledgements
About

Albert-László Barabási
on Network Science (book website)

Networks are everywhere, from the Internet, to social networks, and the genetic networks that determine our biological existence. Illustrated throughout in full colour, this pioneering textbook, spanning a wide range of topics from physics to computer science, engineering, economics and the social sciences, introduces network science to an interdisciplinary audience. From the origins of the six degrees of separation to explaining why networks are robust to random failures, the author explores how viruses like Ebola and H1N1 spread, and why it is that our friends have more friends than we do. Using numerous real-world examples, this innovatively designed text includes clear delineation between undergraduate and graduate level material. The mathematical formulas and derivations are included within Advanced Topics sections, enabling use at a range of levels. Extensive online resources, including films and software for network analysis, make this a multifaceted companion for anyone with an interest in network science.

Source: Cambridge University Press
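
The blurb's claim that "our friends have more friends than we do" (the friendship paradox) is easy to check numerically on the kind of scale-free network the book covers. A minimal sketch using networkx, which is my library choice rather than the book's:

```python
import networkx as nx

# A Barabási-Albert scale-free network, of the kind covered in Chapter 5.
G = nx.barabasi_albert_graph(n=10_000, m=3, seed=42)

avg_degree = sum(d for _, d in G.degree()) / G.number_of_nodes()

# For each node, the average degree of its friends.
friend_degrees = [
    sum(G.degree(f) for f in G.neighbors(v)) / G.degree(v)
    for v in G.nodes()
]
avg_friend_degree = sum(friend_degrees) / len(friend_degrees)

print(f"your degree ~ {avg_degree:.1f}, your friends' ~ {avg_friend_degree:.1f}")
# The second number is larger: your friends have more friends than you do.
```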

The textbook is available for purchase in September 2016 from Cambridge University Press. Pre-order now on Amazon.com.

If you’re not already doing so, you should follow Barabási on Twitter.
