🔖 Introduction to Renormalization | Simon DeDeo | Complexity Explorer

Bookmarked Introduction to Renormalization by Simon DeDeo (Complexity Explorer)

What does a JPEG have to do with economics and quantum gravity? All of them are about what happens when you simplify world-descriptions. A JPEG compresses an image by throwing out fine structure in ways a casual glance won't detect. Economists produce theories of human behavior that gloss over the details of individual psychology. Meanwhile, even our most sophisticated physics experiments can't show us the most fundamental building-blocks of matter, and so our theories have to make do with descriptions that blur out the smallest scales. The study of how theories change as we move to more or less detailed descriptions is known as renormalization. 

This tutorial provides a modern introduction to renormalization from a complex systems point of view. Simon DeDeo will take students from basic concepts in information theory and image processing to some of the most important concepts in complexity, including emergence, coarse-graining, and effective theories. Only basic comfort with the use of probabilities is required for the majority of the material; some more advanced modules rely on more sophisticated algebra and basic calculus, but can be skipped. Solution sets include Python and Mathematica code to give more advanced learners hands-on experience with both mathematics and applications to data.

We'll introduce, in an elementary fashion, explicit examples of model-building including Markov Chains and Cellular Automata. We'll cover some new ideas for the description of complex systems including the Krohn-Rhodes theorem and State-Space Compression. And we'll show the connections between classic problems in physics, including the Ising model and plasma physics, and cutting-edge questions in machine learning and artificial intelligence.

Highlights, Quotes, Annotations, & Marginalia from Linked: The New Science Of Networks by Albert-László Barabási

Annotated Linked: The New Science Of Networks by Albert-László Barabási (Perseus Books Group)

Highlights, Quotes, Annotations, & Marginalia

Guide to highlight colors

Yellow–general highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Green–Reference to read
Blue–Interesting Quote
Gray–Typography Problem
Red–Example to work through

The First Link: Introduction

…the high barriers to becoming a Christian had to be abolished. Circumcision and the strict food laws had to be relaxed.

Highlight (yellow) – page 4

make it easier to create links!

The Second Link: The Random Universe

But when you add enough links such that each node has an average of one link, a miracle happens: A unique giant cluster emerges.

Highlight (yellow) – page 17

Random network theory tells us that as the average number of links per node increases beyond the critical one, the number of nodes left out of the giant cluster decreases exponentially.

Highlight (yellow) – page 19
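The emergence of the giant cluster at an average of one link per node is easy to see in a quick simulation. A minimal Python sketch (the node count, seeds, and thresholds are my own choices, not from the book):

```python
import random

def giant_component_fraction(n, avg_degree, seed=0):
    """Build an Erdos-Renyi random graph with n nodes and the given
    average degree, then return the fraction of nodes that sit in
    the largest connected component."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)  # link probability for each pair of nodes
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    # measure each connected component with a depth-first search
    seen, best = set(), 0
    for start in range(n):
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            node = stack.pop()
            size += 1
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, size)
    return best / n

# below one link per node the graph is dust; above it, a giant cluster
sparse = giant_component_fraction(1000, 0.5)
dense = giant_component_fraction(1000, 3.0)
```

With these settings the sparse graph's largest component holds only a few percent of the nodes, while at three links per node the giant cluster absorbs the vast majority, matching the exponential decay of left-out nodes quoted above.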

If the network is large, despite the links’ completely random placement, almost all nodes will have approximately the same number of links.

Highlight (yellow) – page 22

seminal 1959 paper of Erdős and Rényi to bookmark

Highlight (green) – page 23

“On Random Graphs. I” (PDF). Publicationes Mathematicae. 6: 290–297.

The Third Link: Six Degrees of Separation

In Így írtok ti, or This is How You Write, Frigyes Karinthy

Highlight (yellow) – page 25

But there is one story, entitled “Láncszemek,” or “Chains,” that deserves our attention

Highlight (yellow) – page 26

Karinthy’s 1929 insight that people are linked by at most five links was the first published appearance of the concept we know today as “six degrees of separation.”

Highlight (yellow) – page 27

He [Stanley Milgram] did not seem to have been aware of the body of work on networks in graph theory and most likely had never heard of Erdős and Rényi. He is known to have been influenced by the work of Ithiel de Sola Pool of MIT and Manfred Kochen of IBM, who circulated manuscripts about the small world problem within a group of colleagues for decades without publishing them, because they felt they had never “broken the back of the problem.”

Highlight (yellow) – page 36

Think about the small world problem of published research.

We don’t have a social search engine so we may never know the real number with total certainty.

Highlight (yellow) – page 39

Facebook has since measured this. As of 2016 it’s down to 3.57 degrees of separation

social network

Highlight (orange) – page 40

google the n-gram of this word to see its incidence over time. How frequent was it when this book was written? It was apparently a thing beginning in the mid-1960s.

The Fourth Link: Small Worlds

Mark Newman, a physicist at the Santa Fe Institute… had already written several papers on small worlds that are now considered classics.

Highlight (yellow) – page 49

Therefore, Watts and Strogatz’s most important discovery is that clustering does not stop at the boundary of social networks.

Highlight (yellow) – page 50

To explain the ubiquity of clustering in most real networks, Watts and Strogatz offered an alternative to Erdős and Rényi’s random network model in their 1998 study published in Nature.

Highlight (green) – page 51

Watts, D. J.; Strogatz, S. H. (1998). “Collective dynamics of ‘small-world’ networks” (PDF). Nature. 393 (6684): 440–442. Bibcode:1998Natur.393..440W. doi:10.1038/30918. PMID 9623998
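The 1998 model is simple enough to sketch. A minimal Python version (the parameter values and the BFS sampling are my own choices): start from a ring lattice, rewire each link at random with probability p, and watch the average path length collapse after just a few shortcuts.

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice of n nodes, each tied to its k nearest neighbours,
    with every link rewired to a random target with probability p."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # build the regular ring lattice
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for i in range(n):                      # rewire each lattice edge once
        for d in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + d) % n
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old)
                    adj[old].discard(i)
                    adj[i].add(new)
                    adj[new].add(i)
    return adj

def avg_path_length(adj, samples=100, seed=0):
    """Mean shortest-path length, via BFS from a sample of start nodes."""
    rng = random.Random(seed)
    total, count = 0, 0
    for s in rng.sample(list(adj), samples):
        dist, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        nxt.append(v)
            frontier = nxt
        total += sum(dist.values())
        count += len(dist) - 1
    return total / count

lattice = watts_strogatz(1000, 10, 0.0)     # pure order: long paths
small_world = watts_strogatz(1000, 10, 0.1) # a few shortcuts
l_lattice = avg_path_length(lattice)
l_rewired = avg_path_length(small_world)
```

At p = 0 the ring keeps paths long (roughly n/2k hops on average); at p = 0.1 the shortcuts bring typical separation down to a handful of hops, which is the small-world effect.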

The Fifth Link: Hubs and Connectors

The most intriguing result of our Web-mapping project was the complete absence of democracy, fairness, and egalitarian values on the Web. We learned that the topology of the Web prevents us from seeing anything but a mere handful of the billion documents out there.

Highlight (yellow) – page 56

Do Facebook and Twitter subvert some of this effect? What types of possible solutions could this give to the IndieWeb for social networking models with healthier results?

On the Web, the measure of visibility is the number of links. The more incoming links pointing to your Webpage, the more visible it is. […] Therefore, the likelihood that a typical document links to your Webpage is close to zero.

Highlight (yellow) – page 57

The hubs are the strongest argument against the utopian vision of an egalitarian cyberspace. […] In a collective manner, we somehow create hubs, Websites to which everyone links. They are very easy to find, no matter where you are on the Web. Compared to these hubs, the rest of the Web is invisible.

Highlight (yellow) – page 58

Every four years the United States inaugurates a new social hub–the president.

Highlight (yellow) – page 63
The Sixth Link: The 80/20 Rule

But every time an 80/20 rule truly applies, you can bet that there is a power law behind it. […] Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order.

Highlight (yellow) – page 72

If disorder-to-order is the case, then what is the order imposed by earthquakes, which apparently work on a power law distribution?

Leo Kadanoff, a physicist at the University of Illinois at Urbana, had a sudden insight: In the vicinity of the critical point we need to stop viewing atoms separately. Rather, they should be considered communities that act in unison. Atoms must be replaced by boxes of atoms such that within each box all atoms behave as one.

Highlight (yellow) – page 75

#phase transitions

Kenneth Wilson […] submitted simultaneously on June 2, 1971, and published in November of the same year by Physical Review B, turned statistical physics around. They proposed an elegant and all-encompassing theory of phase transitions. Wilson took the scaling ideas developed by Kadanoff and molded them into a powerful theory called renormalization. The starting point of his approach was scale invariance: He assumed that in the vicinity of the critical point the laws of physics applied in an identical manner at all scales, from single atoms to boxes containing millions of identical atoms acting in unison. By giving rigorous mathematical foundation to scale invariance, his theory spat out power laws each time he approached the critical point, the place where disorder makes room for order.

Highlight (yellow) – page 76-77
The Seventh Link: Rich Get Richer

The random model of Erdős and Rényi rests on two simple and often disregarded assumptions. First, we start with an inventory of nodes. Having all the nodes available from the beginning, we assume that the number of nodes is fixed and remains unchanged throughout the network’s life. Second, all nodes are equivalent. Unable to distinguish between the nodes, we link them randomly to each other. These assumptions were unquestioned in over forty years of network research.

Highlight (yellow) – page 81

Both the Erdős-Rényi and Watts-Strogatz models assumed that we have a fixed number of nodes that are wired together in some clever way. The networks generated by these models are therefore static, meaning that the number of nodes remains unchanged during the network’s life. In contrast, our examples suggested that for real networks the static hypothesis is not appropriate. Instead, we should incorporate growth into our network models.

Highlight (yellow) – page 83

It demonstrated, however, that growth alone cannot explain the emergence of power laws.

Highlight (yellow) – page 84

They are hubs. The better known they are, the more links point to them. The more links they attract, the easier it is to find them on the Web and so the more familiar we are with them. […] The bottom line is that when deciding where to link on the Web, we follow preferential attachment: When choosing between two pages, one with twice as many links as the other, about twice as many people link to the more connected page. While our individual choices are highly unpredictable, as a group we follow strict patterns.

Highlight (yellow) – page 85

The model is very simple, as growth and preferential attachment lead to an algorithm defined by two straightforward rules:
A. Growth: For each given period of time we add a new node to the network. This step underscores the fact that networks are assembled one node at a time.
B. Preferential attachment: We assume that each new node connects to the existing nodes with two links. The probability that it will choose a given node is proportional to the number of links the chosen node has. That is, given the choice between two nodes, one with twice as many links as the other, it is twice as likely that the new node will connect to the more connected node.

Highlight (yellow) – page 86

The how and why remain for each area of application though.
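Rules A and B above are a complete algorithm, and it fits in a few lines of Python (n, m = 2, and the seed are my own choices). Holding each node in a pool once per link it has implements "probability proportional to the number of links" without any explicit weighting:

```python
import random

def preferential_attachment(n, m=2, seed=0):
    """Grow a network one node at a time (rule A); each newcomer
    attaches m links to targets picked with probability proportional
    to their current degree (rule B)."""
    rng = random.Random(seed)
    degree = [0] * n
    pool = []  # each node appears here once per link it holds
    # start from a small fully connected seed of m + 1 nodes
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            degree[i] += 1
            degree[j] += 1
            pool += [i, j]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:              # m distinct targets
            chosen.add(rng.choice(pool))    # degree-biased pick
        for t in chosen:
            degree[new] += 1
            degree[t] += 1
            pool += [new, t]
    return degree

deg = preferential_attachment(5000)
```

The result is hub-dominated: the earliest nodes hoard links into the hundreds, while the median node keeps close to its initial m links, a crude picture of the power-law degree distribution.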

In Hollywood, 94 percent of links are internal, formed when two established actors work together for the first time.

Highlight (yellow) – page 89

These shifts in thinking created a set of opposites: static versus growing, random versus scale-free, structure versus evolution.
[…] Does the presence of power laws imply that real networks are the result of a phase transition from disorder to order? The answer we’ve arrived at is simple: Networks are not en route from a random to an ordered state. Neither are they at the edge of randomness and chaos. Rather, the scale-free topology is evidence of organizing principles acting at each stage of the network formation process. There is little mystery here, since growth and preferential attachment can explain the basic features of the networks seen in nature. No matter how large and complex a network becomes, as long as preferential attachment and growth are present it will maintain its hub-dominated scale-free topology.

Highlight (yellow) – page 91
The Eighth Link: Einstein’s Legacy

The introduction of fitness does not eliminate growth and preferential attachment, the two basic mechanisms governing network evolution. It changes, however, what is considered attractive in a competitive environment. In the scale-free model, we assumed that a node’s attractiveness was determined solely by its number of links. In a competitive environment, fitness also plays a role: Nodes with higher fitness are linked to more frequently. A simple way to incorporate fitness into the scale-free model is to assume that preferential attachment is driven by the product of the node’s fitness and the number of links it has. Each new node decides where to link by comparing the fitness connectivity product of all available nodes and linking with a higher probability to those that have a higher product and therefore are more attractive.

Highlight (yellow) – page 96
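The fitness-times-connectivity rule is easy to bolt onto a preferential-attachment sketch. A hedged Python version (uniform random fitness, plus the n, m, and seed values, are my assumptions, not Bianconi's calculation):

```python
import random

def fitness_model(n, m=2, seed=0):
    """Growth where each newcomer attaches with probability
    proportional to fitness * number_of_links, comparing the
    fitness-connectivity product of all existing nodes."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    degree = [0] * n
    for i in range(m + 1):                  # small fully connected seed
        for j in range(i + 1, m + 1):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n):
        weights = [fitness[i] * degree[i] for i in range(new)]
        total = sum(weights)
        chosen = set()
        while len(chosen) < m:              # weighted pick, m distinct targets
            r = rng.random() * total
            acc = 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    chosen.add(i)
                    break
        for t in chosen:
            degree[new] += 1
            degree[t] += 1
    return fitness, degree

fitness, degree = fitness_model(800)
hubs = sorted(range(800), key=lambda i: degree[i], reverse=True)[:10]
```

The ten biggest hubs end up with clearly above-average fitness: being early no longer guarantees winning, which is the "fit get rich" behavior discussed below.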

Bianconi’s calculations first confirmed our suspicion that in the presence of fitness the early bird is not necessarily the winner. Rather, fitness is in the driver’s seat, making or breaking the hubs.

Highlight (yellow) – page 97

But there was indeed a precise mathematical mapping between the fitness model and a Bose gas. According to this mapping, each node in the network corresponds to an energy level in the Bose gas.

Highlight (yellow) – page 101

…in some networks, the winner can take all. Just as in a Bose-Einstein condensate all particles crowd into the lowest energy level, leaving the rest of the energy levels unpopulated, in some networks the fittest node could theoretically grab all the links, leaving none for the rest of the nodes. The winner takes all.

Highlight (yellow) – page 102

But even though each system, from the Web to Hollywood, has a unique fitness distribution, Bianconi’s calculation indicated that in terms of topology all networks fall into one of only two possible categories. […] The first category includes all networks in which, despite the fierce competition for links, the scale-free topology survives. These networks display a fit-get-rich behavior, meaning that the fittest node will inevitably grow to become the biggest hub. The winner’s lead is never significant, however. The largest hub is closely followed by a smaller one, which acquires almost as many links as the fittest node. At any moment we have a hierarchy of nodes whose degree distribution follows a power law. In most complex networks, the power laws and the fight for links thus are not antagonistic but can coexist peacefully.

Highlight (yellow) – page 102

In […] the second category, the winner takes all, meaning that the fittest node grabs all the links, leaving very little for the rest of the nodes. Such networks develop a star topology. […] A winner-takes-all network is not scale-free.

Highlight (yellow) – page 102
The Ninth Link: Achilles’ Heel

…the western blackout highlighted an often ignored property of complex networks: vulnerability due to interconnectivity

Highlight (yellow) – page 110

Yet, if the number of removed nodes reaches a critical point, the system abruptly breaks into tiny unconnected islands.

Highlight (yellow) – page 112

Computer simulations we performed on networks generated by the scale-free model indicated that a significant fraction of nodes can be randomly removed from any scale-free network without its breaking apart.

Highlight (yellow) – page 113

…percolation theory, the field of physics that developed a set of tools that now are widely used in studies of random networks.

Highlight (yellow) – page 114

…they set out to calculate the fraction of nodes that must be removed from an arbitrarily chosen network, random or scale-free, to break it into pieces. On one hand, their calculation accounted for the well-known result that random networks fall apart after a critical number of nodes have been removed. On the other hand, they found that for scale-free networks the critical threshold disappears in cases where the degree exponent is smaller than or equal to three.

Highlight (yellow) – page 114

Disable a few of the hubs and a scale-free network will fall to pieces in no time.

Highlight (yellow) – page 117

If, however, a drug or an illness shuts down the genes encoding the most connected proteins, the cell will not survive.

Highlight (yellow) – page 118

Obviously, the likelihood that a local failure will handicap the whole system is much higher if we perturb the most-connected nodes. This was supported by the findings of Duncan Watts, from Columbia University, who investigated a model designed to capture the generic features of cascading failures, such as power outages, and the opposite phenomenon, the cascading popularity of books, movies, and albums, which can be described within the same framework.

Highlight (yellow) – page 120-121
The Tenth Link: Viruses and Fads

If a new product passes the crucial test of the innovators, based on their recommendation, the early adopters will pick it up.

Highlight (yellow) – page 128

What, if any, role is played by the social network in the spread of a virus or an innovation?

Highlight (yellow) – page 128

In 1954, Elihu Katz, a researcher at the Bureau of Applied Social Research at Columbia University, circulated a proposal to study the effect of social ties on behavior.

Highlight (yellow) – page 128

When it came to the spread of tetracycline, the doctors named by three or more other doctors as friends were three times more likely to adopt the new drug than those who had not been named by anybody.

Highlight (yellow) – page 129

Hubs, often referred to in marketing as “opinion leaders,” “power users,” or “influencers,” are individuals who communicate with more people about a certain product than does the average person.

Highlight (yellow) – page 129

Aiming to explain the disappearance of some fads and viruses and the spread of others, social scientists and epidemiologists developed a very useful tool called the threshold model.

Highlight (yellow) – page 131

any relation to Granovetter?

…critical threshold, a quantity determined by the properties of the network in which the innovation spreads.

Highlight (yellow) – page 131

For decades, a simple but powerful paradigm dominated our treatment of diffusion problems. If we wanted to estimate the probability that an innovation would spread, we needed only to know its spreading rate and the critical threshold it faced. Nobody questioned this paradigm. Recently, however, we have learned that some viruses and innovations are oblivious to it.

Highlight (yellow) – page 132

On the Internet, computers are not connected to each other randomly.

Highlight (yellow) – page 135

In scale-free networks the epidemic threshold miraculously vanished!

Highlight (yellow) – page 135
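The vanishing threshold follows from the mean-field estimate lambda_c = ⟨k⟩/⟨k²⟩ (the Pastor-Satorras and Vespignani argument): hubs blow up ⟨k²⟩ and drive the threshold toward zero. A quick Python check on two made-up degree sequences (the sequences themselves are my assumptions):

```python
def epidemic_threshold(degrees):
    """Mean-field estimate lambda_c = <k> / <k^2>: the spreading
    rate below which an outbreak on a network with this degree
    sequence dies out."""
    n = len(degrees)
    k1 = sum(degrees) / n
    k2 = sum(d * d for d in degrees) / n
    return k1 / k2

# homogeneous random network: every node sits near the average degree
random_net = [6] * 10_000

# scale-free sketch: node counts falling off as k^-3, with at least
# one node per degree out to k = 999 to stand in for the big hubs
scale_free = []
for k in range(2, 1000):
    scale_free.extend([k] * max(1, int(1_000_000 * k ** -3)))

t_random = epidemic_threshold(random_net)
t_scale_free = epidemic_threshold(scale_free)
```

For the homogeneous network the threshold sits at 1/6; for the scale-free sequence the hubs inflate ⟨k²⟩ so much that the threshold drops below one percent, which is the sense in which it "vanishes" as the network grows.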

Hubs are among the first infected thanks to their numerous sexual contacts. Once infected, they quickly infect hundreds of others. If our sex web formed a homogeneous, random network, AIDS might have died out long ago. The scale-free topology at AIDS’s disposal allowed the virus to spread and persist.

Highlight (yellow) – page 138

As we’ve established, hubs play a key role in these processes. Their unique role suggests a bold but cruel solution: As long as resources are finite we should treat only the hubs. That is, when a treatment exists but there is not enough money to offer it to everybody who needs it, we should primarily give it to the hubs. (Pastor-Satorras and Vespignani; and Zoltan Dezso)

Highlight (yellow) – page 139

Are we prepared to abandon the less connected patients for the benefit of the population at large?

Highlight (yellow) – page 140
The Eleventh Link: The Awakening Internet

They [Michalis Faloutsos, Petros Faloutsos, and Christos Faloutsos] found that the connectivity distribution of the Internet routers follows a power law. In their seminal paper “On Power-Law Relationships of the Internet Topology” they showed that the Internet […] is a scale-free network.

Highlight (yellow) – page 150

Routers offering more bandwidth likely have more links as well. […] This simple effect is a possible source of preferential attachment. We do not know for sure whether it is the only one, but preferential attachment is unquestionably present on the Internet.

Highlight (yellow) – page 152

After many discussions and tutorials on how computers communicate, a simple but controversial idea emerged: parasitic computing.

Highlight (yellow) – page 156
The Twelfth Link: The Fragmented Web

Starting from any page (on the Internet), we can reach only about 24 percent of all documents.

Highlight (yellow) – page 165

If you want to go from A to D, you can start from node A, then go to node B, which has a link to node C, which points to D. But you can’t make a round-trip.

Highlight (yellow) – page 166

Not necessarily the case with bidirectional webmentions.

[Cass] Sunstein fears that by limiting access to conflicting viewpoints, the emerging online universe encourages segregation and social fragmentation. Indeed, the mechanisms behind social and political isolation on the Web are self-reinforcing.

Highlight (yellow) – page 170

Looks like we’ve known this for a very long time! Sadly it’s coming to a head in the political space of 2016 onward.

Communities are essential components of human social history. Granovetter’s circles of friends, the elementary building blocks of communities, pointed to this fact. […]

Highlight (yellow) – page 170

early indications that Facebook could be a thing…

One reason is that there are no sharp boundaries between various communities. Indeed, the same Website can belong simultaneously to different groups. For example, a physicist’s Webpage might mix links to physics, music, and mountain climbing, combining professional interests with hobbies. In which community should we place such a page? The size of communities also varies a lot. For example, while the community interested in “cryptography” is small and relatively easy to locate, the one consisting of devotees of “English literature” is much harder to identify and fragmented into many subcommunities ranging from Shakespeare enthusiasts to Kurt Vonnegut fans.

Highlight (yellow) – page 171

Searching for this type of community is an NP-complete problem. This section may be of interest to Brad Enslen and Kicks Condor. Cross-reference research suggested by Gary Flake, Steve Lawrence, and Lee Giles from NEC.

Such differences in the structure of competing communities have important consequences for their ability to market and organize themselves for a common cause.

Highlight (yellow) – page 172

He continues to talk about how the pro-life movement is better connected and therefore better equipped to fight against the pro-choice movement.

Code–or software–is the bricks and mortar of cyberspace. The architecture is what we build, using the code as building blocks. The great architects of human history, from Michelangelo to Frank Lloyd Wright, demonstrated that, whereas raw materials are limited, the architectural possibilities are not. Code can curtail behavior, and it does influence architecture. It does not uniquely determine it, however.

Highlight (yellow) – page 174

Added on November 3, 2018 at 5:26 PM

Yes, we do have free speech on the Web. Chances are, however, that our voices are too weak to be heard. Pages with only a few incoming links are impossible to find by casual browsing. Instead, over and over we are steered toward the hubs. It is tempting to believe that robots can avoid this popularity-driven trap.

Highlight (yellow) – page 174

Facebook and Twitter applications? Algorithms help to amplify “unheard” voices to some extent, but gamifying the reading can also get people to read more (crap) than they were reading before because it’s so easy.

Your ability to find my Webpage is determined by one factor only: its position on the Web.

Highlight (yellow) – page 175

Facebook takes advantage of this with their algorithm

Thus the Web’s large-scale topology–that is, its true architecture–enforces more severe limitations on our behavior and visibility on the Web than government or industry could ever achieve by tinkering with the code. Regulations come and go, but the topology and the fundamental natural laws governing it are time invariant. As long as we continue to delegate to the individual the choice of where to link, we will not be able to significantly alter the Web’s large-scale topology, and we will have to live with the consequences.

Highlight (yellow) – page 175

hmmm?

After selling Alexa to Amazon.com in 1999

Highlight (yellow) – page 175

Brewster Kahle’s Alexa Internet company is apparently the root of the Amazon Alexa?

The Thirteenth Link: The Map of Life

To return to our car analogy, it is…

Highlight (yellow) – page 181

Where before? I don’t recall this at all. Did it get removed from the text?

Annotation (yellow) – page 183

ref somewhere about here… personalized medicine

After researching the available databases, we settled on a new one, run by the Argonne National Laboratory outside Chicago, nicknamed “What Is There?” which compiled the metabolic network of forty-three diverse organisms.

Highlight (yellow) – page 185

…for the vast majority of organisms the ten most-connected molecules are the same. Adenosine triphosphate (ATP) is almost always the biggest hub, followed closely by adenosine diphosphate (ADP) and water.

Highlight (yellow) – page 186

A key prediction of the scale-free model is that nodes with a large number of links are those that have been added early to the network. In terms of metabolism this would imply that the most connected molecules should be the oldest ones within the cell. […] Therefore, the first mover advantage seems to pervade the emergence of life as well.

Highlight (yellow) – page 186

Comparing the metabolic network of all forty-three organisms, we found that only 4 percent of the molecules appear in all of them.

Highlight (yellow) – page 186

Developed by Stanley Fields in 1989, the two-hybrid method offers a relatively rapid semiautomated technique for detecting protein-protein interactions.

Highlight (yellow) – page 188

They [the results of work by Oltvai, Jeong, Barabasi, Mason (2000)] demonstrated that the protein interaction network has a scale-free topology.

Highlight (yellow) – page 188

…the cell’s scale-free topology is a result of a common mistake cells make while reproducing.

Highlight (yellow) – page 189

In short, it is now clear that the number of genes is not proportional to our perceived complexity.

Highlight (yellow) – page 197
The Fourteenth Link: Network Economy

We have learned that a sparse network of a few powerful directors controls all major appointments in Fortune 1000 companies; […]

Highlight (yellow) – page 200

Regardless of industry and scope, the network behind all twentieth century corporations has the same structure: It is a tree, where the CEO occupies the root and the bifurcating branches represent the increasingly specialized and nonoverlapping tasks of lower-level managers and workers. Responsibility decays as you move down the branches, ending with the drone executors of orders conceived at the roots.

Highlight (yellow) – page 201

Only for completely top-down, but what about bottom-up or middle-out?

We have gotten to the point that we can produce anything that we can dream of. The expensive question now is, what should that be?

Highlight (yellow) – page 201

It is a fundamental rethinking of how to respond to the new business environment in the postindustrial era, dubbed the information economy.

Highlight (orange) – page 201

This is likely late, but certainly an early instance of “information economy” in popular literature.

Therefore, companies aiming to compete in a fast-moving marketplace are shifting from a static and optimized tree into a dynamic and evolving web, offering a more malleable, flexible command structure.

Highlight (yellow) – page 202

While 79 percent of directors serve on only one board, 14 percent serve on two, and about 7 percent serve on three or more.

Highlight (yellow) – page 204

Indeed, the number of companies that entered in partnership with exactly k other institutions, representing the number of links they have within the network, followed a power law, the signature of a scale-free topology.

Highlight (yellow) – page 207

Makes me wonder if the 2008 economic collapse could have been predicted by “weak” links?

As research, innovation, product development, and marketing become more and more specialized and divorced from each other, we are converging to a network economy in which strategic alliances and partnerships are the means for survival in all industries.

Highlight (yellow) – page 208

This is troubling in the current political climate where there is little if any trust or truth being spread around by the leader of the Republican party.

As Walter W. Powell writes in Neither Market nor Hierarchy: Network Forms of Organization, “in markets the standard strategy is to drive the hardest possible bargain on the immediate exchange. In networks, the preferred option is often creating indebtedness and reliance over the long haul.” Therefore, in a network economy, buyers and suppliers are not competitors but partners. The relationship between them is often very long lasting and stable.

Highlight (yellow) – page 208

Trump vs. Trump

The stability of these links allows companies to concentrate on their core business. If these partnerships break down, the effects can be severe. Most of the time failures handicap only the partners of the broken link. Occasionally, however, they send ripples through the whole economy. As we will see next, macroeconomic failures can throw entire nations into deep financial disarray, while failures in corporate partnerships can severely damage the jewels of the new economy.

Highlight (yellow) – page 209

In some sense this predicts the effects of the 2008 downturn.

outsourcing

Highlight (orange) – page 212

early use of the word?

A me attitude, where the company’s immediate financial balance is the only factor, limits network thinking. Not understanding how the actions of one node affect other nodes easily cripples whole segments of the network.

Highlight (yellow) – page 212

Hierarchical thinking does not fit a network economy.

Highlight (yellow) – page 213
The Last Link: Web Without a Spider

We must help eliminate the need and desire of the nodes to form links to terrorist organizations by offering them a chance to belong to more constructive and meaningful webs.

Highlight (yellow) – page 214

And for poverty and gangs as well as immigration.

Their work has a powerful philosophy: “revelation through concealment.” By hiding the details they allow us to focus entirely on the form. The wrapping sharpens our vision, making us more aware and observant, turning ordinary objects into monumental sculptures and architectural pieces.

Highlight (yellow) – page 225

not too dissimilar to the font I saw today for memory improvement

🎧 Episode 077 Exploring Artificial Intelligence with Melanie Mitchell | HumanCurrent

Listened to Episode 077: Exploring Artificial Intelligence with Melanie Mitchell by Haley Campbell-Gross from HumanCurrent

What is artificial intelligence? Could unintended consequences arise from increased use of this technology? How will the role of humans change with AI? How will AI evolve in the next 10 years?

In this episode, Haley interviews leading Complex Systems Scientist, Professor of Computer Science at Portland State University, and external professor at the Santa Fe Institute, Melanie Mitchell. Professor Mitchell answers many profound questions about the field of artificial intelligence and gives specific examples of how this technology is being used today. She also provides some insights to help us navigate our relationship with AI as it becomes more popular in the coming years.

Melanie Mitchell

I knew Dr. Mitchell was working on a book during her hiatus, but didn’t know it was potentially coming out so soon! I loved her last book and can’t wait to get this one. Sadly, there’s no pre-order copies available at any of the usual suspects yet.

👓 Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells | Complexity Explorer | Santa Fe Institute

Read Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells (Complexity Explorer | Santa Fe Institute)

About the Course:

Probability and statistics have long helped scientists make sense of data about the natural world — to find meaningful signals in the noise. But classical statistics prove a little threadbare in today’s landscape of large datasets, which are driving new insights in disciplines ranging from biology to ecology to economics. It's as true in biology, with the advent of genome sequencing, as it is in astronomy, with telescope surveys charting the entire sky.

The data have changed. Maybe it's time our data analysis tools did, too.

During this three-month online course, starting June 11th, instructors Hector Zenil and Narsis Kiani will introduce students to concepts from the exciting new field of Algorithmic Information Dynamics to search for solutions to fundamental questions about causality — that is, why a particular set of circumstances leads to a particular outcome.

Algorithmic Information Dynamics (or Algorithmic Dynamics for short) is a new type of discrete calculus, based on computer programming, that studies causation by generating mechanistic models, with the aim of finding first principles of physical phenomena and building the next generation of machine learning.

The course covers key aspects from graph theory and network science, information theory, dynamical systems and algorithmic complexity. It will venture into ongoing research in fundamental science and its applications to behavioral, evolutionary and molecular biology.
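As a quick aside while reading this: the central idea of algorithmic complexity (the length of the shortest program that produces a string) can be crudely illustrated with an off-the-shelf compressor. This is not the course's own method — its instructors use far more sophisticated estimators — just a rough upper-bound sketch using Python's standard `zlib` module:

```python
import random
import zlib

def compression_complexity(s: str) -> int:
    """Crude upper bound on algorithmic (Kolmogorov) complexity:
    the length in bytes of the zlib-compressed string."""
    return len(zlib.compress(s.encode("utf-8"), 9))

# A highly regular string has a short description ("repeat '01' 500 times")...
regular = "01" * 500

# ...while a pseudorandom string of the same length resists compression.
random.seed(42)
irregular = "".join(random.choice("01") for _ in range(1000))

print(compression_complexity(regular), compression_complexity(irregular))
```

A compressor is only a loose stand-in for true Kolmogorov complexity, which is uncomputable, but it captures the intuition that regular strings admit short descriptions while random ones do not.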

Prerequisites:
Students should have basic knowledge of college-level math or physics, though optional sessions will help students with more technical concepts. Basic computer programming skills are also desirable, though not required. The course does not require students to adopt any particular programming language, though the Wolfram Language will be used most often; the instructors will share a great deal of code written in this language that students will be able to use, study, and adapt for their own purposes.

Course Outline:

  • The course will begin with a conceptual overview of the field.
  • Then it will review foundational theories like basic concepts of statistics and probability, notions of computability and algorithmic complexity, and brief introductions to graph theory and dynamical systems.
  • Finally, the course explores new measures and tools related to reprogramming artificial and biological systems. It will showcase the tools and framework in applications to systems biology, genetic networks and cognition by way of behavioral sequences.
  • Students will be able to apply the tools to their own data and problems. The instructors will explain in detail how to do this, and will provide all the tools and code to do so.

The course runs 11 June through 03 September 2018.

Tuition is $50, which is required for access to the course material during the course and for a certificate at the end. The course is free to watch, but if no fee is paid the materials will not be available until the course closes. Donations are highly encouraged and appreciated in support of SFI's Complexity Explorer, so it can continue offering new courses.

In addition to all course materials tuition includes:

  • Six-month access to the Wolfram|One platform (potentially renewable for another six months), worth 150 to 300 USD.
  • Free digital copy of the course textbook to be published by Cambridge University Press.
  • Several gifts will be given away to the top students finishing the course; check the FAQ page for more details.

Students with the best final projects will be invited to expand their results and submit them to the journal Complex Systems, the first journal in the field, founded by Stephen Wolfram in 1987.

About the Instructor(s):

Hector Zenil has a PhD in Computer Science from the University of Lille 1 and a PhD in Philosophy and Epistemology from the Pantheon-Sorbonne University of Paris. He co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. He is also the head of the Algorithmic Nature Group at LABoRES, the Paris-based lab that started the Online Algorithmic Complexity Calculator and the Human Randomness Perception and Generation Project. Previously, he was a Research Associate at the Behavioural and Evolutionary Theory Lab at the Department of Computer Science at the University of Sheffield in the UK before joining the Department of Computer Science, University of Oxford as a faculty member and senior researcher.

Narsis Kiani has a PhD in Mathematics and has been a postdoctoral researcher at Dresden University of Technology and at the University of Heidelberg in Germany. She has been a VINNOVA Marie Curie Fellow and Assistant Professor in Sweden. She co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. Narsis is also a member of the Algorithmic Nature Group, LABoRES.

Hector and Narsis are the leaders of the Algorithmic Dynamics Lab at the Unit of Computational Medicine at Karolinska Institute.

TA:
Alyssa Adams has a PhD in Physics from Arizona State University and studies what makes living systems different from non-living ones. She currently works at Veda Data Solutions as a data scientist and researcher in social complex systems that are represented by large datasets. She completed an internship at Microsoft Research, Cambridge, UK studying machine learning agents in Minecraft, which is an excellent arena for simple and advanced tasks related to living and social activity. Alyssa is also a member of the Algorithmic Nature Group, LABoRES.

The development of the course and material offered has been supported by: 

  • The Foundational Questions Institute (FQXi)
  • Wolfram Research
  • John Templeton Foundation
  • Santa Fe Institute
  • Swedish Research Council (Vetenskapsrådet)
  • Algorithmic Nature Group, LABoRES for the Natural and Digital Sciences
  • Living Systems Lab, King Abdullah University of Science and Technology.
  • Department of Computer Science, Oxford University
  • Cambridge University Press
  • London Mathematical Society
  • Springer Verlag
  • ItBit for the Natural and Computational Sciences and, of course,
  • the Algorithmic Dynamics lab, Unit of Computational Medicine, SciLifeLab, Center for Molecular Medicine, The Karolinska Institute


Course dates: 11 Jun 2018 9pm PDT to 03 Sep 2018 10pm PDT


Syllabus

  1. A Computational Approach to Causality
  2. A Brief Introduction to Graph Theory and Biological Networks
  3. Elements of Information Theory and Computability
  4. Randomness and Algorithmic Complexity
  5. Dynamical Systems as Models of the World
  6. Practice, Technical Skills and Selected Topics
  7. Algorithmic Information Dynamics and Reprogrammability
  8. Applications to Behavioural, Evolutionary and Molecular Biology

FAQ

Another interesting course from the SFI. Looks like an interesting way to spend the summer.

SFI and ASU to offer online M.S. in Complexity | Complexity Explorer

Bookmarked SFI and ASU to offer online M.S. in Complexity (Complexity Explorer)
SFI and Arizona State University soon will offer the world’s first comprehensive online master’s degree in complexity science. It will be the Institute’s first graduate degree program, a vision that dates to SFI’s founding. “With technology, a growing recognition of the value of online education, widespread acceptance of complexity science, and in partnership with ASU, we are now able to offer the world a degree in the field we helped invent,” says SFI President David Krakauer, “and it will be taught by the very people who built it into a legitimate domain of scholarship.”

Updated: Santa Fe feeling the effects of Trump’s policies

Read Updated: Santa Fe feeling the effects of Trump's policies by Andy Stiny (abqjournal.com)
Reports reveal that some invitees will now not travel to events in U.S.

Kenneth Arrow, Nobel-Winning Economist Whose Influence Spanned Decades, Dies at 95 | The New York Times

Read Kenneth Arrow, Nobel-Winning Economist Whose Influence Spanned Decades, Dies at 95 by Michael M. Weinstein (New York Times)
Professor Arrow, one of the most brilliant minds in his field during the 20th century, became the youngest economist ever to earn a Nobel at the age of 51.

🔖 How Life (and Death) Spring From Disorder | Quanta Magazine

Bookmarked How Life (and Death) Spring From Disorder by Philip Ball (Quanta Magazine)
Life was long thought to obey its own set of rules. But as simple systems show signs of lifelike behavior, scientists are arguing about whether this apparent complexity is all a consequence of thermodynamics.

This is a nice little general interest article by Philip Ball that does a relatively good job of covering several of my favorite topics (information theory, biology, complexity) for the layperson. While it stays relatively basic, it links to a handful of really great references, many of which I’ve already read, though several appear to be new to me. [1][2][3][4][5][6][7][8][9][10]

While Ball has a broad area of interests and coverage in his work, he’s certainly one of the best journalists working in this subarea of interests today. I highly recommend his work to those who find this area interesting.

References

[1]
E. Mayr, What Makes Biology Unique? Cambridge University Press, 2004.
[2]
A. Wissner-Gross and C. Freer, “Causal entropic forces.,” Phys Rev Lett, vol. 110, no. 16, p. 168702, Apr. 2013. [PubMed]
[3]
A. Barato and U. Seifert, “Thermodynamic uncertainty relation for biomolecular processes.,” Phys Rev Lett, vol. 114, no. 15, p. 158101, Apr. 2015. [PubMed]
[4]
J. Shay and W. Wright, “Hayflick, his limit, and cellular ageing.,” Nat Rev Mol Cell Biol, vol. 1, no. 1, pp. 72–6, Oct. 2000. [PubMed]
[5]
X. Dong, B. Milholland, and J. Vijg, “Evidence for a limit to human lifespan,” Nature, vol. 538, no. 7624. Springer Nature, pp. 257–259, 05-Oct-2016 [Online]. Available: http://dx.doi.org/10.1038/nature19793
[6]
H. Morowitz and E. Smith, “Energy Flow and the Organization of Life,” Santa Fe Institute, 07-Aug-2006. [Online]. Available: http://samoa.santafe.edu/media/workingpapers/06-08-029.pdf. [Accessed: 03-Feb-2017]
[7]
R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM Journal of Research and Development, vol. 5, no. 3. IBM, pp. 183–191, Jul-1961 [Online]. Available: http://dx.doi.org/10.1147/rd.53.0183
[8]
C. Rovelli, “Meaning = Information + Evolution,” arXiv, Nov. 2006 [Online]. Available: https://arxiv.org/abs/1611.02420
[9]
N. Perunov, R. A. Marsland, and J. L. England, “Statistical Physics of Adaptation,” Physical Review X, vol. 6, no. 2. American Physical Society (APS), 16-Jun-2016 [Online]. Available: http://dx.doi.org/10.1103/PhysRevX.6.021036 [Source]
[10]
S. Still, D. A. Sivak, A. J. Bell, and G. E. Crooks, “Thermodynamics of Prediction,” Physical Review Letters, vol. 109, no. 12. American Physical Society (APS), 19-Sep-2012 [Online]. Available: http://dx.doi.org/10.1103/PhysRevLett.109.120604 [Source]

Statistical Physics, Information Processing, and Biology Workshop at Santa Fe Institute

Bookmarked Information Processing and Biology by John Carlos Baez (Azimuth)
The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop.

I just found out about this from John Carlos Baez and wish I could go! How have I not managed to have heard about it?

Statistical Physics, Information Processing, and Biology

Workshop

November 16, 2016 – November 18, 2016
9:00 AM
Noyce Conference Room

Abstract.
This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However, to manipulate information in this way they require a steady flux of free energy from their environments. These two inter-related attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems by gaining deeper insight into the relation between the need for biological systems to process information and the free energy they need to pay for that processing.

The goal of this workshop is to address these issues by focusing on a set of three specific questions:

  1. How has the fraction of free energy flux on earth that is used by biological computation changed with time?
  2. What is the free energy cost of biological computation / function?
  3. What is the free energy cost of the evolution of biological computation / function?

In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
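A concrete physical anchor for the second question is Landauer's principle: erasing one bit of information dissipates at least k_B · T · ln 2 of free energy. A minimal sketch of the arithmetic (my own illustration, not workshop material):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI definition)

def landauer_limit(temperature_kelvin: float) -> float:
    """Minimum free energy in joules dissipated to erase one bit,
    per Landauer's principle: E = k_B * T * ln 2."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (~300 K) the bound is a few zeptojoules per bit:
print(f"{landauer_limit(300):.3e} J")  # ≈ 2.871e-21 J
```

Present-day hardware dissipates many orders of magnitude more than this per bit operation, which is part of why these fundamental limits are interesting for biology: cells operate far closer to them than our machines do.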

Purpose: Research Collaboration
SFI Host: David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert

Introduction to Information Theory | SFI’s Complexity Explorer

The Santa Fe Institute's free online course "Introduction to Information Theory" taught by Seth Lloyd via Complexity Explorer.

Many readers often ask me for resources for delving into the basics of information theory. I hadn’t posted it before, but the Santa Fe Institute recently had an online course Introduction to Information Theory through their Complexity Explorer, which has some other excellent offerings. It included videos, fora, and other resources and was taught by the esteemed physicist and professor Seth Lloyd. There are a number of currently active students still learning and posting there.

Introduction to Information Theory

About the Tutorial:

This tutorial introduces fundamental concepts in information theory. Information theory has made considerable impact in complex systems, and has in part co-evolved with complexity science. Research areas ranging from ecology and biology to aerospace and information technology have all seen benefits from the growth of information theory.

In this tutorial, students will follow the development of information theory from bits to modern application in computing and communication. Along the way Seth Lloyd introduces valuable topics in information theory such as mutual information, boolean logic, channel capacity, and the natural relationship between information and entropy.

Lloyd coherently covers a substantial amount of material while limiting discussion of the mathematics involved. When formulas or derivations are considered, Lloyd describes the mathematics such that less advanced math students will find the tutorial accessible. Prerequisites for this tutorial are an understanding of logarithms, and at least a year of high-school algebra.
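For readers who want to poke at the core quantities before diving into the tutorial, Shannon entropy and mutual information are short enough to compute by hand. A minimal sketch in plain Python (my own, not course material):

```python
import math

def entropy(p):
    """Shannon entropy H = -sum p log2 p, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), for a joint distribution
    given as a dict mapping (x, y) pairs to probabilities."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return entropy(px.values()) + entropy(py.values()) - entropy(joint.values())

# A fair coin carries exactly one bit:
print(entropy([0.5, 0.5]))  # 1.0

# Two perfectly correlated fair coins share one bit of mutual information:
print(mutual_information({(0, 0): 0.5, (1, 1): 0.5}))  # 1.0
```

The identity I(X;Y) = H(X) + H(Y) − H(X,Y) is exactly the "natural relationship between information and entropy" the tutorial builds toward.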

About the Instructor(s):

Professor Seth Lloyd is a principal investigator in the Research Laboratory of Electronics (RLE) at the Massachusetts Institute of Technology (MIT). He received his A.B. from Harvard College in 1982, the Certificate of Advanced Study in Mathematics (Part III) and an M. Phil. in Philosophy of Science from Cambridge University in 1983 and 1984 under a Marshall Fellowship, and a Ph.D. in Physics in 1988 from Rockefeller University under the supervision of Professor Heinz Pagels.

From 1988 to 1991, Professor Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Professor Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Since 1988, Professor Lloyd has also been an adjunct faculty member at the Santa Fe Institute.

Professor Lloyd has performed seminal work in the fields of quantum computation and quantum communications, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon’s noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.

Professor Lloyd is a member of the American Physical Society and the American Society of Mechanical Engineers.

Tutorial Team:

Yoav Kallus is an Omidyar Fellow at the Santa Fe Institute. His research at the boundary of statistical physics and geometry looks at how and when simple interactions lead to the formation of complex order in materials and when preferred local order leads to system-wide disorder. Yoav holds a B.Sc. in physics from Rice University and a Ph.D. in physics from Cornell University. Before joining the Santa Fe Institute, Yoav was a postdoctoral fellow at the Princeton Center for Theoretical Science in Princeton University.

Prerequisites: At least one year of high-school algebra


Syllabus

  1. Introduction
  2. Forms of Information
  3. Information and Probability
  4. Fundamental Formula of Information
  5. Computation and Logic: Information Processing
  6. Mutual Information
  7. Communication Capacity
  8. Shannon’s Coding Theorem
  9. The Manifold Things Information Measures
  10. Homework