Tag: Santa Fe Institute
Added by PressForward
Personal news: I'm excited to announce that, starting in February, I will be in residence at the Santa Fe Institute for one year, as the inaugural Davis Professor of Complexity. I'm thrilled for this opportunity to work more closely with all the brilliant people at @sfiscience. pic.twitter.com/sjQ0QxiZGB
— Melanie Mitchell (@MelMitchell1) January 3, 2020
🔖 Introduction to Renormalization | Simon DeDeo | Complexity Explorer
What does a JPEG have to do with economics and quantum gravity? All of them are about what happens when you simplify world-descriptions. A JPEG compresses an image by throwing out fine structure in ways a casual glance won't detect. Economists produce theories of human behavior that gloss over the details of individual psychology. Meanwhile, even our most sophisticated physics experiments can't show us the most fundamental building-blocks of matter, and so our theories have to make do with descriptions that blur out the smallest scales. The study of how theories change as we move to more or less detailed descriptions is known as renormalization.
This tutorial provides a modern introduction to renormalization from a complex systems point of view. Simon DeDeo will take students from basic concepts in information theory and image processing to some of the most important concepts in complexity, including emergence, coarse-graining, and effective theories. Only basic comfort with the use of probabilities is required for the majority of the material; some more advanced modules rely on more sophisticated algebra and basic calculus, but can be skipped. Solution sets include Python and Mathematica code to give more advanced learners hands-on experience with both mathematics and applications to data.
We'll introduce, in an elementary fashion, explicit examples of model-building including Markov Chains and Cellular Automata. We'll cover some new ideas for the description of complex systems including the Krohn-Rhodes theorem and State-Space Compression. And we'll show the connections between classic problems in physics, including the Ising model and plasma physics, and cutting-edge questions in machine learning and artificial intelligence.
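As a concrete taste of coarse-graining, here is a minimal sketch (my own illustration, not taken from the tutorial) that block-averages a random lattice of ±1 spins with a 2×2 majority rule, the same deliberate "throwing out of fine structure" that the JPEG and Ising examples point to. The block size and tie-breaking rule are arbitrary choices.

```python
import numpy as np

def coarse_grain(spins, block=2):
    """Replace each block x block patch of +/-1 spins by its majority sign (ties -> +1)."""
    n = spins.shape[0] // block
    out = np.empty((n, n), dtype=int)
    for i in range(n):
        for j in range(n):
            patch = spins[i * block:(i + 1) * block, j * block:(j + 1) * block]
            out[i, j] = 1 if patch.sum() >= 0 else -1
    return out

rng = np.random.default_rng(0)
fine = rng.choice([-1, 1], size=(64, 64))    # a random fine-grained configuration
coarse = coarse_grain(fine)                  # 32 x 32 effective description
print(fine.shape, "->", coarse.shape)
```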
Highlights, Quotes, Annotations, & Marginalia from Linked: The New Science of Networks by Albert-László Barabási
Highlights, Quotes, Annotations, & Marginalia
Guide to highlight colors
Yellow–general highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Green–Reference to read
Blue–Interesting Quote
Gray–Typography Problem
Red–Example to work through
…the high barriers to becoming a Christian had to be abolished. Circumcision and the strict food laws had to be relaxed.
make it easier to create links!
But when you add enough links such that each node has an average of one link, a miracle happens: A unique giant cluster emerges.
Random network theory tells us that as the average number of links per node increases beyond the critical one, the number of nodes left out of the giant cluster decreases exponentially.
If the network is large, despite the links’ completely random placement, almost all nodes will have approximately the same number of links.
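A quick way to see the transition described above is to generate Erdős–Rényi graphs around an average of one link per node and watch the size of the largest cluster. This is a small sketch of my own using networkx; the library choice and graph sizes are not anything from the book.

```python
import networkx as nx

n = 5000
for avg_degree in [0.5, 1.0, 1.5, 3.0]:
    p = avg_degree / (n - 1)                  # G(n, p) with the desired mean degree
    G = nx.gnp_random_graph(n, p, seed=42)
    giant = max(nx.connected_components(G), key=len)
    print(f"<k> = {avg_degree:3.1f}  largest cluster: {len(giant) / n:.1%} of nodes")
```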
seminal 1959 paper of Erdős and Rényi to bookmark
“On Random Graphs. I” (PDF). Publicationes Mathematicae. 6: 290–297.
In Így írtok ti, or This is How You Write, Frigyes Karinthy
But there is one story, entitled “Láncszemek,” or “Chains,” that deserves our attention
Karinthy’s 1929 insight that people are linked by at most five links was the first published appearance of the concept we know today as “six degrees of separation.”
He [Stanley Milgram] did not seem to have been aware of the body of work on networks in graph theory and most likely had never heard of Erdős and Rényi. He is known to have been influenced by the work of Ithiel de Sola Pool of MIT and Manfred Kochen of IBM, who circulated manuscripts about the small world problem within a group of colleagues for decades without publishing them, because they felt they had never “broken the back of the problem.”
Think about the small world problem of published research.
We don’t have a social search engine so we may never know the real number with total certainty.
Facebook has since addressed this. As of 2016 it’s down to 3.57 degrees of separation.
social network
Google the n-gram of this word to see its incidence over time. How frequent was it when this book was written? It was apparently a thing beginning in the mid-1960s.
Mark Newman, a physicist at the Santa Fe Institute… had already written several papers on small worlds that are now considered classics.
Therefore, Watts and Strogatz’s most important discovery is that clustering does not stop at the boundary of social networks.
To explain the ubiquity of clustering in most real networks, Watts and Strogatz offered an alternative to Erdős and Rényi’s random network model in their 1998 study published in Nature.
Watts, D. J.; Strogatz, S. H. (1998). “Collective dynamics of ‘small-world’ networks” (PDF). Nature. 393 (6684): 440–442. Bibcode:1998Natur.393..440W. doi:10.1038/30918. PMID 9623998
The most intriguing result of our Web-mapping project was the complete absence of democracy, fairness, and egalitarian values on the Web. We learned that the topology of the Web prevents us from seeing anything but a mere handful of the billion documents out there.
Do Facebook and Twitter subvert some of this effect? What types of possible solutions could this give to the IndieWeb for social networking models with healthier results?
On the Web, the measure of visibility is the number of links. The more incoming links pointing to your Webpage, the more visible it is. […] Therefore, the likelihood that a typical document links to your Webpage is close to zero.
The hubs are the strongest argument against the utopian vision of an egalitarian cyberspace. […] In a collective manner, we somehow create hubs, Websites to which everyone links. They are very easy to find, no matter where you are on the Web. Compared to these hubs, the rest of the Web is invisible.
Every four years the United States inaugurates a new social hub–the president.
But every time an 80/20 rule truly applies, you can bet that there is a power law behind it. […] Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order.
If the disorder-to-order transition is the case, then what is the order imposed by earthquakes, which apparently follow a power law distribution?
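As a rough numerical illustration of the 80/20 claim (my own back-of-the-envelope sketch, not a calculation from the book), one can sample node degrees from a power law with a Web-like exponent of roughly 2.1 and ask what share of all links the best-connected fifth of nodes ends up holding. The exponent and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 2.1
# Inverse-transform sampling of a continuous Pareto-like degree with minimum 1:
# CDF F(x) = 1 - x^-(gamma - 1), so x = (1 - u)^(-1 / (gamma - 1)).
degrees = (1 - rng.random(100_000)) ** (-1 / (gamma - 1))
degrees.sort()
top20 = degrees[int(0.8 * len(degrees)):]
print(f"top 20% of nodes hold {top20.sum() / degrees.sum():.0%} of the links")
```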
Leo Kadanoff, a physicist at the University of Illinois at Urbana, had a sudden insight: In the vicinity of the critical point we need to stop viewing atoms separately. Rather, they should be considered communities that act in unison. Atoms must be replaced by boxes of atoms such that within each box all atoms behave as one.
#phase transitions
Kenneth Wilson […] submitted simultaneously on June 2, 1971, and published in November of the same year by Physical Review B, turned statistical physics around. They proposed an elegant and all-encompassing theory of phase transitions. Wilson took the scaling ideas developed by Kadanoff and molded them into a powerful theory called renormalization. The starting point of his approach was scale invariance: He assumed that in the vicinity of the critical point the laws of physics applied in an identical manner at all scales, from single atoms to boxes containing millions of identical atoms acting in unison. By giving rigorous mathematical foundation to scale invariance, his theory spat out power laws each time he approached the critical point, the place where disorder makes room for order.
The random model of Erdős and Rényi rests on two simple and often disregarded assumptions. First, we start with an inventory of nodes. Having all the nodes available from the beginning, we assume that the number of nodes is fixed and remains unchanged throughout the network’s life. Second, all nodes are equivalent. Unable to distinguish between the nodes, we link them randomly to each other. These assumptions were unquestioned in over forty years of network research.
Both the Erdős-Rényi and Watts-Strogatz models assumed that we have a fixed number of nodes that are wired together in some clever way. The networks generated by these models are therefore static, meaning that the number of nodes remains unchanged during the network’s life. In contrast, our examples suggested that for real networks the static hypothesis is not appropriate. Instead, we should incorporate growth into our network models.
It demonstrated, however, that growth alone cannot explain the emergence of power laws.
They are hubs. The better known they are, the more links point to them. The more links they attract, the easier it is to find them on the Web and so the more familiar we are with them. […] The bottom line is that when deciding where to link on the Web, we follow preferential attachment: When choosing between two pages, one with twice as many links as the other, about twice as many people link to the more connected page. While our individual choices are highly unpredictable, as a group we follow strict patterns.
The model is very simple, as growth and preferential attachment lead to an algorithm defined by two straightforward rules:
A. Growth: For each given period of time we add a new node to the network. This step underscores the fact that networks are assembled one node at a time.
B. Preferential attachment: We assume that each new node connects to the existing nodes with two links. The probability that it will choose a given node is proportional to the number of links the chosen node has. That is, given the choice between two nodes, one with twice as many links as the other, it is twice as likely that the new node will connect to the more connected node.
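A minimal Python sketch of those two rules (growth plus preferential attachment, with m = 2 links per new node as in the text). The seed graph and the trick of drawing targets from a list of edge endpoints are implementation choices of mine, not details from the book.

```python
import random

def barabasi_albert(n, m=2, seed=0):
    random.seed(seed)
    # Start from a small fully connected seed of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # "targets" lists every edge endpoint, so drawing from it uniformly
    # picks an existing node with probability proportional to its degree.
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])
    return edges

edges = barabasi_albert(1000)
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
print("max degree:", max(degree.values()), " min degree:", min(degree.values()))
```

Running it shows the hallmark of the model: most nodes keep the minimum of two links while a handful of early, lucky nodes accumulate very large degrees.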
The how and why remain for each area of application though.
In Hollywood, 94 percent of links are internal, formed when two established actors work together for the first time.
These shifts in thinking created a set of opposites: static versus growing, random versus scale-free, structure versus evolution.
[…] Does the presence of power laws imply that real networks are the result of a phase transition from disorder to order? The answer we’ve arrived at is simple: Networks are not en route from a random to an ordered state. Neither are they at the edge of randomness and chaos. Rather, the scale-free topology is evidence of organizing principles acting at each stage of the network formation process. There is little mystery here, since growth and preferential attachment can explain the basic features of the networks seen in nature. No matter how large and complex a network becomes, as long as preferential attachment and growth are present it will maintain its hub-dominated scale-free topology.
The introduction of fitness does not eliminate growth and preferential attachment, the two basic mechanisms governing network evolution. It changes, however, what is considered attractive in a competitive environment. In the scale-free model, we assumed that a node’s attractiveness was determined solely by its number of links. In a competitive environment, fitness also plays a role: Nodes with higher fitness are linked to more frequently. A simple way to incorporate fitness into the scale-free model is to assume that preferential attachment is driven by the product of the node’s fitness and the number of links it has. Each new node decides where to link by comparing the fitness-connectivity product of all available nodes and linking with a higher probability to those that have a higher product and therefore are more attractive.
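Here is a sketch of that fitness variant: attachment probability proportional to the product of fitness and degree. The uniform fitness distribution, seed graph, and other details are my own assumptions; the book does not specify them at this point.

```python
import random

def fitness_model(n, m=2, seed=0):
    random.seed(seed)
    fitness = [random.random() for _ in range(n)]   # assumed uniform fitness
    degree = [0] * n
    edges = []
    for i in range(m + 1):                          # small fully connected seed
        for j in range(i):
            edges.append((i, j)); degree[i] += 1; degree[j] += 1
    for new in range(m + 1, n):
        # Attachment weight = fitness x current degree of each existing node.
        weights = [fitness[v] * degree[v] for v in range(new)]
        targets = set()
        while len(targets) < m:
            targets.add(random.choices(range(new), weights=weights)[0])
        for t in targets:
            edges.append((new, t)); degree[new] += 1; degree[t] += 1
    return fitness, degree

fit, deg = fitness_model(2000)
best = max(range(len(deg)), key=lambda v: deg[v])
print(f"biggest hub: node {best}, degree {deg[best]}, fitness {fit[best]:.2f}")
```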
Bianconi’s calculations first confirmed our suspicion that in the presence of fitness the early bird is not necessarily the winner. Rather, fitness is in the driver’s seat, making or breaking the hubs.
But there was indeed a precise mathematical mapping between the fitness model and a Bose gas. According to this mapping, each node in the network corresponds to an energy level in the Bose gas.
…in some networks, the winner can take all. Just as in a Bose-Einstein condensate all particles crowd into the lowest energy level, leaving the rest of the energy levels unpopulated, in some networks the fittest node could theoretically grab all the links, leaving none for the rest of the nodes. The winner takes all.
But even though each system, from the Web to Hollywood, has a unique fitness distribution, Bianconi’s calculation indicated that in terms of topology all networks fall into one of only two possible categories. […] The first category includes all networks in which, despite the fierce competition for links, the scale-free topology survives. These networks display a fit-get-rich behavior, meaning that the fittest node will inevitably grow to become the biggest hub. The winner’s lead is never significant, however. The largest hub is closely followed by a smaller one, which acquires almost as many links as the fittest node. At any moment we have a hierarchy of nodes whose degree distribution follows a power law. In most complex networks, the power laws and the fight for links thus are not antagonistic but can coexist peacefully.
In […] the second category, the winner takes all, meaning that the fittest node grabs all the links, leaving very little for the rest of the nodes. Such networks develop a star topology. […] A winner-takes-all network is not scale-free.
…the western blackout highlighted an often ignored property of complex networks: vulnerability due to interconnectivity
Yet, if the number of removed nodes reaches a critical point, the system abruptly breaks into tiny unconnected islands.
Computer simulations we performed on networks generated by the scale-free model indicated that a significant fraction of nodes can be randomly removed from any scale-free network without its breaking apart.
…percolation theory, the field of physics that developed a set of tools that now are widely used in studies of random networks.
…they set out to calculate the fraction of nodes that must be removed from an arbitrarily chosen network, random or scale-free, to break it into pieces. On one hand, their calculation accounted for the well-known result that random networks fall apart after a critical number of nodes have been removed. On the other hand, they found that for scale-free networks the critical threshold disappears in cases where the degree exponent is smaller than or equal to three.
Disable a few of the hubs and a scale-free network will fall to pieces in no time.
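A small experiment along these lines (my own construction, using networkx): remove 5 percent of the nodes of a scale-free network either at random or in decreasing order of degree, and compare how much of the giant cluster survives. The network parameters and removal fraction are illustrative choices.

```python
import networkx as nx
import random

G = nx.barabasi_albert_graph(5000, 2, seed=7)
n = G.number_of_nodes()
frac = 0.05                                    # remove 5% of the nodes

def giant_share(graph):
    """Size of the largest connected cluster as a fraction of the original network."""
    return len(max(nx.connected_components(graph), key=len)) / n

random.seed(7)
failed = G.copy()                              # random failures
failed.remove_nodes_from(random.sample(list(G.nodes), int(frac * n)))

attacked = G.copy()                            # targeted attack on the hubs
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[: int(frac * n)]
attacked.remove_nodes_from([v for v, _ in hubs])

print(f"random failure: giant cluster keeps {giant_share(failed):.0%} of nodes")
print(f"hub attack    : giant cluster keeps {giant_share(attacked):.0%} of nodes")
```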
If, however, a drug or an illness shuts down the genes encoding the most connected proteins, the cell will not survive.
Obviously, the likelihood that a local failure will handicap the whole system is much higher if we perturb the most-connected nodes. This was supported by the findings of Duncan Watts, from Columbia University, who investigated a model designed to capture the generic features of cascading failures, such as power outages, and the opposite phenomenon, the cascading popularity of books, movies, and albums, which can be described within the same framework.
If a new product passes the crucial test of the innovators, based on their recommendation, the early adopters will pick it up.
What, if any, role is played by the social network in the spread of a virus or an innovation?
In 1954, Elihu Katz, a researcher at the Bureau of Applied Social Research at Columbia University, circulated a proposal to study the effect of social ties on behavior.
When it came to the spread of tetracycline, the doctors named by three or more other doctors as friends were three times more likely to adopt the new drug than those who had not been named by anybody.
Hubs, often referred to in marketing as “opinion leaders,” “power users,” or “influencers,” are individuals who communicate with more people about a certain product than does the average person.
Aiming to explain the disappearance of some fads and viruses and the spread of others, social scientists and epidemiologists developed a very useful tool called the threshold model.
any relation to Granovetter?
…critical threshold, a quantity determined by the properties of the network in which the innovation spreads.
For decades, a simple but powerful paradigm dominated our treatment of diffusion problems. If we wanted to estimate the probability that an innovation would spread, we needed only to know its spreading rate and the critical threshold it faced. Nobody questioned this paradigm. Recently, however, we have learned that some viruses and innovations are oblivious to it.
On the Internet, computers are not connected to each other randomly.
In scale-free networks the epidemic threshold miraculously vanished!
Hubs are among the first infected thanks to their numerous sexual contacts. Once infected, they quickly infect hundreds of others. If our sex web formed a homogeneous, random, network, AIDS might have died out long ago. The scale-free topology at AIDS’s disposal allowed the virus to spread and persist.
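To get a feel for the vanishing epidemic threshold, here is a rough SIS-style simulation of my own (not the Pastor-Satorras–Vespignani calculation) comparing a low transmission rate on a scale-free network with the same rate on a random network of equal average degree. All parameters are illustrative, and results vary from run to run.

```python
import networkx as nx
import random

def sis_prevalence(G, beta=0.08, mu=0.5, steps=200, seed=3):
    """Fraction of nodes still infected after a simple SIS process."""
    random.seed(seed)
    infected = set(random.sample(list(G.nodes), 10))
    for _ in range(steps):
        new_inf, recovered = set(), set()
        for v in infected:
            for nb in G[v]:                       # infect susceptible neighbors
                if nb not in infected and random.random() < beta:
                    new_inf.add(nb)
            if random.random() < mu:              # recover back to susceptible
                recovered.add(v)
        infected = (infected | new_inf) - recovered
    return len(infected) / G.number_of_nodes()

n, m = 5000, 2
sf = nx.barabasi_albert_graph(n, m, seed=1)       # scale-free
er = nx.gnp_random_graph(n, 2 * m / (n - 1), seed=1)   # random, same average degree
print("scale-free prevalence:", sis_prevalence(sf))
print("random     prevalence:", sis_prevalence(er))
```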
As we’ve established, hubs play a key role in these processes. Their unique role suggests a bold but cruel solution: As long as resources are finite we should treat only the hubs. That is, when a treatment exists but there is not enough money to offer it to everybody who needs it, we should primarily give it to the hubs. (Pastor-Satorras and Vespignani; and Zoltan Dezso)
Are we prepared to abandon the less connected patients for the benefit of the population at large?
They [Michalis Faloutsos, Petros Faloutsos, and Christos Faloutsos] found that the connectivity distribution of the Internet routers follows a power law. In their seminal paper “On Power-Law Relationships of the Internet Topology” they showed that the Internet […] is a scale-free network.
Routers offering more bandwidth likely have more links as well. […] This simple effect is a possible source of preferential attachment. We do not know for sure whether it is the only one, but preferential attachment is unquestionably present on the Internet.
After many discussions and tutorials on how computers communicate, a simple but controversial idea emerged: parasitic computing.
Starting from any page (on the Internet), we can reach only about 24 percent of all documents.
If you want to go from A to D, you can start from node A, then go to node B, which has a link to node C, which points to D. But you can’t make a round-trip.
Not necessarily the case with bidirectional webmentions.
[Cass] Sunstein fears that by limiting access to conflicting viewpoints, the emerging online universe encourages segregation and social fragmentation. Indeed, the mechanisms behind social and political isolation on the Web are self-reinforcing.
Looks like we’ve known this for a very long time! Sadly it’s coming to a head in the political space of 2016 onward.
Communities are essential components of human social history. Granovetter’s circles of friends, the elementary building blocks of communities, pointed to this fact. […]
early indications that Facebook could be a thing…
One reason is that there are no sharp boundaries between various communities. Indeed, the same Website can belong simultaneously to different groups. For example, a physicist’s Webpage might mix links to physics, music, and mountain climbing, combining professional interests with hobbies. In which community should we place such a page? The size of communities also varies a lot. For example, while the community interested in “cryptography” is small and relatively easy to locate, the one consisting of devotees of “English literature” is much harder to identify and fragmented into many subcommunities ranging from Shakespeare enthusiasts to Kurt Vonnegut fans.
Searching for this type of community is an NP-complete problem. This section may be of interest to Brad Enslen and Kicks Condor. Cross reference research suggested by Gary Flake, Steve Lawrence, and Lee Giles from NEC.
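Because the exact partitioning problem is intractable, practical community finding leans on heuristics. A tiny sketch of my own (unrelated to the Flake–Lawrence–Giles work cited above) using networkx's greedy modularity maximization on a toy planted-partition graph:

```python
import networkx as nx
from networkx.algorithms import community

# Two dense clusters with only sparse links between them.
G = nx.planted_partition_graph(2, 20, p_in=0.6, p_out=0.02, seed=5)
groups = community.greedy_modularity_communities(G)
for i, g in enumerate(groups):
    print(f"community {i}: {len(g)} nodes")
```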
Such differences in the structure of competing communities have important consequences for their ability to market and organize themselves for a common cause.
He continues to talk about how the pro-life movement is better connected and therefore better equipped to fight against the pro-choice movement.
Code–or software–is the bricks and mortar of cyberspace. The architecture is what we build, using the code as building blocks. The great architects of human history, from Michelangelo to Frank Lloyd Wright, demonstrated that, whereas raw materials are limited, the architectural possibilities are not. Code can curtail behavior, and it does influence architecture. It does not uniquely determine it, however.
Yes, we do have free speech on the Web. Chances are, however, that our voices are too weak to be heard. Pages with only a few incoming links are impossible to find by casual browsing. Instead, over and over we are steered toward the hubs. It is tempting to believe that robots can avoid this popularity-driven trap.
Facebook and Twitter applications? Algorithms help to amplify “unheard” voices to some extent, but gamifying the reading can also get people to read more (crap) than they were reading before because it’s so easy.
Your ability to find my Webpage is determined by one factor only: its position on the Web.
Facebook takes advantage of this with their algorithm
Thus the Web’s large-scale topology–that is, its true architecture–enforces more severe limitations on our behavior and visibility on the Web than government or industry could ever achieve by tinkering with the code. Regulations come and go, but the topology and the fundamental natural laws governing it are time invariant. As long as we continue to delegate to the individual the choice of where to link, we will not be able to significantly alter the Web’s large-scale topology, and we will have to live with the consequences.
hmmm?
After selling Alexa to Amazon.com in 1999
Brewster Kahle’s Alexa Internet company is apparently the root of the Amazon Alexa?
To return to our car analogy, it is…
Where before? I don’t recall this at all. Did it get removed from the text?
ref somewhere about here… personalized medicine
After researching the available databases, we settled on a new one, run by the Argonne National Laboratory outside Chicago, nicknamed “What Is There?” which compiled the metabolic network of forty-three diverse organisms.
…for the vast majority of organisms the ten most-connected molecules are the same. Adenosine triphosphate (ATP) is almost always the biggest hub, followed closely by adenosine diphosphate (ADP) and water.
A key prediction of the scale-free model is that nodes with a large number of links are those that have been added early to the network. In terms of metabolism this would imply that the most connected molecules should be the oldest ones within the cell. […] Therefore, the first mover advantage seems to pervade the emergence of life as well.
Comparing the metabolic network of all forty-three organisms, we found that only 4 percent of the molecules appear in all of them.
Developed by Stanley Fields in 1989, the two-hybrid method offers a relatively rapid semiautomated technique for detecting protein-protein interactions.
They [the results of work by Oltvai, Jeong, Barabasi, Mason (2000)] demonstrated that the protein interaction network has a scale-free topology.
…the cell’s scale-free topology is a result of a common mistake cells make while reproducing.
In short, it is now clear that the number of genes is not proportional to our perceived complexity.
We have learned that a sparse network of a few powerful directors controls all major appointments in Fortune 1000 companies; […]
Regardless of industry and scope, the network behind all twentieth century corporations has the same structure: It is a tree, where the CEO occupies the root and the bifurcating branches represent the increasingly specialized and nonoverlapping tasks of lower-level managers and workers. Responsibility decays as you move down the branches, ending with the drone executors of orders conceived at the roots.
Only for completely top-down, but what about bottom-up or middle-out?
We have gotten to the point that we can produce anything that we can dream of. The expensive question now is, what should that be?
It is a fundamental rethinking of how to respond to the new business environment in the postindustrial era, dubbed the information economy.
This is likely late, but certainly an early instance of “information economy” in popular literature.
Therefore, companies aiming to compete in a fast-moving marketplace are shifting from a static and optimized tree into a dynamic and evolving web, offering a more malleable, flexible command structure.
While 79 percent of directors serve on only one board, 14 percent serve on two, and about 7 percent serve on three or more.
Indeed, the number of companies that entered in partnership with exactly k other institutions, representing the number of links they have within the network, followed a power law, the signature of a scale-free topology.
Makes me wonder if the 2008 economic collapse could have been predicted by “weak” links?
As research, innovation, product development, and marketing become more and more specialized and divorced from each other, we are converging to a network economy in which strategic alliances and partnerships are the means for survival in all industries.
This is troubling in the current political climate where there is little if any trust or truth being spread around by the leader of the Republican party.
As Walter W. Powell writes in Neither Market nor Hierarchy: Network Forms of Organization, “in markets the standard strategy is to drive the hardest possible bargain on the immediate exchange. In networks, the preferred option is often creating indebtedness and reliance over the long haul.” Therefore, in a network economy, buyers and suppliers are not competitors but partners. The relationship between them is often very long lasting and stable.
Trump vs. Trump
The stability of these links allows companies to concentrate on their core business. If these partnerships break down, the effects can be severe. Most of the time failures handicap only the partners of the broken link. Occasionally, however, they send ripples through the whole economy. As we will see next, macroeconomic failures can throw entire nations into deep financial disarray, while failures in corporate partnerships can severely damage the jewels of the new economy.
In some sense this predicts the effects of the 2008 downturn.
outsourcing
early use of the word?
A me attitude, where the company’s immediate financial balance is the only factor, limits network thinking. Not understanding how the actions of one node affect other nodes easily cripples whole segments of the network.
Hierarchical thinking does not fit a network economy.
We must help eliminate the need and desire of the nodes to form links to terrorist organizations by offering them a chance to belong to more constructive and meaningful webs.
And for poverty and gangs as well as immigration.
Their work has a powerful philosophy: “revelation through concealment.” By hiding the details they allow us to focus entirely on the form. The wrapping sharpens our vision, making us more aware and observant, turning ordinary objects into monumental sculptures and architectural pieces.
not too dissimilar to the font I saw today for memory improvement
🎧 Episode 077 Exploring Artificial Intelligence with Melanie Mitchell | HumanCurrent
What is artificial intelligence? Could unintended consequences arise from increased use of this technology? How will the role of humans change with AI? How will AI evolve in the next 10 years?
In this episode, Haley interviews leading Complex Systems Scientist, Professor of Computer Science at Portland State University, and external professor at the Santa Fe Institute, Melanie Mitchell. Professor Mitchell answers many profound questions about the field of artificial intelligence and gives specific examples of how this technology is being used today. She also provides some insights to help us navigate our relationship with AI as it becomes more popular in the coming years.
👓 Algorithmic Information Dynamics: A Computational Approach to Causality and Living Systems From Networks to Cells | Complexity Explorer | Santa Fe Institute
About the Course:
Probability and statistics have long helped scientists make sense of data about the natural world — to find meaningful signals in the noise. But classical statistics prove a little threadbare in today’s landscape of large datasets, which are driving new insights in disciplines ranging from biology to ecology to economics. It's as true in biology, with the advent of genome sequencing, as it is in astronomy, with telescope surveys charting the entire sky.
The data have changed. Maybe it's time our data analysis tools did, too.
During this three-month online course, starting June 11th, instructors Hector Zenil and Narsis Kiani will introduce students to concepts from the exciting new field of Algorithmic Information Dynamics to search for solutions to fundamental questions about causality — that is, why a particular set of circumstances leads to a particular outcome. Algorithmic Information Dynamics (or Algorithmic Dynamics in short) is a new type of discrete calculus based on computer programming to study causation by generating mechanistic models that help find first principles of physical phenomena, building up the next generation of machine learning.
The course covers key aspects from graph theory and network science, information theory, dynamical systems and algorithmic complexity. It will venture into ongoing research in fundamental science and its applications to behavioral, evolutionary and molecular biology.
Prerequisites:
Students should have basic knowledge of college-level math or physics, though optional sessions will help students with more technical concepts. Basic computer programming skills are also desirable, though not required. The course does not require students to adopt any particular programming language, though the Wolfram Language will be mostly used, and the instructors will share a lot of code written in this language that students will be able to use, study, and exploit for their own purposes.
Course Outline:
- The course will begin with a conceptual overview of the field.
- Then it will review foundational theories like basic concepts of statistics and probability, notions of computability and algorithmic complexity, and brief introductions to graph theory and dynamical systems.
- Finally, the course explores new measures and tools related to reprogramming artificial and biological systems. It will showcase the tools and framework in applications to systems biology, genetic networks and cognition by way of behavioral sequences.
- Students will be able to apply the tools to their own data and problems. The instructors will explain in detail how to do this, and will provide all the tools and code to do so.
The course runs 11 June through 03 September 2018.
Tuition is $50, which is required to access the course materials during the course and to receive a certificate at the end. The course is free to watch, but if no fee is paid the materials will not be available until the course closes. Donations are highly encouraged and appreciated in support of SFI's ComplexityExplorer to continue offering new courses.
In addition to all course materials tuition includes:
- Six-month access to the Wolfram|One platform (potentially renewable for another six months) worth 150 to 300 USD.
- Free digital copy of the course textbook to be published by Cambridge University Press.
- Several gifts will be given away to the top students finishing the course; check the FAQ page for more details.
Students with the best final projects will be invited to expand their results and submit them to the journal Complex Systems, the first journal in the field, founded by Stephen Wolfram in 1987.
About the Instructor(s):
Hector Zenil has a PhD in Computer Science from the University of Lille 1 and a PhD in Philosophy and Epistemology from the Pantheon-Sorbonne University of Paris. He co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. He is also the head of the Algorithmic Nature Group at LABoRES, the Paris-based lab that started the Online Algorithmic Complexity Calculator and the Human Randomness Perception and Generation Project. Previously, he was a Research Associate at the Behavioural and Evolutionary Theory Lab at the Department of Computer Science at the University of Sheffield in the UK before joining the Department of Computer Science, University of Oxford as a faculty member and senior researcher.
Narsis Kiani has a PhD in Mathematics and has been a postdoctoral researcher at Dresden University of Technology and at the University of Heidelberg in Germany. She has been a VINNOVA Marie Curie Fellow and Assistant Professor in Sweden. She co-leads the Algorithmic Dynamics Lab at the Science for Life Laboratory (SciLifeLab), Unit of Computational Medicine, Center for Molecular Medicine at the Karolinska Institute in Stockholm, Sweden. Narsis is also a member of the Algorithmic Nature Group, LABoRES.
Hector and Narsis are the leaders of the Algorithmic Dynamics Lab at the Unit of Computational Medicine at Karolinska Institute.
TA:
Alyssa Adams has a PhD in Physics from Arizona State University and studies what makes living systems different from non-living ones. She currently works at Veda Data Solutions as a data scientist and researcher in social complex systems that are represented by large datasets. She completed an internship at Microsoft Research, Cambridge, UK studying machine learning agents in Minecraft, which is an excellent arena for simple and advanced tasks related to living and social activity. Alyssa is also a member of the Algorithmic Nature Group, LABoRES.
The development of the course and material offered has been supported by:
- The Foundational Questions Institute (FQXi)
- Wolfram Research
- John Templeton Foundation
- Santa Fe Institute
- Swedish Research Council (Vetenskapsrådet)
- Algorithmic Nature Group, LABoRES for the Natural and Digital Sciences
- Living Systems Lab, King Abdullah University of Science and Technology.
- Department of Computer Science, Oxford University
- Cambridge University Press
- London Mathematical Society
- Springer Verlag
- ItBit for the Natural and Computational Sciences and, of course,
- the Algorithmic Dynamics lab, Unit of Computational Medicine, SciLifeLab, Center for Molecular Medicine, The Karolinska Institute
Course dates: 11 Jun 2018 9pm PDT to 03 Sep 2018 10pm PDT
Syllabus
- A Computational Approach to Causality
- A Brief Introduction to Graph Theory and Biological Networks
- Elements of Information Theory and Computability
- Randomness and Algorithmic Complexity
- Dynamical Systems as Models of the World
- Practice, Technical Skills and Selected Topics
- Algorithmic Information Dynamics and Reprogrammability
- Applications to Behavioural, Evolutionary and Molecular Biology
🔖 Fundamentals of NetLogo | Complexity Explorer
About the Tutorial: This tutorial will present you with the basics of how to use NetLogo to create agent-based models. During the tutorial, we will briefly discuss what agent-based modeling is, and then dive into hands-on work using the NetLogo programming language, which is developed and supported at Northwestern University by Uri Wilensky. No programming background or knowledge is required, and the methods examined will be useable in any number of different fields.
About the Instructor(s): Bill Rand is an assistant professor of Business Management at the Poole College of Management at North Carolina State University and a computer scientist by training. He has co-authored a textbook on agent-based modeling with Uri Wilensky, the author of the NetLogo programming language. He is also the author of over 50 scholarly papers, many of which use agent-based modeling as their core methodology. He received his doctorate in computer science in 2005 from the University of Michigan, and was also awarded a postdoctoral fellowship to Northwestern University, where he worked directly with Uri Wilensky as part of the NetLogo development team.
Syllabus
- Introduction to ABM
- Tabs, Turtles, Patches, and Links
- Code, Control, and Collections
- Putting It All Together
- Conclusion
WE’RE LAUNCHING A NEW TUTORIAL!
Fundamentals of NetLogo, a primer on the most used agent-based modeling software, will be available tomorrow.
Stay tuned for our launch announcement, and check out all our tutorials at https://t.co/APIkME07y5 pic.twitter.com/M8qIJp1R6x
— ComplexityExplorer (@ComplexExplorer) April 2, 2018
SFI and ASU to offer online M.S. in Complexity | Complexity Explorer
SFI and Arizona State University soon will offer the world’s first comprehensive online master’s degree in complexity science. It will be the Institute’s first graduate degree program, a vision that dates to SFI’s founding. “With technology, a growing recognition of the value of online education, widespread acceptance of complexity science, and in partnership with ASU, we are now able to offer the world a degree in the field we helped invent,” says SFI President David Krakauer, “and it will be taught by the very people who built it into a legitimate domain of scholarship.”
Updated: Santa Fe feeling the effects of Trump’s policies
Reports reveal that some invitees will now not travel to events in U.S.
Kenneth Arrow, Nobel-Winning Economist Whose Influence Spanned Decades, Dies at 95 | The New York Times
Professor Arrow, one of the most brilliant minds in his field during the 20th century, became the youngest economist ever to earn a Nobel at the age of 51.
🔖 How Life (and Death) Spring From Disorder | Quanta Magazine
Life was long thought to obey its own set of rules. But as simple systems show signs of lifelike behavior, scientists are arguing about whether this apparent complexity is all a consequence of thermodynamics.
While Ball has a broad range of interests and coverage in his work, he’s certainly one of the best journalists working in this subarea today. I highly recommend his work to those who find this area interesting.
Statistical Physics, Information Processing, and Biology Workshop at Santa Fe Institute
The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop.
Statistical Physics, Information Processing, and Biology
Workshop
November 16, 2016 – November 18, 2016
9:00 AM
Noyce Conference Room
Abstract:
This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However, to manipulate information in this way they require a steady flux of free energy from their environments. These two inter-related attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems, by gaining deeper insight into the relation between the need for biological systems to process information and the free energy they need to pay for that processing.
The goal of this workshop is to address these issues by focusing on a set of three specific questions:
- How has the fraction of free energy flux on earth that is used by biological computation changed with time?
- What is the free energy cost of biological computation / function?
- What is the free energy cost of the evolution of biological computation / function?
In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
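For a sense of scale on the free-energy-cost question (my own addition, not part of the workshop abstract): the canonical physical limit here is Landauer's bound of kT ln 2 of free energy per bit erased, which at roughly body temperature works out to a few zeptojoules.

```python
from math import log

k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 310.0                 # roughly body temperature, K
landauer_joules_per_bit = k_B * T * log(2)
print(f"Landauer bound at 310 K: {landauer_joules_per_bit:.2e} J per bit erased")
# ~3e-21 J/bit; real biological computation operates orders of magnitude above this limit.
```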
Purpose: Research Collaboration
SFI Host: David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert
Introduction to Information Theory | SFI’s Complexity Explorer
Introduction to Information Theory
About the Tutorial:
This tutorial introduces fundamental concepts in information theory. Information theory has made considerable impact in complex systems, and has in part co-evolved with complexity science. Research areas ranging from ecology and biology to aerospace and information technology have all seen benefits from the growth of information theory.
In this tutorial, students will follow the development of information theory from bits to modern application in computing and communication. Along the way Seth Lloyd introduces valuable topics in information theory such as mutual information, Boolean logic, channel capacity, and the natural relationship between information and entropy.
Lloyd coherently covers a substantial amount of material while limiting discussion of the mathematics involved. When formulas or derivations are considered, Lloyd describes the mathematics such that less advanced math students will find the tutorial accessible. Prerequisites for this tutorial are an understanding of logarithms, and at least a year of high-school algebra.
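As a tiny preview of two of those quantities, here is a sketch (my own, not from the tutorial) computing Shannon entropy and mutual information for a made-up joint distribution of two correlated binary variables.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Toy joint distribution P(X, Y) for two correlated coin flips.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
px, py = joint.sum(axis=1), joint.sum(axis=0)

H_X, H_Y, H_XY = entropy(px), entropy(py), entropy(joint.ravel())
mutual_info = H_X + H_Y - H_XY      # I(X;Y) = H(X) + H(Y) - H(X,Y)

print(f"H(X) = {H_X:.3f} bits, H(Y) = {H_Y:.3f} bits, I(X;Y) = {mutual_info:.3f} bits")
```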
About the Instructor(s):
Professor Seth Lloyd is a principal investigator in the Research Laboratory of Electronics (RLE) at the Massachusetts Institute of Technology (MIT). He received his A.B. from Harvard College in 1982, the Certificate of Advanced Study in Mathematics (Part III) and an M. Phil. in Philosophy of Science from Cambridge University in 1983 and 1984 under a Marshall Fellowship, and a Ph.D. in Physics in 1988 from Rockefeller University under the supervision of Professor Heinz Pagels.
From 1988 to 1991, Professor Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Professor Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Since 1988, Professor Lloyd has also been an adjunct faculty member at the Santa Fe Institute.
Professor Lloyd has performed seminal work in the fields of quantum computation and quantum communications, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon’s noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.
Professor Lloyd is a member of the American Physical Society and the American Society of Mechanical Engineers.
Tutorial Team:
Yoav Kallus is an Omidyar Fellow at the Santa Fe Institute. His research at the boundary of statistical physics and geometry looks at how and when simple interactions lead to the formation of complex order in materials and when preferred local order leads to system-wide disorder. Yoav holds a B.Sc. in physics from Rice University and a Ph.D. in physics from Cornell University. Before joining the Santa Fe Institute, Yoav was a postdoctoral fellow at the Princeton Center for Theoretical Science in Princeton University.
Prerequisites: At least one year of high-school algebra
Syllabus
- Introduction
- Forms of Information
- Information and Probability
- Fundamental Formula of Information
- Computation and Logic: Information Processing
- Mutual Information
- Communication Capacity
- Shannon’s Coding Theorem
- The Manifold Things Information Measures
- Homework