📖 Read pages 191 – 215 of At Home in the Universe by Stuart Kauffman
In Chapter 9 Kauffman applies his NK landscape model to the evolution seen in the Cambrian explosion and the repopulation following the Permian extinction. He then follows it up with some interesting discussion applying the model to technological innovation, learning curves, and economic growth. The chapter has given me a few thoughts on the shape and structure (or “landscape”) of mathematics. I’ll come back to this section to see if I can’t extend the analogy to come up with something unique in math.
At the beginning of Chapter 10 he discusses power laws and the concept of emergence in ecosystems, coevolution, and the evolution of coevolution. At one point he evokes Adam Smith’s invisible hand, which seemingly benefits everyone even as each actor pursues its own selfish interests. Though this has largely seemed to hold since Smith wrote, I do wonder what timescales and conditions it works under. As an example, selfishness at the individual, corporate, national, and other higher levels may not necessarily be so positive with respect to potential issues like climate change, which may drastically affect the landscape on and in which we live.
This book originated from a series of papers which were published in "Die Naturwissenschaften" in 1977/78. Its division into three parts is the reflection of a logical structure, which may be abstracted in the form of three theses:
A. Hypercycles are a principle of natural self-organization allowing an integration and coherent evolution of a set of functionally coupled self-replicative entities.
B. Hypercycles are a novel class of nonlinear reaction networks with unique properties, amenable to a unified mathematical treatment.
C. Hypercycles are able to originate in the mutant distribution of a single Darwinian quasi-species through stabilization of its diverging mutant genes. Once nucleated, hypercycles evolve to higher complexity by a process analogous to gene duplication and specialization.

In order to outline the meaning of the first statement we may refer to another principle of material self-organization, namely to Darwin's principle of natural selection. This principle, as we see it today, represents the only understood means for creating information, be it the blueprint for a complex living organism which evolved from less complex ancestral forms, or be it a meaningful sequence of letters the selection of which can be simulated by evolutionary model games.
A mathematical model could lead to a new approach to the study of what is possible, and how it follows from what already exists.
Innovation is one of the driving forces in our world. The constant creation of new ideas and their transformation into technologies and products forms a powerful cornerstone for 21st century society. Indeed, many universities and institutes, along with regions such as Silicon Valley, cultivate this process.
And yet the process of innovation is something of a mystery. A wide range of researchers have studied it, ranging from economists and anthropologists to evolutionary biologists and engineers. Their goal is to understand how innovation happens and the factors that drive it so that they can optimize conditions for future innovation.
This approach has had limited success, however. The rate at which innovations appear and disappear has been carefully measured. It follows a set of well-characterized patterns that scientists observe in many different circumstances. And yet, nobody has been able to explain how this pattern arises or why it governs innovation.
Today, all that changes thanks to the work of Vittorio Loreto at Sapienza University of Rome in Italy and a few pals, who have created the first mathematical model that accurately reproduces the patterns that innovations follow. The work opens the way to a new approach to the study of innovation, of what is possible and how this follows from what already exists.
The notion that innovation arises from the interplay between the actual and the possible was first formalized by the complexity theorist Stuart Kauffman. In 2002, Kauffman introduced the idea of the “adjacent possible” as a way of thinking about biological evolution. I know he discusses some of this in At Home in the Universe.
The adjacent possible is all those things—ideas, words, songs, molecules, genomes, technologies and so on—that are one step away from what actually exists. It connects the actual realization of a particular phenomenon and the space of unexplored possibilities.
But this idea is hard to model for an important reason. The space of unexplored possibilities includes all kinds of things that are easily imagined and expected but it also includes things that are entirely unexpected and hard to imagine. And while the former is tricky to model, the latter has appeared close to impossible.
What’s more, each innovation changes the landscape of future possibilities. So at every instant, the space of unexplored possibilities—the adjacent possible—is changing.
“Though the creative power of the adjacent possible is widely appreciated at an anecdotal level, its importance in the scientific literature is, in our opinion, underestimated,” say Loreto and co.
Nevertheless, even with all this complexity, innovation seems to follow predictable and easily measured patterns that have become known as “laws” because of their ubiquity. One of these is Heaps’ law, which states that the number of new things increases at a rate that is sublinear. In other words, it is governed by a power law of the form V(n) = k·n^β, where β lies between 0 and 1.
Words are often thought of as a kind of innovation, and language is constantly evolving as new words appear and old words die out.
This evolution follows Heaps’ law. Given a corpus of words of size n, the number of distinct words V(n) is proportional to n raised to the β power. In collections of real words, β turns out to be between 0.4 and 0.6.
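As a quick sketch (my own illustration, not from the paper), the Heaps exponent β can be estimated from any token stream by tracking vocabulary growth V(n) and fitting a straight line in log-log space. The whitespace tokenization and simple least-squares fit are deliberate simplifications:

```python
import math

def vocab_growth(words):
    """Return V(n): the number of distinct words after the first n tokens."""
    seen, growth = set(), []
    for w in words:
        seen.add(w)
        growth.append(len(seen))
    return growth

def heaps_exponent(growth):
    """Estimate beta via least squares on log V(n) = log k + beta * log n."""
    xs = [math.log(n) for n in range(1, len(growth) + 1)]
    ys = [math.log(v) for v in growth]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

Run on a real corpus of reasonable size, this should recover a β in the 0.4 to 0.6 range quoted above; on a degenerate corpus where every token is new, it returns β = 1 exactly.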
Another well-known statistical pattern in innovation is Zipf’s law, which describes how the frequency of an innovation is related to its popularity. For example, in a corpus of words, the most frequent word occurs about twice as often as the second most frequent word, three times as frequently as the third most frequent word, and so on. In English, the most frequent word is “the” which accounts for about 7 percent of all words, followed by “of” which accounts for about 3.5 percent of all words, followed by “and,” and so on.
This frequency distribution is Zipf’s law and it crops up in a wide range of circumstances, such as the way edits appear on Wikipedia, how we listen to new songs online, and so on.
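A hedged sketch of how one might check Zipf’s law on a corpus: count word frequencies and pair each with its rank. Under an ideal Zipf distribution the product of rank and frequency is roughly constant, which matches the “twice as often, three times as often” pattern described above. The split-on-whitespace tokenizer is an assumption for illustration:

```python
from collections import Counter

def zipf_ranks(tokens):
    """Return (rank, frequency) pairs, most frequent word first."""
    counts = Counter(tokens)
    freqs = sorted(counts.values(), reverse=True)
    return list(enumerate(freqs, start=1))

def zipf_products(tokens):
    """rank * frequency for each rank; near-constant under ideal Zipf."""
    return [r * f for r, f in zipf_ranks(tokens)]
```

For English text, plotting these (rank, frequency) pairs on log-log axes should yield the familiar near-straight line with slope close to -1.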
These patterns are empirical laws—we know of them because we can measure them. But just why the patterns take this form is unclear. And while mathematicians can model innovation by simply plugging the observed numbers into equations, they would much rather have a model which produces these numbers from first principles.
Enter Loreto and his pals (one of whom is the Cornell University mathematician Steve Strogatz). These guys create a model that explains these patterns for the first time.
They begin with a well-known mathematical sand box called Polya’s Urn. It starts with an urn filled with balls of different colors. A ball is withdrawn at random, inspected and placed back in the urn with a number of other balls of the same color, thereby increasing the likelihood that this color will be selected in future.
This is a model that mathematicians use to explore rich-get-richer effects and the emergence of power laws. So it is a good starting point for a model of innovation. However, it does not naturally produce the sublinear growth that Heaps’ law predicts.
That’s because the Polya urn model allows for all the expected consequences of innovation (of discovering a certain color) but does not account for all the unexpected consequences of how an innovation influences the adjacent possible.
The upshot of the whole thing:
So Loreto, Strogatz, and co have modified Polya’s urn model to account for the possibility that discovering a new color in the urn can trigger entirely unexpected consequences. They call this model “Polya’s urn with innovation triggering.”
The exercise starts with an urn filled with colored balls. A ball is withdrawn at random, examined, and replaced in the urn.
If this color has been seen before, a number of other balls of the same color are also placed in the urn. But if the color is new—it has never been seen before in this exercise—then a number of balls of entirely new colors are added to the urn.
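Here is a minimal simulation of that scheme, based on my reading of the description above. The parameter names (`rho` for the reinforcement copies, `nu + 1` for the number of brand-new colors triggered by a novelty) follow the usual presentation of the model, and the single-color initial condition is my own assumption:

```python
import random

def urn_with_triggering(steps, rho=4, nu=4, seed=0):
    """Simulate Polya's urn with innovation triggering.

    Each draw is replaced along with rho copies of the same color
    (rich-get-richer reinforcement). If the drawn color has never been
    seen before, nu + 1 balls of entirely new colors are also added:
    the novelty expands the adjacent possible.

    Returns the history of D(n), the number of distinct colors drawn
    after each of the n draws.
    """
    rng = random.Random(seed)
    urn = [0]            # colors are labeled by integers; start with one
    next_color = 1
    seen, history = set(), []
    for _ in range(steps):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)                 # reinforcement
        if ball not in seen:                     # novelty triggers innovation
            seen.add(ball)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        history.append(len(seen))
    return history
```

Plotting the returned history against the number of draws on log-log axes is one way to see the Heaps-style sublinear growth the article describes; the exponent depends on the ratio of `nu` to `rho`.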
Loreto and co then calculate how the number of new colors picked from the urn, and their frequency distribution, changes over time. The result is that the model reproduces Heaps’ and Zipf’s Laws as they appear in the real world—a mathematical first. “The model of Polya’s urn with innovation triggering, presents for the first time a satisfactory first-principle based way of reproducing empirical observations,” say Loreto and co.
The team has also shown that its model predicts how innovations appear in the real world. The model accurately predicts how edit events occur on Wikipedia pages, the emergence of tags in social annotation systems, the sequence of words in texts, and how humans discover new songs in online music catalogues.
Interestingly, these systems involve two different forms of discovery. On the one hand, there are things that already exist but are new to the individual who finds them, such as online songs; and on the other are things that never existed before and are entirely new to the world, such as edits on Wikipedia.
Loreto and co call the former novelties—they are new to an individual—and the latter innovations—they are new to the world.
Curiously, the same model accounts for both phenomena. It seems that the pattern behind the way we discover novelties—new songs, books, etc.—is the same as the pattern behind the way innovations emerge from the adjacent possible.
That raises some interesting questions, not least of which is why this should be. But it also opens an entirely new way to think about innovation and the triggering events that lead to new things. “These results provide a starting point for a deeper understanding of the adjacent possible and the different nature of triggering events that are likely to be important in the investigation of biological, linguistic, cultural, and technological evolution,” say Loreto and co.
We’ll look forward to seeing how the study of innovation evolves into the adjacent possible as a result of this work.
How Donald Trump is leveraging an old Vaudeville trick to heavily contest the presidential election
A Problem with Transcripts
In the past few weeks, I’ve seen dozens of news outlets publish multi-paragraph excerpts of speeches from Donald Trump and have been appalled that I was unable to read them in any coherent way. I could not honestly follow or discern any coherent thought or argument in the majority of them. I was a bit shocked because in listening to him, he often sounds like he has some kind of point, though he seems to be spouting variations on one of ten one-liners he’s been using for over a year now. There’s apparently a flaw in our primal reptilian brains that seems to be tricking us into thinking that there’s some sort of substance in his speech when there honestly is none. I’m going to have to spend some time reading more on linguistics and cognitive neuroscience. Maybe Steven Pinker knows of an answer?
The situation got worse this week as I turned to news sources for fact-checking of the recent presidential debate. While it’s nice to have web-based annotation tools like Genius and Hypothes.is to mark up these debates, it becomes another thing altogether to understand the meaning of what’s being said in order to actually attempt to annotate it. I’ve included some links so that readers can attempt the exercise for themselves.
Recent transcripts (some with highlights/annotations):
It’s been a while since Americans were broadly exposed to actual doubletalk. For the most part our national experience with it has been a passing curiosity highlighted by comedians.
n. (NORTH AMERICAN)
a deliberately unintelligible form of speech in which inappropriate, invented or nonsense syllables are combined with actual words. This type of speech is commonly used to give the appearance of knowledge and thereby confuse, amuse, or entertain the speaker’s audience.
another term for doublespeak
see also n. doubletalk 
Since the days of vaudeville (and likely before), comedians have used doubletalk to great effect on stage, in film, and on television. Some comedians who have historically used the technique as part of their acts include Al Kelly, Cliff Nazarro, Danny Kaye, Gary Owens, Irwin Corey, Jackie Gleason, Sid Caesar, Stanley Unwin, and Reggie Watts. I’m including some short video clips below as examples.
A well-known, but foreshortened, form of it was used by Dana Carvey in his Saturday Night Live performances caricaturing George H.W. Bush by using a few standard catchphrases with pablum in between: “Not gonna do it…”, “Wouldn’t be prudent at this juncture”, and “Thousand Points of Light…”. These snippets in combination with some creative hand gestures (pointing, lacing fingers together), along with a voice melding of Mr. Rogers and John Wayne, were the simple constructs that largely transformed a diminutive comedian convincingly into a president.
Doubletalk also has a more “educated” sibling known as technobabble. Engineers are sure to recall a famous (and still very humorous) example of both doubletalk and technobabble in the famed description of the Turboencabulator. (See also, the short videos below.)
Doubletalk comedy examples
Al Kelly on Ernie Kovacs
Rockwell Turbo Encabulator Version 2
And of course doubletalk and technobabble have closely related cousins named doublespeak and politicobabble. These are far more dangerous than the others because they cross the line from comedy into seriousness and are used by people who make decisions affecting hundreds of thousands to millions, if not billions, of people on the planet. I’m sure an archeo-linguist might be able to discern where exactly politicobabble emerged and managed to evolve into a non-comedic form of speech which people manage to take far more seriously than its close ancestors. One surely suspects some heavy influence from George Orwell’s corpus of work:
While politicobabble is nothing new, I did find a very elucidating passage from the 1992 U.S. Presidential Election cycle which seems to be a major part of the Trump campaign playbook:
In the continuation of the article, Jacobs goes on to give a variety of examples of the term as well as a “translation” guide for some of the common politicobabble words from that particular election. I’ll leave it to the capable hands of others (perhaps in the comments, below?) to come up with the translation guide for our current political climate.
The interesting evolutionary change I’ll note for the current election cycle is that Trump hasn’t delved into any depth on any of his themes to offend anyone significantly enough. This has allowed him to stay with the dozen or so themes he started out using and therefore hasn’t needed to change them as in campaigns of old.
Filling in the Blanks
These forms of pseudo-speech are all meant to fool us into thinking that something of substance is being discussed and that a conversation is happening, when in fact, nothing is really being communicated at all. Most of the intended meaning and reaction to such speech seems to stem from the demeanor of the speaker as well as, in some part, the reaction of the surrounding interlocutors and audience. In reading Donald Trump transcripts, an entirely different meaning (or lack thereof) is more quickly realized, as the surrounding elements which prop up the narrative have been completely stripped away. In a transcript version, gone is the hypnotizing element of the crowd which is vehemently sure that the emperor is truly wearing clothes.
In many of these transcripts, in fact, I find so little is being said that the listener is actually being forced to piece together the larger story in their head. Being forced to fill in the blanks in this way leaves too much of the communication up to the listener who isn’t necessarily engaged at a high level. Without more detail or context to understand what is being communicated, the listener is far more likely to fill in the blanks to fit a story that doesn’t create any cognitive dissonance for themselves — in part because Trump is usually smiling and welcoming towards his adoring audiences.
One will surely recall that Trump even wanted Secretary Clinton to be happy during the debate when he said, “Now, in all fairness to Secretary Clinton — yes, is that OK? Good. I want you to be very happy. It’s very important to me.” (This question also doubles as an example of a standard psychological sales tactic of attempting to get the purchaser to start by saying ‘yes’ as a means to keep them saying yes while moving them towards making a purchase.)
His method of communicating by leaving large holes in his meaning reminds me of the way our brain smooths out information as indicated in this old internet meme:
I cdn’uolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg: the phaonmneel pweor of the hmuan mnid. Aoccdrnig to a rseearch taem at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. Scuh a cdonition is arpppoiatrely cllaed typoglycemia.
I’m also reminded of the biases and heuristics research carried out in part by (and the remainder cited by) Daniel Kahneman in his book Thinking, Fast and Slow, in which he discusses the mechanics of how system 1 and system 2 work in our brains. Is Trump taking advantage of the deficits of language processing in our brains in something akin to system 1 biases to win large blocks of votes? Is he creating a virtual real-time Choose-Your-Own-Adventure to subvert the laziness of the electorate? Kahneman would suggest that the combination of what Trump does say and what he doesn’t leaves it up to every individual listener to create their own story. Their system 1 is going to default to the easiest and most palatable one available to them: a happy story that fits their own worldview and is likely to encourage them to support Trump.
Ten Word Answers
As an information theorist, I know all too well that there must be a ‘linguistic Shannon limit’ to the amount of semantic meaning one can compress into a single word.  One is ultimately forced to attempt to form sentences to convey more meaning. But usually the less politicians say, the less trouble they can get into — a lesson hard won through generations of political fighting.
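As a rough illustration of that limit (my own sketch, not tied to any particular source), the empirical Shannon entropy of a token stream gives the average number of bits needed per word, an upper bound on how compactly a given vocabulary usage can be coded. This uses the standard plug-in estimate from observed frequencies:

```python
import math
from collections import Counter

def entropy_bits(tokens):
    """Empirical Shannon entropy, in bits per token, of a sequence.

    H = -sum(p * log2(p)) over the observed relative frequencies p.
    """
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A perfectly predictable stream (one word repeated) has entropy 0; a uniform choice among two words has entropy exactly 1 bit per token. English prose typically lands far below the entropy of its raw character inventory, which is part of why so little meaning fits in any single word.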
I’m reminded of a scene from The West Wing television series. In season 4, episode 6 which aired on October 30, 2002 on NBC, Game On had a poignant moment (video clip below) which is germane to our subject: 
Moderator: Governor Ritchie, many economists have stated that the tax cut, which is the centrepiece of your economic agenda, could actually harm the economy. Is now really the time to cut taxes?

Governor Ritchie, R-FL: You bet it is. We need to cut taxes for one reason – the American people know how to spend their money better than the federal government does.

Moderator: Mr. President, your rebuttal.

President Bartlet: There it is…
That’s the 10 word answer my staff’s been looking for for 2 weeks. There it is.
10 word answers can kill you in political campaigns — they’re the tip of the sword.
Here’s my question: What are the next 10 words of your answer?
“Your taxes are too high?” So are mine…
Give me the next 10 words: How are we going to do it?
Give me 10 after that — I’ll drop out of the race right now.
Every once in a while — every once in a while, there’s a day with an absolute right and an absolute wrong, but those days almost always include body counts. Other than that there aren’t very many un-nuanced moments in leading a country that’s way too big for 10 words.
I’m the President of the United States, not the president of the people who agree with me. And by the way, if the left has a problem with that, they should vote for somebody else.
As someone who studies information theory and complexity theory and even delves into sub-topics like complexity and economics, I can agree wholeheartedly with the sentiment. Though again, here I can also see the massive gaps between system 1 and 2 that force us to want to simplify things down to such a base level that we don’t have to do the work to puzzle them out.
(And yes, that is Jennifer Aniston’s father playing the moderator.)
One can’t help but wonder why Mr. Trump doesn’t seem to have ever gone past the first ten words. Is it because he isn’t capable? Isn’t interested? Or does he instinctively know better? It would seem that he’s been doing business by using the uncertainty inherent in his speech for decades, but always operating on what he meant (or thought he wanted to mean) rather than on what the other party heard and thought they understood. If it ain’t broke, don’t fix it.
Idiocracy or Something Worse?
In our increasingly specialized world, people eventually have to give in and quit doing some tasks that everyone used to do for themselves. Yesterday I saw a lifeworn woman in her 70s pushing a wheeled wire basket with a 5 gallon container of water from the store to her home. As she shuffled along, I contemplated Thracian people from the fourth century BCE doing the same thing, except they likely carried amphorae, possibly with a yoke, and without the benefit of the $10 manufactured shopping cart. Twenty thousand years before that, people were still carrying their own water, but possibly without even the benefit of earthenware containers. Things in human history have changed very slowly for the most part, but as we continually sub-specialize further and further, we need to remember that we can’t give up one of the primary functions that makes us human: the ability to think deeply and analytically for ourselves.
I suspect that far too many people are too wrapped up in their own lives and problems to listen to more than the ten word answers our politicians are advertising to us. We need to remember to ask for the next ten words and the ten after that.
Otherwise there are two extreme possible outcomes:
We’re either at the beginning of what Mike Judge would term Idiocracy. 
Here, one is tempted to quote George Santayana’s famous line (from The Life of Reason, 1905), “Those who cannot remember the past are condemned to repeat it.” However, I far prefer the following as more apropos to our present national situation:
If Cliff Nazarro comes back to run for president, I hope no one falls for his joke just because he wasn’t laughing as he acted it out. If his instructions for fixing the wagon (America) are any indication, the voters who are listening and making the repairs will be in severe pain.
I ran across a link to this textbook by way of a standing Google alert, and was excited to check it out. I was immediately disappointed to think that I would have to wait another month and change for the physical textbook to be released, but made my pre-order directly. Then with a bit of digging around, I realized that individual chapters are available immediately to quench my thirst until the physical text is printed next month.
During decades the study of networks has been divided between the efforts of social scientists and natural scientists, two groups of scholars who often do not see eye to eye. In this review I present an effort to mutually translate the work conducted by scholars from both of these academic fronts hoping to continue to unify what has become a diverging body of literature. I argue that social and natural scientists fail to see eye to eye because they have diverging academic goals. Social scientists focus on explaining how context specific social and economic mechanisms drive the structure of networks and on how networks shape social and economic outcomes. By contrast, natural scientists focus primarily on modeling network characteristics that are independent of context, since their focus is to identify universal characteristics of systems instead of context specific mechanisms. In the following pages I discuss the differences between both of these literatures by summarizing the parallel theories advanced to explain link formation and the applications used by scholars in each field to justify their approach to network science. I conclude by providing an outlook on how these literatures can be further unified.
UNAM Mexico City has an available free download of Carlos Gershenson’s 2007 text.
Complex systems are usually difficult to design and control. There are several particular methods for coping with complexity, but there is no general approach to build complex systems. In this book I propose a methodology to aid engineers in the design and control of complex systems. This is based on the description of systems as self-organizing. Starting from the agent metaphor, the methodology proposes a conceptual framework and a series of steps to follow to find proper mechanisms that will promote elements to find solutions by actively interacting among themselves.
On the Origins of Life, Meaning, and the Universe Itself
I’m already a major chunk of the way through the book, having had an early ebook version of the text prior to publication. This is the published first edition with all the diagrams which I wanted to have prior to finishing my full review, which is forthcoming.
One thing I will mention is that it’s got quite a bit more philosophy in it than most popular science books with such a physics bent. Those who aren’t already up to speed on the math and science of modern physics can certainly benefit from the book (like most popular science books of its stripe, it doesn’t have any equations — hairy or otherwise), and it’s certain to help many toward becoming members of both of C.P. Snow’s two cultures. It might not be the best place for mathematicians and physicists to start moving toward the humanities with the included philosophy as the philosophy is very light and spotty in places and the explanations of the portions they’re already aware of may put them out a bit.
I’m most interested to see how he views complexity and thinking in the final portion of the text.
The Santa Fe Institute's free online course "Introduction to Information Theory" taught by Seth Lloyd via Complexity Explorer.
Many readers often ask me for resources for delving into the basics of information theory. I hadn’t posted it before, but the Santa Fe Institute recently had an online course Introduction to Information Theory through their Complexity Explorer, which has some other excellent offerings. It included videos, fora, and other resources and was taught by the esteemed physicist and professor Seth Lloyd. There are a number of currently active students still learning and posting there.
Introduction to Information Theory
About the Tutorial:
This tutorial introduces fundamental concepts in information theory. Information theory has made considerable impact in complex systems, and has in part co-evolved with complexity science. Research areas ranging from ecology and biology to aerospace and information technology have all seen benefits from the growth of information theory.
In this tutorial, students will follow the development of information theory from bits to modern application in computing and communication. Along the way Seth Lloyd introduces valuable topics in information theory such as mutual information, boolean logic, channel capacity, and the natural relationship between information and entropy.
Lloyd coherently covers a substantial amount of material while limiting discussion of the mathematics involved. When formulas or derivations are considered, Lloyd describes the mathematics such that less advanced math students will find the tutorial accessible. Prerequisites for this tutorial are an understanding of logarithms, and at least a year of high-school algebra.
About the Instructor(s):
Professor Seth Lloyd is a principal investigator in the Research Laboratory of Electronics (RLE) at the Massachusetts Institute of Technology (MIT). He received his A.B. from Harvard College in 1982, the Certificate of Advanced Study in Mathematics (Part III) and an M. Phil. in Philosophy of Science from Cambridge University in 1983 and 1984 under a Marshall Fellowship, and a Ph.D. in Physics in 1988 from Rockefeller University under the supervision of Professor Heinz Pagels.
From 1988 to 1991, Professor Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Professor Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Since 1988, Professor Lloyd has also been an adjunct faculty member at the Santa Fe Institute.
Professor Lloyd has performed seminal work in the fields of quantum computation and quantum communications, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon’s noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.
Professor Lloyd is a member of the American Physical Society and the American Society of Mechanical Engineers.
Yoav Kallus is an Omidyar Fellow at the Santa Fe Institute. His research at the boundary of statistical physics and geometry looks at how and when simple interactions lead to the formation of complex order in materials and when preferred local order leads to system-wide disorder. Yoav holds a B.Sc. in physics from Rice University and a Ph.D. in physics from Cornell University. Before joining the Santa Fe Institute, Yoav was a postdoctoral fellow at the Princeton Center for Theoretical Science in Princeton University.
Recent research on global language networks has interesting relations to big history, complexity economics, and current politics.
Yesterday I ran across this nice little video explaining some recent research on global language networks. It’s not only interesting in its own right, but is a fantastic example of science communication as well.
I’m interested in some of the information theoretic aspects of this as well as the relation of this to the area of corpus linguistics. I’m also curious if one could build worthwhile datasets like this for the ancient world (cross reference some of the sources I touch on in relation to the Dickinson College Commentaries within Latin Pedagogy and the Digital Humanities) to see what influences different language cultures have had on each other. Perhaps the historical record could help to validate some of the predictions made in relation to the future?
The paper “Global distribution and drivers of language extinction risk” indicates that of all the variables tested, economic growth was most strongly linked to language loss.
Finally, I can also only think about how this research may help to temper some of the xenophobic discussion that occurs in American political life with respect to fears relating to Mexican immigration issues as well as the position of China in the world economy.
Those intrigued by the video may find the website set up by the researchers very interesting. It contains links to the full paper as well as visualizations and links to the data used.
Languages vary enormously in global importance because of historical, demographic, political, and technological forces. However, beyond simple measures of population and economic power, there has been no rigorous quantitative way to define the global influence of languages. Here we use the structure of the networks connecting multilingual speakers and translated texts, as expressed in book translations, multiple language editions of Wikipedia, and Twitter, to provide a concept of language importance that goes beyond simple economic or demographic measures. We find that the structure of these three global language networks (GLNs) is centered on English as a global hub and around a handful of intermediate hub languages, which include Spanish, German, French, Russian, Portuguese, and Chinese. We validate the measure of a language’s centrality in the three GLNs by showing that it exhibits a strong correlation with two independent measures of the number of famous people born in the countries associated with that language. These results suggest that the position of a language in the GLN contributes to the visibility of its speakers and the global popularity of the cultural content they produce.
“A language like Dutch — spoken by 27 million people — can be a disproportionately large conduit, compared with a language like Arabic, which has a whopping 530 million native and second-language speakers,” Science reports. “This is because the Dutch are very multilingual and very online.”
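The abstract describes ranking languages by their centrality in a network of multilingual speakers and translations. As a toy illustration of that kind of measure (not the paper’s actual data or pipeline), here is eigenvector centrality computed by power iteration on a tiny invented “translation” network; the languages, edges, and weights are assumptions for the sketch only:

```python
# Toy sketch: eigenvector centrality on a tiny undirected weighted
# "translation" network, the style of hub measure the GLN paper uses.
# All edges and weights here are invented for illustration.

edges = {
    ("English", "Spanish"): 5,
    ("English", "Dutch"): 3,
    ("English", "Arabic"): 2,
    ("Spanish", "Dutch"): 1,
}

langs = sorted({lang for pair in edges for lang in pair})
index = {lang: i for i, lang in enumerate(langs)}
n = len(langs)

# Build a symmetric weighted adjacency matrix.
A = [[0.0] * n for _ in range(n)]
for (a, b), w in edges.items():
    A[index[a]][index[b]] = w
    A[index[b]][index[a]] = w

# Power iteration: repeatedly apply A and renormalize; the vector
# converges to the principal eigenvector (the centrality scores).
v = [1.0] * n
for _ in range(100):
    v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    top = max(v)
    v = [x / top for x in v]

centrality = dict(zip(langs, v))
print(max(centrality, key=centrality.get))  # prints "English"
```

In this contrived network English sits on every path, so it dominates the centrality ranking regardless of raw speaker counts, which is the intuition behind Dutch outranking Arabic in the quote above.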
Information is a precise concept that can be defined mathematically, but its relationship to what we call "knowledge" is not always made clear. Furthermore, the concepts "entropy" and "information", while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
A proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
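To make the entropy/information distinction the abstract draws a bit more concrete, here is a minimal sketch of Shannon entropy as uncertainty before observation; the numbers are made up for illustration and are not from Adami’s paper:

```python
# Minimal sketch: entropy H(X) measures uncertainty about an outcome,
# and information is the reduction of that uncertainty by observation.
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally unpredictable: exactly 1 bit of entropy.
print(entropy([0.5, 0.5]))   # prints 1.0

# A biased coin is more predictable, hence lower entropy (~0.469 bits).
print(entropy([0.9, 0.1]))

# On Adami's prediction view, a perfect predictor of the outcome
# supplies information equal to the prior entropy: nothing is left
# uncertain once its output is known.
```

The point of the hedged example: entropy is a property of the distribution alone, while information only appears relative to a predictor or measurement, which is the care the abstract says the literature often fails to take.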
Comments: 19 pages, 2 figures. To appear in Philosophical Transactions of the Royal Society A
Subjects: Adaptation and Self-Organizing Systems (nlin.AO); Information Theory (cs.IT); Biological Physics (physics.bio-ph); Quantitative Methods (q-bio.QM)
Cite as: arXiv:1601.06176 [nlin.AO] (or arXiv:1601.06176v1 [nlin.AO] for this version)
In the 1870s Ewald Hering in Prague and Samuel Butler in London laid the foundations. Butler’s work was later taken up by Richard Semon in Munich, whose writings inspired the young Erwin Schrödinger in the early decades of the 20th century.
When it was published, I read Kevin Hartnett’s article and interview with Christoph Adami, The Information Theory of Life, in Quanta Magazine. I recently revisited it, read through the commentary, and stumbled upon an interesting quote relating to the history of information in biology:
For those interested in reading more on this historical tidbit, I’ve dug up a copy of the primary Forsdyke reference, which first appeared on arXiv (prior to its ultimate publication in History of Psychiatry [.pdf]):
Abstract: Today’s ‘theory of mind’ (ToM) concept is rooted in the distinction of nineteenth century philosopher William Clifford between ‘objects’ that can be directly perceived, and ‘ejects,’ such as the mind of another person, which are inferred from one’s subjective knowledge of one’s own mind. A founder, with Charles Darwin, of the discipline of comparative psychology, George Romanes considered the minds of animals as ejects, an idea that could be generalized to ‘society as eject’ and, ultimately, ‘the world as an eject’ – mind in the universe. Yet, Romanes and Clifford only vaguely connected mind with the abstraction we call ‘information,’ which needs ‘a vehicle of symbols’ – a material transporting medium. However, Samuel Butler was able to address, in informational terms depleted of theological trappings, both organic evolution and mind in the universe. This view harmonizes with insights arising from modern DNA research, the relative immortality of ‘selfish’ genes, and some startling recent developments in brain research.
Comments: Accepted for publication in History of Psychiatry. 31 pages including 3 footnotes. Based on a lecture given at Santa Clara University, February 28th 2014, at a Bannan Institute Symposium on ‘Science and Seeking: Rethinking the God Question in the Lab, Cosmos, and Classroom.’
The original arXiv article also referenced two lectures which are appended below:
[Original Draft of this was written on December 14, 2015.]