Checkin UCLA Store (Ackerman Union)

This looks like fun: Signals and Boundaries: Building Blocks for Complex Adaptive Systems by John H. Holland

Ackerman Union, Los Angeles, California, United States of America

Syndicated copies to:

Buzzfeed implements the IndieWeb concept of backfeed to limit filter bubbles

The evolution of comments on articles takes a new journalistic turn

Outside Your Bubble

This past Wednesday, BuzzFeed rolled out a new feature on their website called “Outside Your Bubble”. I think the concept is so well described and so laudable from a journalistic perspective that I’ll excerpt their editor-in-chief’s entire description of the feature below. In short, they’ll be featuring some of the commentary on their pieces by pulling it in from social media silos.

What is interesting is that this isn’t a new concept, and, even more intriguing, there’s some great off-the-shelf technology that already enables this type of functionality.

The IndieWeb and backfeed

For the past several years, there’s been a growing movement on the internet known as the IndieWeb, a “people-focused alternative to the ‘corporate web’.” Their primary goal is for people to better control their online identities by owning their own domain and the content they put on it while also allowing them to be better connected.

As part of the movement, users can more easily post their content on their own site and syndicate it elsewhere (a process known by the acronym POSSE). Many of these social media sites allow for increased distribution, but they also have the side effect of cordoning off or siloing the conversation. As a result many IndieWeb proponents backfeed the comments, likes, and other interactions on their syndicated content back to their original post.

Backfeed is the process of pulling interactions on your syndicated content back (AKA reverse syndication) to your original posts.
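In data-flow terms, backfeed is just a merge: reactions collected from each silo copy of a post are normalized and appended to the original post’s comment stream. A toy sketch of that merge, with all post and reaction structures invented purely for illustration (this is not any real service’s API):

```python
# Toy model of backfeed: merge silo reactions into the original post.
# All data structures here are illustrative, not any real API.

original_post = {
    "url": "https://your-site.example/2017/outside-your-bubble",
    "comments": [],
}

# Reactions harvested from syndicated copies (e.g. by a service like Brid.gy).
silo_reactions = [
    {"silo": "twitter", "author": "@alice", "type": "reply", "text": "Great point!"},
    {"silo": "facebook", "author": "Bob", "type": "like", "text": ""},
]

def backfeed(post, reactions):
    """Append normalized silo reactions to the original post's comment list."""
    for r in reactions:
        post["comments"].append({
            "author": r["author"],
            "kind": r["type"],
            "text": r["text"],
            "via": r["silo"],  # keep the provenance so readers see the source silo
        })
    return post

backfeed(original_post, silo_reactions)
# original_post["comments"] now holds both reactions, attributed to their silos.
```

The key design point is that the original post remains the canonical home of the conversation; each silo copy is just another feed to be harvested.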

This concept of backfeed is exactly what BuzzFeed is proposing, but with a more editorial slant meant to provide additional thought and analysis on their original piece. In some sense, from a journalistic perspective, it also seems like an evolutionary step towards making traditional comments have more value to the casual reader. Instead of a simple chronological list of comments which may or may not have any value, they’re also using the feature to surface the more valuable comments which appear on their pieces. In a crowded journalistic marketplace, which is often misguided by market metrics like numbers of clicks, I have a feeling that more discerning readers will want this type of surfaced value if it’s done well. And discerning readers can bring their own value to a content publisher.

I find it interesting not only that BuzzFeed is using the concept of backfeed like this, but that in Ben Smith’s piece he eschews the typical verbiage ascribed to social media sites, namely the common phrase “walled garden,” in favor of the word silo, which is also the word adopted by the IndieWeb movement to describe a “centralized web site typically owned by a for-profit corporation that stakes some claim to content contributed to it and restricts access in some way (has walls).”

To some extent, it almost appears that the BuzzFeed piece parrots back portions of the Why IndieWeb? page on the IndieWeb wiki.

Helping You See Outside Your Bubble | BuzzFeed

A new feature on some of our most widely shared articles.

BuzzFeed News is launching an experiment this week called “Outside Your Bubble,” an attempt to give our audience a glimpse at what’s happening outside their own social media spaces.

The Outside Your Bubble feature will appear as a module at the bottom of some widely shared news articles and will pull in what people are saying about the piece on Twitter, Facebook, Reddit, the web, and other platforms. It’s a response to the reality that often the same story will have two or three distinct and siloed conversations taking place around it on social media, where people talk to the like-minded without even being aware of other perspectives on the same reporting.

Our goal is to give readers a sense of these conversations around an article, and to add a kind of transparency that has been lost in the rise of social-media-driven filter bubbles. We view it in part as a way to amplify the work of BuzzFeed News reporters, and to add for readers a sense of the context in which news lives now.

And if you think there’s a relevant viewpoint we’re missing, you can contact the curator at bubble@buzzfeed.com.

Source: Helping You See Outside Your Bubble | Ben Smith for BuzzFeed

Editorial Perspective and Diminishing Returns

The big caveat on this type of journalistic functionality is that it may become a game of diminishing returns. When a new story comes out, most of the current ecosystem is geared too heavily towards freshness: which story is newest? It would be far richer if there were better canonical ways of indicating which articles were the most thorough, accurate, timely and interesting instead of just focusing on which was simply the most recent. Google News, as an example, might feature a breaking story for several hours, but thereafter every Tom, Dick, and Harry outlet on the planet will have their version of the story–often just a poorer quality rehash of the original without any new content–which somehow becomes the top of the heap because it’s the newest in the batch. Aram Zucker-Scharff mentioned this type of issue a few days ago in a tweetstorm which I touched upon last week.

Worse, for the feature to work well it relies on the continuing compilation of responses, and over time that editorial effort seems somewhat wasted as the audience for the article slowly diminishes. Thus the largest portion of the audience sees no commentary at all, while ever-dwindling incoming audiences get the richer content. This is just the opposite of the aphorism “the early bird gets the worm.” Even if the outlet compiled responses on a story from social media in real time, it becomes a huge effort to stay current and capture eyeballs at scale. Hopefully the two effects will balance each other out, creating an overall increase in value for both the publisher and the audience and a more profound effect on the overall journalism ecosystem.

Personally, and from a user experience perspective, I’d like to have the ability to subscribe to an article I read and enjoyed so that I can come back to it at a prescribed later date to see what the further thoughts on it were. As things stand, it’s painfully difficult and time consuming as a reader to engage with interesting pieces at a deeper level. Publications that can do this type of coverage and/or provide further analysis on ongoing topics will have a potential edge over me-too publications that simply rehash the same stories on a regular basis. Outlets could also leverage this type of user interface, and readers’ similar desire for depth, to increase their relationship with their readers by providing value that others won’t or can’t.

Want more on “The IndieWeb and Journalism”?
See: Some thoughts about how journalists could improve their online presences with IndieWeb principles along with a mini-case study of a site that is employing some of these ideas.

In some sense, this journalistic workflow reminds me of how much I miss Slate.com’s Today’s Papers feature, in which someone read through the early editions of 4-5 major newspapers, did a quick synopsis of the day’s headlines, and then analyzed the coverage of each to show how the stories differed, who got the real scoop, and at times declared a “winner” in coverage so that readers could then focus on reading that particular piece from that particular outlet.

Backfeed in action

What do you think about this idea? Will it change journalism and how readers consume it?

As always, you can feel free to comment on this story directly below, but you can also go to most of the syndicated versions of this post indicated below and reply to or comment on them there. Your responses via Twitter, Facebook, and Google+ will be backfed via Brid.gy to this post and appear as comments below, so the entire audience will be able to see the otherwise disaggregated conversation compiled into one place.

If you prefer to own the content of your own comment or are worried your voice could be “moderated out of existence” (an experience I’ve felt the sting of in the past), feel free to post your response on your own website or blog, include a permalink to this article in your response, put the URL of your commentary into the box labeled “URL/Permalink of your Article”, and then click the “Ping Me” button. My site will then grab your response and add it to the comment stream with all the others.
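The “Ping Me” flow described above is essentially the Webmention protocol: your site notifies the original post’s URL that a response to it exists. A minimal sketch of the sender side, assuming the target page advertises its endpoint with a `<link rel="webmention">` tag; all URLs below are hypothetical placeholders:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode

class WebmentionEndpointFinder(HTMLParser):
    """Finds the first <link rel="webmention" href="..."> in an HTML page."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        rels = a.get("rel", "").split()
        if self.endpoint is None and tag == "link" and "webmention" in rels:
            self.endpoint = a.get("href")

def build_webmention(target_html, source_url, target_url):
    """Return (endpoint, form_body) for a Webmention POST, or (None, None)
    if the target page does not advertise an endpoint."""
    finder = WebmentionEndpointFinder()
    finder.feed(target_html)
    if finder.endpoint is None:
        return None, None
    body = urlencode({"source": source_url, "target": target_url})
    return finder.endpoint, body

# Example with an invented target page and URLs:
page = '<html><head><link rel="webmention" href="https://example.com/webmention"></head></html>'
endpoint, body = build_webmention(
    page,
    source_url="https://your-site.example/reply-to-buzzfeed",
    target_url="https://example.com/original-post",
)
# The actual notification would be an HTTP POST of `body` to `endpoint`
# with Content-Type application/x-www-form-urlencoded.
```

The receiving site then fetches the source URL, verifies it really links to the target, and adds it to the comment stream — which is what the “Ping Me” button automates here.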

Backfeed on!

H/T to Ryan Barrett for pointing out the BuzzFeed article.


🔖 Linking Economic Complexity, Institutions and Income Inequality

Linking Economic Complexity, Institutions and Income Inequality by Dominik Hartmann, Miguel R. Guevara, Cristian Jara-Figueroa, Manuel Aristarán, César A. Hidalgo (arxiv.org)
A country's mix of products predicts its subsequent pattern of diversification and economic growth. But does this product mix also predict income inequality? Here we combine methods from econometrics, network science, and economic complexity to show that countries exporting complex products (as measured by the Economic Complexity Index) have lower levels of income inequality than countries exporting simpler products. Using multivariate regression analysis, we show that economic complexity is a significant and negative predictor of income inequality and that this relationship is robust to controlling for aggregate measures of income, institutions, export concentration, and human capital. Moreover, we introduce a measure that associates a product to a level of income inequality equal to the average GINI of the countries exporting that product (weighted by the share the product represents in that country's export basket). We use this measure together with the network of related products (or product space) to illustrate how the development of new products is associated with changes in income inequality. These findings show that economic complexity captures information about an economy's level of development that is relevant to the ways an economy generates and distributes its income. Moreover, these findings suggest that a country's productive structure may limit its range of income inequality. Finally, we make our results available through an online resource that allows for its users to visualize the structural transformation of over 150 countries and their associated changes in income inequality between 1963 and 2008.

MIT has a pretty good layperson’s overview of this article. The final published version is separately available.

🔖 Post filtering fixes at Homebrew Website Club | Colin Devroe

Post filtering fixes at Homebrew Website Club by Colin Devroe (cdevroe.com)
Last night Tucker Hottes, Den Temple and I held the first Homebrew Website Club at The Keys in Scranton, PA. I really appreciate that HWC will force me to set aside some time to work on my personal site since it is often neglected for more pressing projects.

Nota bene: Colin is dogfooding his IndieWeb-friendly WordPress theme on GitHub! It’s a beautiful, simple, and very clean theme for a personal website/blog.

Colin, do you mind if we provide a link to your theme on https://indieweb.org/WordPress/Themes for others to potentially use and/or improve upon? (See also discussion at https://indieweb.org/WordPress/Development#Themes.)


🔖 Want to read: From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett

From Bacteria to Bach and Back: The Evolution of Minds by Daniel C. Dennett (W. W. Norton & Company; 1 edition, 496 pages (February 7, 2017))
One of America’s foremost philosophers offers a major new account of the origins of the conscious mind.

How did we come to have minds?

For centuries, this question has intrigued psychologists, physicists, poets, and philosophers, who have wondered how the human mind developed its unrivaled ability to create, imagine, and explain. Disciples of Darwin have long aspired to explain how consciousness, language, and culture could have appeared through natural selection, blazing promising trails that tend, however, to end in confusion and controversy. Even though our understanding of the inner workings of proteins, neurons, and DNA is deeper than ever before, the matter of how our minds came to be has largely remained a mystery.

That is now changing, says Daniel C. Dennett. In From Bacteria to Bach and Back, his most comprehensive exploration of evolutionary thinking yet, he builds on ideas from computer science and biology to show how a comprehending mind could in fact have arisen from a mindless process of natural selection. Part philosophical whodunit, part bold scientific conjecture, this landmark work enlarges themes that have sustained Dennett’s legendary career at the forefront of philosophical thought.

In his inimitable style―laced with wit and arresting thought experiments―Dennett explains that a crucial shift occurred when humans developed the ability to share memes, or ways of doing things not based in genetic instinct. Language, itself composed of memes, turbocharged this interplay. Competition among memes―a form of natural selection―produced thinking tools so well-designed that they gave us the power to design our own memes. The result, a mind that not only perceives and controls but can create and comprehend, was thus largely shaped by the process of cultural evolution.

An agenda-setting book for a new generation of philosophers, scientists, and thinkers, From Bacteria to Bach and Back will delight and entertain anyone eager to make sense of how the mind works and how it came about.

4 color, 18 black-and-white illustrations



IPAM Workshop on Regulatory and Epigenetic Stochasticity in Development and Disease, March 1-3

IPAM Workshop on Regulatory and Epigenetic Stochasticity in Development and Disease (Institute for Pure and Applied Mathematics, UCLA | March 1-3, 2017)
Epigenetics refers to information transmitted during cell division other than the DNA sequence per se, and it is the language that distinguishes stem cells from somatic cells, one organ from another, and even identical twins from each other. In contrast to the DNA sequence, the epigenome is relatively susceptible to modification by the environment as well as stochastic perturbations over time, adding to phenotypic diversity in the population. Despite its strong ties to the environment, epigenetics has never been well reconciled to evolutionary thinking, and in fact there is now strong evidence against the transmission of so-called “epi-alleles,” i.e. epigenetic modifications that pass through the germline.

However, genetic variants that regulate stochastic fluctuation of gene expression and phenotypes in the offspring appear to be transmitted as an epigenetic or even Lamarckian trait. Furthermore, even the normal process of cellular differentiation from a single cell to a complex organism is not understood well from a mathematical point of view. There is increasingly strong evidence that stem cells are highly heterogeneous and in fact stochasticity is necessary for pluripotency. This process appears to be tightly regulated through the epigenome in development. Moreover, in these biological contexts, “stochasticity” is hardly synonymous with “noise”, which often refers to variation which obscures a “true signal” (e.g., measurement error) or which is structural, as in physics (e.g., quantum noise). In contrast, “stochastic regulation” refers to purposeful, programmed variation; the fluctuations are random but there is no true signal to mask.

This workshop will serve as a forum for scientists and engineers with an interest in computational biology to explore the role of stochasticity in regulation, development and evolution, and its epigenetic basis. Just as thinking about stochasticity was transformative in physics and in some areas of biology, it promises to fundamentally transform modern genetics and help to explain phase transitions such as differentiation and cancer.

This workshop will include a poster session; a request for poster titles will be sent to registered participants in advance of the workshop.

Speaker List:
Adam Arkin (Lawrence Berkeley Laboratory)
Gábor Balázsi (SUNY Stony Brook)
Domitilla Del Vecchio (Massachusetts Institute of Technology)
Michael Elowitz (California Institute of Technology)
Andrew Feinberg (Johns Hopkins University)
Don Geman (Johns Hopkins University)
Anita Göndör (Karolinska Institutet)
John Goutsias (Johns Hopkins University)
Garrett Jenkinson (Johns Hopkins University)
Andre Levchenko (Yale University)
Olgica Milenkovic (University of Illinois)
Johan Paulsson (Harvard University)
Leor Weinberger (University of California, San Francisco (UCSF))


🔖 IPAM Workshop on Gauge Theory and Categorification, March 6-10

IPAM Workshop on Gauge Theory and Categorification (Institute of Pure and Applied Mathematics at UCLA - March 6-10, 2017)
The equations of gauge theory lie at the heart of our understanding of particle physics. The Standard Model, which describes the electromagnetic, weak, and strong forces, is based on the Yang-Mills equations. Starting with the work of Donaldson in the 1980s, gauge theory has also been successfully applied in other areas of pure mathematics, such as low dimensional topology, symplectic geometry, and algebraic geometry.

More recently, Witten proposed a gauge-theoretic interpretation of Khovanov homology, a knot invariant whose origins lie in representation theory. Khovanov homology is a “categorification” of the celebrated Jones polynomial, in the sense that its Euler characteristic recovers this polynomial. At the moment, Khovanov homology is only defined for knots in the three-sphere, but Witten’s proposal holds the promise of generalizations to other three-manifolds, and perhaps of producing new invariants of four-manifolds.

This workshop will bring together researchers from several different fields (theoretical physics, mathematical gauge theory, topology, analysis / PDE, representation theory, symplectic geometry, and algebraic geometry), and thus help facilitate connections between these areas. The common focus will be to understand Khovanov homology and related invariants through the lens of gauge theory.

This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.

Edward Witten will be giving two public lectures as part of the Green Family Lecture series:

March 6, 2017
From Gauge Theory to Khovanov Homology Via Floer Theory
The goal of the lecture is to describe a gauge theory approach to Khovanov homology of knots, in particular, to motivate the relevant gauge theory equations in a way that does not require too much physics background. I will give a gauge theory perspective on the construction of singly-graded Khovanov homology by Abouzaid and Smith.

March 8, 2017
An Introduction to the SYK Model
The Sachdev-Ye model was originally a model of quantum spin liquids that was introduced in the mid-1990s. In recent years, it has been reinterpreted by Kitaev as a model of quantum chaos and black holes. This lecture will be primarily a gentle introduction to the SYK model, though I will also describe a few more recent results.


Entropy | Special Issue: Maximum Entropy and Bayesian Methods

Entropy | Special Issue: Maximum Entropy and Bayesian Methods (mdpi.com)
Open for submission now
Deadline for manuscript submissions: 31 August 2017
A special issue of Entropy (ISSN 1099-4300).


Special Issue Editor

Guest Editor: Dr. Brendon J. Brewer

Department of Statistics, The University of Auckland, Private Bag 92019, Auckland 1142, New Zealand
Website | E-Mail | Phone: +64 27 500 1336
Interests: Bayesian inference, Markov chain Monte Carlo, nested sampling, MaxEnt

Special Issue Information

Dear Colleagues,

Whereas Bayesian inference has now achieved mainstream acceptance and is widely used throughout the sciences, associated ideas such as the principle of maximum entropy (implicit in the work of Gibbs, and developed further by Ed Jaynes and others) have not. There are strong arguments that the principle (and variations, such as maximum relative entropy) is of fundamental importance, but the literature also contains many misguided attempts at applying it, leading to much confusion.

This Special Issue will focus on Bayesian inference and MaxEnt. Some open questions that spring to mind are: Which proposed ways of using entropy (and its maximisation) in inference are legitimate, which are not, and why? Where can we obtain constraints on probability assignments, the input needed by the MaxEnt procedure?

More generally, papers exploring any interesting connections between probabilistic inference and information theory will be considered. Papers presenting high quality applications, or discussing computational methods in these areas, are also welcome.

Dr. Brendon J. Brewer
Guest Editor

Submission

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs).

No papers have been published in this special issue yet.

Source: Entropy | Special Issue: Maximum Entropy and Bayesian Methods

🔖 The Hypercycle: A Principle of Natural Self-Organization | Springer

The Hypercycle: A Principle of Natural Self-Organization by Manfred Eigen and Peter Schuster (Springer, 1979)
This book originated from a series of papers which were published in “Die Naturwissenschaften” in 1977/78. Its division into three parts reflects a logical structure, which may be abstracted in the form of three theses:

A. Hypercycles are a principle of natural self-organization allowing an integration and coherent evolution of a set of functionally coupled self-replicative entities.

B. Hypercycles are a novel class of nonlinear reaction networks with unique properties, amenable to a unified mathematical treatment.

C. Hypercycles are able to originate in the mutant distribution of a single Darwinian quasi-species through stabilization of its diverging mutant genes. Once nucleated, hypercycles evolve to higher complexity by a process analogous to gene duplication and specialization. In order to outline the meaning of the first statement we may refer to another principle of material self-organization, namely to Darwin’s principle of natural selection. This principle, as we see it today, represents the only understood means for creating information, be it the blueprint for a complex living organism which evolved from less complex ancestral forms, or be it a meaningful sequence of letters the selection of which can be simulated by evolutionary model games.

Part A in .pdf format.


🔖 Cognition and biology: perspectives from information theory

Cognition and biology: perspectives from information theory by Roderick Wallace (ncbi.nlm.nih.gov)
The intimate relation between biology and cognition can be formally examined through statistical models constrained by the asymptotic limit theorems of communication theory, augmented by methods from statistical mechanics and nonequilibrium thermodynamics. Cognition, often involving submodules that act as information sources, is ubiquitous across the living state. Less metabolic free energy is consumed by permitting crosstalk between biological information sources than by isolating them, leading to evolutionary exaptations that assemble shifting, tunable cognitive arrays at multiple scales, and levels of organization to meet dynamic patterns of threat and opportunity. Cognition is thus necessary for life, but it is not sufficient: An organism represents a highly patterned outcome of path-dependent, blind, variation, selection, interaction, and chance extinction in the context of an adequate flow of free energy and an environment fit for development. Complex, interacting cognitive processes within an organism both record and instantiate those evolutionary and developmental trajectories.

🔖 Statistical Physics of Adaptation

Statistical Physics of Adaptation by Nikolay Perunov, Robert A. Marsland, and Jeremy L. England (journals.aps.org Phys. Rev. X 6, 021036 (2016))
Whether by virtue of being prepared in a slowly relaxing, high-free energy initial condition, or because they are constantly dissipating energy absorbed from a strong external drive, many systems subject to thermal fluctuations are not expected to behave in the way they would at thermal equilibrium. Rather, the probability of finding such a system in a given microscopic arrangement may deviate strongly from the Boltzmann distribution, raising the question of whether thermodynamics still has anything to tell us about which arrangements are the most likely to be observed. In this work, we build on past results governing nonequilibrium thermodynamics and define a generalized Helmholtz free energy that exactly delineates the various factors that quantitatively contribute to the relative probabilities of different outcomes in far-from-equilibrium stochastic dynamics. By applying this expression to the analysis of two examples—namely, a particle hopping in an oscillating energy landscape and a population composed of two types of exponentially growing self-replicators—we illustrate a simple relationship between outcome-likelihood and dissipative history. In closing, we discuss the possible relevance of such a thermodynamic principle for our understanding of self-organization in complex systems, paying particular attention to a possible analogy to the way evolutionary adaptations emerge in living things.

🔖 Meaning = Information + Evolution by Carlo Rovelli

Meaning = Information + Evolution by Carlo Rovelli (arxiv.org)
Notions like meaning, signal, intentionality, are difficult to relate to a physical word. I study a purely physical definition of "meaningful information", from which these notions can be derived. It is inspired by a model recently illustrated by Kolchinsky and Wolpert, and improves on Dretske classic work on the relation between knowledge and information. I discuss what makes a physical process into a "signal".

🔖 Irreversibility and Heat Generation in the Computing Process by R. Landauer

Irreversibility and Heat Generation in the Computing Process by R. Landauer (ieeexplore.ieee.org)
It is argued that computing machines inevitably involve devices which perform logical functions that do not have a single-valued inverse. This logical irreversibility is associated with physical irreversibility and requires a minimal heat generation, per machine cycle, typically of the order of kT for each irreversible function. This dissipation serves the purpose of standardizing signals and making them independent of their exact logical history. Two simple, but representative, models of bistable devices are subjected to a more detailed analysis of switching kinetics to yield the relationship between speed and energy dissipation, and to estimate the effects of errors induced by thermal fluctuations.

A classical paper in the history of entropy.


🔖 Why Boltzmann Brains Are Bad by Sean M. Carroll

Why Boltzmann Brains Are Bad by Sean M. Carroll (arxiv.org)
Some modern cosmological models predict the appearance of Boltzmann Brains: observers who randomly fluctuate out of a thermal bath rather than naturally evolving from a low-entropy Big Bang. A theory in which most observers are of the Boltzmann Brain type is generally thought to be unacceptable, although opinions differ. I argue that such theories are indeed unacceptable: the real problem is with fluctuations into observers who are locally identical to ordinary observers, and their existence cannot be swept under the rug by a choice of probability distributions over observers. The issue is not that the existence of such observers is ruled out by data, but that the theories that predict them are cognitively unstable: they cannot simultaneously be true and justifiably believed.