Cytoscape is an open source software platform for visualizing complex networks and integrating them with any type of attribute data. Many apps are available for various problem domains, including bioinformatics, social network analysis, and the semantic web.
Tag: network theory
Reply to Aaron Davis about links
(link hidden in the text, maybe?)
I’ve been in the habit of person-tagging people in posts to actively send them webmentions, but I also have worried about the extra “visual clutter” and cognitive load of the traditional presentation of links as mentioned by John. (If he wasn’t distracted by the visual underlines indicating links, he might have been as happy?) As a result, I’m now considering adding some CSS to my site so that some of these webmention links simply look like regular text. This way the notifications will be triggered, but without adding the seeming “cruft” visually or cognitively. Win-win? Thanks for the inspiration!
In your case here, you’ve kindly added enough context about what to expect about the included links that the reader can decide for themselves while still making your point. You should sleep easily on this point and continue linking to your heart’s content.
In some sense, I think that the more links the better. I suspect the broader thesis of Cesar Hidalgo’s book Why Information Grows: The Evolution of Order, from Atoms to Economies would give you some theoretical back up for the idea.
Highlights, Quotes, Annotations, & Marginalia from Linked: The New Science Of Network by Albert-László Barabási
Highlights, Quotes, Annotations, & Marginalia
Guide to highlight colors
Yellow–general highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Green–Reference to read
Blue–Interesting Quote
Gray–Typography Problem
Red–Example to work through
…the high barriers to becoming a Christian had to be abolished. Circumcision and the strict food laws had to be relaxed.
make it easier to create links!
But when you add enough links such that each node has an average of one link, a miracle happens: A unique giant cluster emerges.
Random network theory tells us that as the average number of links per node increases beyond the critical one, the number of nodes left out of the giant cluster decreases exponentially.
If the network is large, despite the links’ completely random placement, almost all nodes will have approximately the same number of links.
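The sudden emergence of the giant cluster at an average of one link per node is easy to see in simulation. Here is a minimal pure-Python sketch; the network size, seeds, and the two average degrees (one below the critical point, one above) are illustrative choices of mine, not from the book:

```python
import random

def er_graph(n, avg_degree, seed=0):
    """Sample an Erdos-Renyi random graph with the given expected average degree."""
    rng = random.Random(seed)
    p = avg_degree / (n - 1)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                edges.append((i, j))
    return edges

def largest_component(n, edges):
    """Size of the largest connected component, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    sizes = {}
    for v in range(n):
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values())

n = 2000
below = largest_component(n, er_graph(n, 0.5))  # below the critical average degree of 1
above = largest_component(n, er_graph(n, 2.0))  # above it
print(f"largest cluster: {below / n:.1%} of nodes below, {above / n:.1%} above")
```

Below the threshold the biggest cluster is a sliver of the network; above it, a single giant cluster swallows most nodes, just as the quoted passage describes.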
seminal 1959 paper of Erdős and Rényi to bookmark
“On Random Graphs. I” (PDF). Publicationes Mathematicae. 6: 290–297.
In Így írtok ti, or This Is How You Write, Frigyes Karinthy
But there is one story, entitled “Láncszemek,” or “Chains,” that deserves our attention
Karinthy’s 1929 insight that people are linked by at most five links was the first published appearance of the concept we know today as “six degrees of separation.”
He [Stanley Milgram] did not seem to have been aware of the body of work on networks in graph theory and most likely had never heard of Erdős and Rényi. He is known to have been influenced by the work of Ithiel de Sola Pool of MIT and Manfred Kochen of IBM, who circulated manuscripts about the small world problem within a group of colleagues for decades without publishing them, because they felt they had never “broken the back of the problem.”
Think about the small world problem of published research.
We don’t have a social search engine so we may never know the real number with total certainty.
Facebook has since narrowed this down. As of 2016 it’s 3.57 degrees of separation.
social network
Google the n-gram of this phrase to see its incidence over time. How frequent was it when this book was written? It was apparently a thing beginning in the mid-1960s.
Mark Newman, a physicist at the Santa Fe Institute… had already written several papers on small worlds that are now considered classics.
Therefore, Watts and Strogatz’s most important discovery is that clustering does not stop at the boundary of social networks.
To explain the ubiquity of clustering in most real networks, Watts and Strogatz offered an alternative to Erdős and Rényi’s random network model in their 1998 study published in Nature.
Watts, D. J.; Strogatz, S. H. (1998). “Collective dynamics of ‘small-world’ networks” (PDF). Nature. 393 (6684): 440–442. Bibcode:1998Natur.393..440W. doi:10.1038/30918. PMID 9623998
The most intriguing result of our Web-mapping project was the complete absence of democracy, fairness, and egalitarian values on the Web. We learned that the topology of the Web prevents us from seeing anything but a mere handful of the billion documents out there.
Do Facebook and Twitter subvert some of this effect? What types of possible solutions could this give to the IndieWeb for social networking models with healthier results?
On the Web, the measure of visibility is the number of links. The more incoming links pointing to your Webpage, the more visible it is. […] Therefore, the likelihood that a typical document links to your Webpage is close to zero.
The hubs are the strongest argument against the utopian vision of an egalitarian cyberspace. […] In a collective manner, we somehow create hubs, Websites to which everyone links. They are very easy to find, no matter where you are on the Web. Compared to these hubs, the rest of the Web is invisible.
Every four years the United States inaugurates a new social hub–the president.
But every time an 80/20 rule truly applies, you can bet that there is a power law behind it. […] Power laws rarely emerge in systems completely dominated by a roll of the dice. Physicists have learned that most often they signal a transition from disorder to order.
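As a quick illustration of the 80/20–power-law connection, here is a hedged sketch: sampling from a Pareto distribution with exponent ≈ 1.16 (the shape for which the classic 80/20 split emerges) and measuring the share held by the top fifth. The sample size and seed are arbitrary:

```python
import random

def pareto_sample(n, alpha, seed=42):
    """Draw n values from a Pareto distribution with exponent alpha,
    via inverse-CDF sampling (minimum value 1)."""
    rng = random.Random(seed)
    return [(1 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

# alpha = 1.16 is roughly the shape that yields the classic 80/20 split
values = sorted(pareto_sample(100_000, 1.16), reverse=True)
top_share = sum(values[: len(values) // 5]) / sum(values)
print(f"top 20% of values hold {top_share:.0%} of the total")
```

With a heavy-tailed sample like this the top fifth reliably holds the lion’s share; an exponential or Poisson sample under the same test would come nowhere close.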
If the transition from disorder to order is the case, then what is the order imposed by earthquakes, which apparently follow a power law distribution?
Leo Kadanoff, a physicist at the University of Illinois at Urbana, had a sudden insight: In the vicinity of the critical point we need to stop viewing atoms separately. Rather, they should be considered communities that act in unison. Atoms must be replaced by boxes of atoms such that within each box all atoms behave as one.
#phase transitions
Kenneth Wilson […] submitted simultaneously on June 2, 1971, and published in November of the same year by Physical Review B, turned statistical physics around. They proposed an elegant and all-encompassing theory of phase transitions. Wilson took the scaling ideas developed by Kadanoff and molded them into a powerful theory called renormalization. The starting point of his approach was scale invariance: He assumed that in the vicinity of the critical point the laws of physics applied in an identical manner at all scales, from single atoms to boxes containing millions of identical atoms acting in unison. By giving rigorous mathematical foundation to scale invariance, his theory spat out power laws each time he approached the critical point, the place where disorder makes room for order.
The random model of Erdős and Rényi rests on two simple and often disregarded assumptions. First, we start with an inventory of nodes. Having all the nodes available from the beginning, we assume that the number of nodes is fixed and remains unchanged throughout the network’s life. Second, all nodes are equivalent. Unable to distinguish between the nodes, we link them randomly to each other. These assumptions were unquestioned in over forty years of network research.
Both the Erdős-Rényi and Watts-Strogatz models assumed that we have a fixed number of nodes that are wired together in some clever way. The networks generated by these models are therefore static, meaning that the number of nodes remains unchanged during the network’s life. In contrast, our examples suggested that for real networks the static hypothesis is not appropriate. Instead, we should incorporate growth into our network models.
It demonstrated, however, that growth alone cannot explain the emergence of power laws.
They are hubs. The better known they are, the more links point to them. The more links they attract, the easier it is to find them on the Web and so the more familiar we are with them. […] The bottom line is that when deciding where to link on the Web, we follow preferential attachment: When choosing between two pages, one with twice as many links as the other, about twice as many people link to the more connected page. While our individual choices are highly unpredictable, as a group we follow strict patterns.
The model is very simple, as growth and preferential attachment lead to an algorithm defined by two straightforward rules:
A. Growth: For each given period of time we add a new node to the network. This step underscores the fact that networks are assembled one node at a time.
B. Preferential attachment: We assume that each new node connects to the existing nodes with two links. The probability that it will choose a given node is proportional to the number of links the chosen node has. That is, given the choice between two nodes, one with twice as many links as the other, it is twice as likely that the new node will connect to the more connected node.
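The two rules above translate almost directly into code. A sketch of the scale-free (Barabási–Albert) model, using the standard trick of sampling from a list that repeats each node once per link endpoint, which makes a uniform draw exactly degree-proportional (the network size and seed are my own choices):

```python
import random

def barabasi_albert(n, m=2, seed=0):
    """Grow a scale-free network: each new node attaches m links,
    choosing targets with probability proportional to their degree."""
    rng = random.Random(seed)
    edges, pool = [], []
    # Start from a small fully connected core of m + 1 nodes.
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.append((i, j))
            pool += [i, j]
    for new in range(m + 1, n):
        # A uniform draw from the pool realizes preferential attachment,
        # because each node appears once per link it already has.
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(pool))
        for t in targets:
            edges.append((new, t))
            pool += [new, t]
    return edges

edges = barabasi_albert(5000)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
degs = sorted(degree.values(), reverse=True)
print("max degree:", degs[0], "median degree:", degs[len(degs) // 2])
```

The resulting degree sequence has the hallmark of a scale-free topology: a typical node keeps its original two or three links while a few early hubs accumulate dozens.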
The how and why remain for each area of application though.
In Hollywood, 94 percent of links are internal, formed when two established actors work together for the first time.
These shifts in thinking created a set of opposites: static versus growing, random versus scale-free, structure versus evolution.
[…] Does the presence of power laws imply that real networks are the result of a phase transition from disorder to order? The answer we’ve arrived at is simple: Networks are not en route from a random to an ordered state. Neither are they at the edge of randomness and chaos. Rather, the scale-free topology is evidence of organizing principles acting at each stage of the network formation process. There is little mystery here, since growth and preferential attachment can explain the basic features of the networks seen in nature. No matter how large and complex a network becomes, as long as preferential attachment and growth are present it will maintain its hub-dominated scale-free topology.
The introduction of fitness does not eliminate growth and preferential attachment, the two basic mechanisms governing network evolution. It changes, however, what is considered attractive in a competitive environment. In the scale-free model, we assumed that a node’s attractiveness was determined solely by its number of links. In a competitive environment, fitness also plays a role: Nodes with higher fitness are linked to more frequently. A simple way to incorporate fitness into the scale-free model is to assume that preferential attachment is driven by the product of the node’s fitness and the number of links it has. Each new node decides where to link by comparing the fitness connectivity product of all available nodes and linking with a higher probability to those that have a higher product and therefore are more attractive.
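A sketch of the fitness model as described: attachment probability proportional to the product of fitness and degree. For simplicity this toy version adds one link per new node and draws fitness uniformly; both choices are my assumptions, not the book’s calibration:

```python
import random

def fitness_network(n, seed=1):
    """Grow a network where a new node links to an existing node with
    probability proportional to (fitness x degree)."""
    rng = random.Random(seed)
    fitness = [rng.random() + 0.1 for _ in range(n)]  # shifted to avoid zero fitness
    degree = [0] * n
    degree[0] = degree[1] = 1  # seed network: a single link
    for new in range(2, n):
        # Roulette-wheel selection over the fitness-connectivity products.
        weights = [fitness[i] * degree[i] for i in range(new)]
        r = rng.random() * sum(weights)
        acc, target = 0.0, new - 1
        for i, w in enumerate(weights):
            acc += w
            if acc >= r:
                target = i
                break
        degree[target] += 1
        degree[new] = 1
    return fitness, degree

n = 2000
fitness, degree = fitness_network(n)
hub = max(range(n), key=lambda i: degree[i])
print("biggest hub:", hub, "degree:", degree[hub], "fitness:", round(fitness[hub], 2))
```

Running this with different seeds shows the point of the passage: the biggest hub is usually an early node with high fitness, but a sufficiently fit latecomer can still overtake the early birds.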
Bianconi’s calculations first confirmed our suspicion that in the presence of fitness the early bird is not necessarily the winner. Rather, fitness is in the driver’s seat, making or breaking the hubs.
But there was indeed a precise mathematical mapping between the fitness model and a Bose gas. According to this mapping, each node in the network corresponds to an energy level in the Bose gas.
…in some networks, the winner can take all. Just as in a Bose-Einstein condensate all particles crowd into the lowest energy level, leaving the rest of the energy levels unpopulated, in some networks the fittest node could theoretically grab all the links, leaving none for the rest of the nodes. The winner takes all.
But even though each system, from the Web to Hollywood, has a unique fitness distribution, Bianconi’s calculation indicated that in terms of topology all networks fall into one of only two possible categories. […] The first category includes all networks in which, despite the fierce competition for links, the scale-free topology survives. These networks display a fit-get-rich behavior, meaning that the fittest node will inevitably grow to become the biggest hub. The winner’s lead is never significant, however. The largest hub is closely followed by a smaller one, which acquires almost as many links as the fittest node. At any moment we have a hierarchy of nodes whose degree distribution follows a power law. In most complex networks, the power laws and the fight for links thus are not antagonistic but can coexist peacefully.
In […] the second category, the winner takes all, meaning that the fittest node grabs all the links, leaving very little for the rest of the nodes. Such networks develop a star topology. […] A winner-takes-all network is not scale-free.
…the western blackout highlighted an often ignored property of complex networks: vulnerability due to interconnectivity
Yet, if the number of removed nodes reaches a critical point, the system abruptly breaks into tiny unconnected islands.
Computer simulations we performed on networks generated by the scale-free model indicated that a significant fraction of nodes can be randomly removed from any scale-free network without its breaking apart.
…percolation theory, the field of physics that developed a set of tools that now are widely used in studies of random networks.
…they set out to calculate the fraction of nodes that must be removed from an arbitrarily chosen network, random or scale-free, to break it into pieces. On one hand, their calculation accounted for the well-known result that random networks fall apart after a critical number of nodes have been removed. On the other hand, they found that for scale-free networks the critical threshold disappears in cases where the degree exponent is smaller than or equal to three.
Disable a few of the hubs and a scale-free network will fall to pieces in no time.
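This asymmetry, robust to random failure yet fragile to an attack on the hubs, can be checked directly. Below is a sketch that grows a preferential-attachment network and then compares the largest surviving cluster after removing 5 percent of nodes at random versus the top 5 percent of hubs. All parameters are illustrative choices of mine:

```python
import random

def pref_attach_edges(n, m=2, seed=0):
    """Quick preferential-attachment network (degree-proportional targets)."""
    rng = random.Random(seed)
    pool, edges = [0, 1], [(0, 1)]
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(pool))
        for t in targets:
            edges.append((new, t))
            pool += [new, t]
    return edges

def giant_fraction(n, edges, removed):
    """Fraction of surviving nodes that sit in the largest component."""
    alive = [v for v in range(n) if v not in removed]
    adj = {v: [] for v in alive}
    for a, b in edges:
        if a not in removed and b not in removed:
            adj[a].append(b)
            adj[b].append(a)
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        stack, size = [start], 0
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best / len(alive)

n = 2000
edges = pref_attach_edges(n)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

k = n // 20  # remove 5% of the nodes
random.seed(1)
frac_random = giant_fraction(n, edges, set(random.sample(range(n), k)))
frac_attack = giant_fraction(n, edges, set(sorted(degree, key=degree.get, reverse=True)[:k]))
print(f"random failure: {frac_random:.1%} intact; hub attack: {frac_attack:.1%} intact")
```

Random removal barely dents the giant cluster, while targeting the hubs visibly fragments it, which is exactly the vulnerability the passage warns about.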
If, however, a drug or an illness shuts down the genes encoding the most connected proteins, the cell will not survive.
Obviously, the likelihood that a local failure will handicap the whole system is much higher if we perturb the most-connected nodes. This was supported by the findings of Duncan Watts, from Columbia University, who investigated a model designed to capture the generic features of cascading failures, such as power outages, and the opposite phenomenon, the cascading popularity of books, movies, and albums, which can be described within the same framework.
If a new product passes the crucial test of the innovators, based on their recommendation, the early adopters will pick it up.
What, if any, role is played by the social network in the spread of a virus or an innovation?
In 1954, Elihu Katz, a researcher at the Bureau of Applied Social Research at Columbia University, circulated a proposal to study the effect of social ties on behavior.
When it came to the spread of tetracycline, the doctors named by three or more other doctors as friends were three times more likely to adopt the new drug than those who had not been named by anybody.
Hubs, often referred to in marketing as “opinion leaders,” “power users,” or “influencers,” are individuals who communicate with more people about a certain product than does the average person.
Aiming to explain the disappearance of some fads and viruses and the spread of others, social scientists and epidemiologists developed a very useful tool called the threshold model.
any relation to Granovetter?
…critical threshold, a quantity determined by the properties of the network in which the innovation spreads.
For decades, a simple but powerful paradigm dominated our treatment of diffusion problems. If we wanted to estimate the probability that an innovation would spread, we needed only to know its spreading rate and the critical threshold it faced. Nobody questioned this paradigm. Recently, however, we have learned that some viruses and innovations are oblivious to it.
On the Internet, computers are not connected to each other randomly.
In scale-free networks the epidemic threshold miraculously vanished!
Hubs are among the first infected thanks to their numerous sexual contacts. Once infected, they quickly infect hundreds of others. If our sex web formed a homogeneous, random network, AIDS might have died out long ago. The scale-free topology at AIDS’s disposal allowed the virus to spread and persist.
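In the mean-field SIS picture, the epidemic threshold is λ_c = ⟨k⟩/⟨k²⟩, and it is the divergence of the second moment ⟨k²⟩ in scale-free networks that makes the threshold vanish. A sketch comparing a simulated scale-free degree sequence against a Poisson (random) network with the same average degree (sizes and seed are mine):

```python
import random

def ba_degree_sequence(n, m=2, seed=0):
    """Degree sequence of a preferential-attachment (scale-free) network."""
    rng = random.Random(seed)
    pool, degree = [0, 1], {0: 1, 1: 1}
    for new in range(2, n):
        targets = set()
        while len(targets) < min(m, new):
            targets.add(rng.choice(pool))
        degree[new] = 0
        for t in targets:
            degree[t] += 1
            degree[new] += 1
            pool += [new, t]
    return list(degree.values())

degs = ba_degree_sequence(10_000)
avg_k = sum(degs) / len(degs)
avg_k2 = sum(k * k for k in degs) / len(degs)

# Mean-field SIS epidemic threshold: lambda_c = <k> / <k^2>
sf_threshold = avg_k / avg_k2
# A Poisson (random) network with the same average degree has <k^2> = <k>^2 + <k>
er_threshold = avg_k / (avg_k ** 2 + avg_k)
print(f"scale-free threshold {sf_threshold:.3f} vs random-network threshold {er_threshold:.3f}")
```

The hubs inflate ⟨k²⟩, pushing the scale-free threshold well below the random-network value, and as the network grows the threshold keeps shrinking toward zero.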
As we’ve established, hubs play a key role in these processes. Their unique role suggests a bold but cruel solution: As long as resources are finite we should treat only the hubs. That is, when a treatment exists but there is not enough money to offer it to everybody who needs it, we should primarily give it to the hubs. (Pastor-Satorras and Vespignani; and Zoltan Dezso)
Are we prepared to abandon the less connected patients for the benefit of the population at large?
They [Michalis Faloutsos, Petros Faloutsos, and Christos Faloutsos] found that the connectivity distribution of the Internet routers follows a power law. In their seminal paper “On Power-Law Relationships of the Internet Topology” they showed that the Internet […] is a scale-free network.
Routers offering more bandwidth likely have more links as well. […] This simple effect is a possible source of preferential attachment. We do not know for sure whether it is the only one, but preferential attachment is unquestionably present on the Internet.
After many discussions and tutorials on how computers communicate, a simple but controversial idea emerged: parasitic computing.
Starting from any page (on the Internet), we can reach only about 24 percent of all documents.
If you want to go from A to D, you can start from node A, then go to node B, which has a link to node C, which points to D. But you can’t make a round-trip.
Not necessarily the case with bidirectional webmentions.
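The one-way round-trip problem, and how bidirectional links (webmention-style) change it, in a tiny sketch; the page names A–D are hypothetical stand-ins for the nodes in the quoted example:

```python
from collections import deque

# One-way link structure: A -> B -> C -> D
links = {"A": ["B"], "B": ["C"], "C": ["D"], "D": []}

def reachable(graph, start):
    """All pages you can reach from `start` by following links forward (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(links, "A")))  # every page is downstream of A
print(sorted(reachable(links, "D")))  # but D reaches nothing else: no round trip

# Make every link bidirectional, as a webmention-style reply link would
undirected = {k: list(v) for k, v in links.items()}
for src, targets in links.items():
    for t in targets:
        undirected[t].append(src)
print(sorted(reachable(undirected, "D")))  # now D can reach everything
```

With directed links, reachability is asymmetric and most of the graph can be invisible from a given start page; symmetric links dissolve that bow-tie structure entirely.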
[Cass] Sunstein fears that by limiting access to conflicting viewpoints, the emerging online universe encourages segregation and social fragmentation. Indeed, the mechanisms behind social and political isolation on the Web are self-reinforcing.
Looks like we’ve known this for a very long time! Sadly it’s coming to a head in the political space of 2016 onward.
Communities are essential components of human social history. Granovetter’s circles of friends, the elementary building blocks of communities, pointed to this fact. […]
early indications that Facebook could be a thing…
One reason is that there are no sharp boundaries between various communities. Indeed, the same Website can belong simultaneously to different groups. For example, a physicist’s Webpage might mix links to physics, music, and mountain climbing, combining professional interests with hobbies. In which community should we place such a page? The size of communities also varies a lot. For example, while the community interested in “cryptography” is small and relatively easy to locate, the one consisting of devotees of “English literature” is much harder to identify and fragmented into many subcommunities ranging from Shakespeare enthusiasts to Kurt Vonnegut fans.
Searching for this type of community is an NP-complete problem. This section may be of interest to Brad Enslen and Kicks Condor. Cross reference research suggested by Gary Flake, Steve Lawrence, and Lee Giles from NEC.
Such differences in the structure of competing communities have important consequences for their ability to market and organize themselves for a common cause.
He continues to talk about how the pro-life movement is better connected and therefore better equipped to fight against the pro-choice movement.
Code–or software–is the bricks and mortar of cyberspace. The architecture is what we build, using the code as building blocks. The great architects of human history, from Michelangelo to Frank Lloyd Wright, demonstrated that, whereas raw materials are limited, the architectural possibilities are not. Code can curtail behavior, and it does influence architecture. It does not uniquely determine it, however.
Added on November 3, 2018 at 5:26 PM
Yes, we do have free speech on the Web. Chances are, however, that our voices are too weak to be heard. Pages with only a few incoming links are impossible to find by casual browsing. Instead, over and over we are steered toward the hubs. It is tempting to believe that robots can avoid this popularity-driven trap.
Facebook and Twitter applications? Algorithms help to amplify “unheard” voices to some extent, but gamifying the reading can also get people to read more (crap) than they were reading before because it’s so easy.
Your ability to find my Webpage is determined by one factor only: its position on the Web.
Facebook takes advantage of this with their algorithm
Thus the Web’s large-scale topology–that is, its true architecture–enforces more severe limitations on our behavior and visibility on the Web than government or industry could ever achieve by tinkering with the code. Regulations come and go, but the topology and the fundamental natural laws governing it are time invariant. As long as we continue to delegate to the individual the choice of where to link, we will not be able to significantly alter the Web’s large-scale topology, and we will have to live with the consequences.
hmmm?
After selling Alexa to Amazon.com in 1999
Brewster Kahle’s Alexa Internet company is apparently the root of the Amazon Alexa?
To return to our car analogy, it is…
Where before? I don’t recall this at all. Did it get removed from the text?
ref somewhere about here… personalized medicine
After researching the available databases, we settled on a new one, run by the Argonne National Laboratory outside Chicago, nicknamed “What Is There?” which compiled the metabolic network of forty-three diverse organisms.
…for the vast majority of organisms the ten most-connected molecules are the same. Adenosine triphosphate (ATP) is almost always the biggest hub, followed closely by adenosine diphosphate (ADP) and water.
A key prediction of the scale-free model is that nodes with a large number of links are those that have been added early to the network. In terms of metabolism this would imply that the most connected molecules should be the oldest ones within the cell. […] Therefore, the first mover advantage seems to pervade the emergence of life as well.
Comparing the metabolic network of all forty-three organisms, we found that only 4 percent of the molecules appear in all of them.
Developed by Stanley Fields in 1989, the two-hybrid method offers a relatively rapid semiautomated technique for detecting protein-protein interactions.
They [the results of work by Oltvai, Jeong, Barabasi, Mason (2000)] demonstrated that the protein interaction network has a scale-free topology.
…the cell’s scale-free topology is a result of a common mistake cells make while reproducing.
In short, it is now clear that the number of genes is not proportional to our perceived complexity.
We have learned that a sparse network of a few powerful directors controls all major appointments in Fortune 1000 companies; […]
Regardless of industry and scope, the network behind all twentieth century corporations has the same structure: It is a tree, where the CEO occupies the root and the bifurcating branches represent the increasingly specialized and nonoverlapping tasks of lower-level managers and workers. Responsibility decays as you move down the branches, ending with the drone executors of orders conceived at the roots.
Only for completely top-down organizations, but what about bottom-up or middle-out?
We have gotten to the point that we can produce anything that we can dream of. The expensive question now is, what should that be?
It is a fundamental rethinking of how to respond to the new business environment in the postindustrial era, dubbed the information economy.
This is likely late, but certainly an early instance of “information economy” in popular literature.
Therefore, companies aiming to compete in a fast-moving marketplace are shifting from a static and optimized tree into a dynamic and evolving web, offering a more malleable, flexible command structure.
While 79 percent of directors serve on only one board, 14 percent serve on two, and about 7 percent serve on three or more.
Indeed, the number of companies that entered in partnership with exactly k other institutions, representing the number of links they have within the network, followed a power law, the signature of a scale-free topology.
Makes me wonder if the 2008 economic collapse could have been predicted by “weak” links?
As research, innovation, product development, and marketing become more and more specialized and divorced from each other, we are converging to a network economy in which strategic alliances and partnerships are the means for survival in all industries.
This is troubling in the current political climate where there is little if any trust or truth being spread around by the leader of the Republican party.
As Walter W. Powell writes in Neither Market nor Hierarchy: Network Forms of Organization, “in markets the standard strategy is to drive the hardest possible bargain on the immediate exchange. In networks, the preferred option is often creating indebtedness and reliance over the long haul.” Therefore, in a network economy, buyers and suppliers are not competitors but partners. The relationship between them is often very long lasting and stable.
Trump vs. Trump
The stability of these links allows companies to concentrate on their core business. If these partnerships break down, the effects can be severe. Most of the time failures handicap only the partners of the broken link. Occasionally, however, they send ripples through the whole economy. As we will see next, macroeconomic failures can throw entire nations into deep financial disarray, while failures in corporate partnerships can severely damage the jewels of the new economy.
In some sense this predicts the effects of the 2008 downturn.
outsourcing
early use of the word?
A me attitude, where the company’s immediate financial balance is the only factor, limits network thinking. Not understanding how the actions of one node affect other nodes easily cripples whole segments of the network.
Hierarchical thinking does not fit a network economy.
We must help eliminate the need and desire of the nodes to form links to terrorist organizations by offering them a chance to belong to more constructive and meaningful webs.
And for poverty and gangs as well as immigration.
Their work has a powerful philosophy: “revelation through concealment.” By hiding the details they allow us to focus entirely on the form. The wrapping sharpens our vision, making us more aware and observant, turning ordinary objects into monumental sculptures and architectural pieces.
not too dissimilar to the font I saw today for memory improvement
🔖 [1810.05095] The Statistical Physics of Real-World Networks | arXiv
Statistical physics is the natural framework to model complex networks. In the last twenty years, it has brought novel physical insights on a variety of emergent phenomena, such as self-organisation, scale invariance, mixed distributions and ensemble non-equivalence, which cannot be deduced from the behaviour of the individual constituents. At the same time, thanks to its deep connection with information theory, statistical physics and the principle of maximum entropy have led to the definition of null models reproducing some features of empirical networks, but otherwise as random as possible. We review here the statistical physics approach for complex networks and the null models for the various physical problems, focusing in particular on the analytic frameworks reproducing the local features of the network. We show how these models have been used to detect statistically significant and predictive structural patterns in real-world networks, as well as to reconstruct the network structure in case of incomplete information. We further survey the statistical physics frameworks that reproduce more complex, semi-local network features using Markov chain Monte Carlo sampling, and the models of generalised network structures such as multiplex networks, interacting networks and simplicial complexes.
Comments: To appear on Nature Reviews Physics. The revised accepted version will be posted 6 months after publication
📖 Read pages 93-112 of 288 of Linked: The New Science Of Networks by Albert-László Barabási
An interesting overlap of Bose condensation mathematics and physics into network theory.
SFI and ASU to offer online M.S. in Complexity | Complexity Explorer
SFI and Arizona State University soon will offer the world’s first comprehensive online master’s degree in complexity science. It will be the Institute’s first graduate degree program, a vision that dates to SFI’s founding. “With technology, a growing recognition of the value of online education, widespread acceptance of complexity science, and in partnership with ASU, we are now able to offer the world a degree in the field we helped invent,” says SFI President David Krakauer, “and it will be taught by the very people who built it into a legitimate domain of scholarship.”
A breakdown of purchasing habits shows where science books fall on the political spectrum.
👓 Chris Aldrich is reading “How to Succeed in the Networked World”
The world’s connections have become more important than its divisions. To reap the rewards and avoid the pitfalls of this new order, the United States needs to adopt a grand strategy based on three pillars: open societies, open governments, and an open international system.
This article also definitely seems to take a broader historical approach to the general topics and is nearly close enough in philosophy that I might even begin considering it as a policy case with a Big History point of view.
Highly recommend.
Highlights, Quotes, & Marginalia
Think of a standard map of the world, showing the borders and capitals of the world’s 190-odd countries. That is the chessboard view. Now think of a map of the world at night, with the lit-up bursts of cities and the dark swaths of wilderness. Those corridors of light mark roads, cars, houses, and offices; they mark the networks of human relationships, where families and workers and travelers come together. That is the web view. It is a map not of separation, marking off boundaries of sovereign power, but of connection.
…the Westphalian world order mandated the sovereign equality of states not as an end in itself but as a means to protect the subjects of those states—the people.
The people must come first. Where they do not, sooner or later, they will overthrow their governments.
Open societies, open governments, and an open international system are risky propositions. But they are humankind’s best hope for harnessing the power not only of states but also of businesses, universities, civic organizations, and citizens to address the planetary problems that now touch us all.
…when a state abrogated its responsibility to protect the basic rights of its people, other states had a responsibility to protect those citizens, if necessary through military intervention.
But human rights themselves became politically polarized during the Cold War, with the West championing civil and political rights; the East championing economic, social, and cultural rights; and both sides tending to ignore violations in their client states.
The institutions built after World War II remain important repositories of legitimacy and authority. But they need to become the hubs of a flatter, faster, more flexible system, one that operates at the level of citizens as well as states.
U.S. policymakers should think in terms of translating chessboard alliances into hubs of connectedness and capability.
According to systems theory, the level of organization in a closed system can only stay the same or decrease. In open systems, by contrast, the level of organization can increase in response to new inputs and disruptions. That means that such a system should be able to ride out the volatility caused by changing power relationships and incorporate new kinds of global networks.
Writing about “connexity” 20 years ago, the British author and political adviser Geoff Mulgan argued that in adapting to permanent interdependence, governments and societies would have to rethink their policies, organizational structures, and conceptions of morality. Constant connectedness, he wrote, would place a premium on “reciprocity, the idea of give and take,” and a spirit of openness, trust, and transparency would underpin a “different way of governing.” Governments would “provide a framework of predictability, but leave space for people to organise themselves in flatter, more reciprocal structures.”
Instead of governing themselves through those who represent them, citizens can partner directly with the government to solve public problems.
…an open international order of the twenty-first century should be anchored in secure and self-reliant societies, in which citizens can participate actively in their own protection and prosperity. The first building block is open societies; the second is open governments.
The self-reliance necessary for open security depends on the ability to self-organize and take action.
The government’s role is to “invest in creating a more resilient nation,” which includes briefing and empowering the public, but more as a partner than a protector.
…much of the civil rights work of this century will entail championing digital rights.
Hard gatekeeping is a strategy of connection, but it calls for division, replacing the physical barriers of the twentieth century with digital ones of the twenty-first.
In this order, states must be waves and particles at the same time.
The legal order of the twenty-first century must be a double order, acknowledging the existence of domestic and international spheres of action and law but seeing the boundary between them as permeable.
In many countries, legislatures and government agencies have begun publishing draft legislation on open-source platforms such as GitHub, enabling their publics to contribute to the revision process.
The declaration’s three major principles are transparency, civic participation, and accountability.
Calculating the Middle Ages?
The project "Complexities and networks in the Medieval Mediterranean and Near East" (COMMED) at the Division for Byzantine Research of the Institute for Medieval Research (IMAFO) of the Austrian Academy of Sciences focuses on the adaptation and development of concepts and tools of network theory and complexity sciences for the analysis of societies, polities and regions in the medieval world in a comparative perspective. Key elements of its methodological and technological toolkit are applied, for instance, in the new project "Mapping medieval conflicts: a digital approach towards political dynamics in the pre-modern period" (MEDCON), which analyses political networks and conflict among power elites across medieval Europe with five case studies from the 12th to 15th century. For one of these case studies on 14th century Byzantium, the explanatory value of this approach is presented in greater detail. The presented results are integrated in a wider comparison of five late medieval polities across Afro-Eurasia (Byzantium, China, England, Hungary and Mamluk Egypt) against the background of the »Late Medieval Crisis« and its political and environmental turmoil. Finally, further perspectives of COMMED are outlined.
Network and Complexity Theory Applied to History
This interesting paper (summary below) applies network and complexity science to history and is sure to be of interest to those working at the intersection of these interdisciplinary studies. In particular, I’d be curious to see more work in this area supporting theses written by scholars like Francis Fukuyama on the development of societal structures. Those interested in the emerging area of Big History are sure to enjoy this type of treatment. I’m also curious how researchers in economics (like Cesar Hidalgo) might make use of available(?) historical data in such related analyses, and whether Dave Harris might consider such an analysis in his ancient Near East work.
Those interested in a synopsis of the paper might find some benefit from an overview from MIT Technology Review: How the New Science of Computational History Is Changing the Study of the Past.
To understand the structure of a large-scale biological, social, or technological network, it can be helpful to decompose the network into smaller subunits or modules. In this article, we develop an information-theoretic foundation for the concept of modularity in networks. We identify the modules of which the network is composed by finding an optimal compression of its topology, capitalizing on regularities in its structure. We explain the advantages of this approach and illustrate them by partitioning a number of real-world and model networks.
https://doi.org/10.1073/pnas.0611034104
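To make the idea of "modules" concrete, here is a minimal, self-contained sketch in Python. It computes Newman's standard modularity score Q for a candidate partition of a toy graph (two triangles joined by a bridge edge). Note this is plain modularity, a simpler stand-in for the paper's compression-based, information-theoretic approach; the function name and toy graph are my own invention for illustration.

```python
# Toy illustration of network modules: score a candidate partition of a
# small graph with Newman modularity Q (NOT the paper's map-equation
# method -- just the classic modularity measure for intuition).

def modularity(edges, partition):
    """Q = sum over communities c of [L_c/m - (d_c/2m)^2],
    where L_c = edges inside c, d_c = total degree of c, m = |edges|."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for community in partition:
        nodes = set(community)
        l_c = sum(1 for u, v in edges if u in nodes and v in nodes)
        d_c = sum(degree[n] for n in nodes)
        q += l_c / m - (d_c / (2 * m)) ** 2
    return q

edges = [(0, 1), (1, 2), (0, 2),   # triangle A
         (3, 4), (4, 5), (3, 5),   # triangle B
         (2, 3)]                   # bridge between the triangles

print(modularity(edges, [{0, 1, 2}, {3, 4, 5}]))   # ~0.357: the natural split
print(modularity(edges, [{0, 1, 2, 3, 4, 5}]))     # 0.0: no structure found
```

The natural two-triangle partition scores well above zero, while lumping everything into one module scores exactly zero, which is the basic intuition behind finding modules by optimizing over partitions.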