Introduction to Galois Theory | Coursera

Bookmarked Introduction to Galois Theory by Ekaterina Amerik (Coursera)
A very beautiful classical theory on field extensions of a certain type (Galois extensions) initiated by Galois in the 19th century. Explains, in particular, why it is not possible to solve an equation of degree 5 or more in the same way as we solve quadratic or cubic equations. You will learn to compute Galois groups and (before that) study the properties of various field extensions. We first shall survey the basic notions and properties of field extensions: algebraic, transcendental, finite field extensions, degree of an extension, algebraic closure, decomposition field of a polynomial. Then we shall do a bit of commutative algebra (finite algebras over a field, base change via tensor product) and apply this to study the notion of separability in some detail. After that we shall discuss Galois extensions and Galois correspondence and give many examples (cyclotomic extensions, finite fields, Kummer extensions, Artin-Schreier extensions, etc.). We shall address the question of solvability of equations by radicals (Abel's theorem). We shall also try to explain the relation to representations and to topological coverings. Finally, we shall briefly discuss extensions of rings (integral elements, norms, traces, etc.) and explain how to use the reduction modulo primes to compute Galois groups.
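For a small taste of the Galois correspondence the course builds toward, here is a standard textbook example (mine, not taken from the course description): the biquadratic field generated by √2 and √3.

```latex
% Gal(Q(sqrt2, sqrt3)/Q) is the Klein four-group; its three proper
% nontrivial subgroups correspond exactly to the three intermediate fields.
\[
\operatorname{Gal}\bigl(\mathbb{Q}(\sqrt{2},\sqrt{3})/\mathbb{Q}\bigr)
  \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z},
\qquad
[\mathbb{Q}(\sqrt{2},\sqrt{3}) : \mathbb{Q}] = 4.
\]
\[
\text{Intermediate fields: } \mathbb{Q}(\sqrt{2}),\quad \mathbb{Q}(\sqrt{3}),\quad \mathbb{Q}(\sqrt{6}).
\]
```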

I’ve been watching MOOCs for several years and this is one of the few I’ve come across that covers some more advanced mathematical topics. I’m curious to see how it turns out and what type of interest/results it returns.

It’s being offered by National Research University – Higher School of Economics (HSE) in Russia.


[1609.02422] What can logic contribute to information theory?

Bookmarked [1609.02422] What can logic contribute to information theory? by David Ellerman (128.84.21.199)
Logical probability theory was developed as a quantitative measure based on Boole's logic of subsets. But information theory was developed into a mature theory by Claude Shannon with no such connection to logic. But a recent development in logic changes this situation. In category theory, the notion of a subset is dual to the notion of a quotient set or partition, and recently the logic of partitions has been developed in a parallel relationship to the Boolean logic of subsets (subset logic is usually mis-specified as the special case of propositional logic). What then is the quantitative measure based on partition logic in the same sense that logical probability theory is based on subset logic? It is a measure of information that is named "logical entropy" in view of that logical basis. This paper develops the notion of logical entropy and the basic notions of the resulting logical information theory. Then an extensive comparison is made with the corresponding notions based on Shannon entropy.
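For a concrete sense of the quantity Ellerman studies: the logical entropy of a probability distribution is 1 − Σ pᵢ², the probability that two independent draws fall in different blocks, in contrast to Shannon entropy. A minimal Python sketch (my illustration, not code from the paper):

```python
import math

def logical_entropy(p):
    """Logical entropy h(p) = 1 - sum(p_i^2): the chance that two
    independent draws from p land in different outcomes."""
    return 1.0 - sum(pi * pi for pi in p)

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum(p_i * log2(p_i)), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
print(logical_entropy(p))  # 0.625
print(shannon_entropy(p))  # 1.5
```

Both measures are zero on a point mass and maximal on the uniform distribution; the paper compares them systematically.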

Ellerman is visiting at UC Riverside at the moment. Given the information theory and category theory overlap, I’m curious whether he’s working with John Carlos Baez, or whether Baez is aware of this work.

Based on a cursory look at his website(s), I’m going to have to start following more of this work.


Randomness And Complexity, from Leibniz To Chaitin | World Scientific Publishing

Bookmarked Randomness And Complexity, from Leibniz To Chaitin (amzn.to)
The book is a collection of papers written by a selection of eminent authors from around the world in honour of Gregory Chaitin's 60th birthday. This is a unique volume including technical contributions, philosophical papers and essays. Hardcover: 468 pages; Publisher: World Scientific Publishing Company (October 18, 2007); ISBN: 9789812770820

The Science of the Oven (Arts and Traditions of the Table: Perspectives on Culinary History)

Bookmarked The Science of the Oven (Arts and Traditions of the Table: Perspectives on Culinary History) by Hervé This (Amazon.com)
The Science of the Oven by Hervé This (Columbia University Press, 2009). Cooking. Hardcover, 206 pages. In my personal library.

Mayonnaise "takes" when a series of liquids form a semisolid consistency. Eggs, a liquid, become solid as they are heated, whereas, under the same conditions, solids melt. When meat is roasted, its surface browns and it acquires taste and texture. What accounts for these extraordinary transformations? The answer: chemistry and physics. With his trademark eloquence and wit, Hervé This launches a wry investigation into the chemical art of cooking. Unraveling the science behind common culinary technique and practice, Hervé This breaks food down to its molecular components and matches them to cooking's chemical reactions. He translates the complex processes of the oven into everyday knowledge for professional chefs and casual cooks, and he demystifies the meaning of taste and the making of flavor. He describes the properties of liquids, salts, sugars, oils, and fats and defines the principles of culinary practice, which endow food with sensual as well as nutritional value.

For fans of Hervé This's popular volumes and for those new to his celebrated approach, The Science of the Oven expertly expands the possibilities of the kitchen, fusing the physiology of taste with the molecular structure of bodies and food.



NIMBioS Tutorial: Evolutionary Quantitative Genetics 2016

Bookmarked NIMBioS Tutorial: Evolutionary Quantitative Genetics 2016 by NIMBioS (nimbios.org)
This tutorial will review the basics of theory in the field of evolutionary quantitative genetics and its connections to evolution observed at various time scales. Quantitative genetics deals with the inheritance of measurements of traits that are affected by many genes. Quantitative genetic theory for natural populations was developed considerably in the period from 1970 to 1990 and up to the present, and it has been applied to a wide range of phenomena including the evolution of differences between the sexes, sexual preferences, life history traits, plasticity of traits, as well as the evolution of body size and other morphological measurements. Textbooks have not kept pace with these developments, and currently few universities offer courses in this subject aimed at evolutionary biologists. There is a need for evolutionary biologists to understand this field because of the ability to collect large amounts of data by computer, the development of statistical methods for changes of traits on evolutionary trees and for changes in a single species through time, and the realization that quantitative characters will not soon be fully explained by genomics. This tutorial aims to fill this need by reviewing basic aspects of theory and illustrating how that theory can be tested with data, both from single species and with multiple-species phylogenies. Participants will learn to use R, an open-source statistical programming language, to build and test evolutionary models. The intended participants for this tutorial are graduate students, postdocs, and junior faculty members in evolutionary biology.
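The tutorial's own models aren't spelled out in the blurb, but the flavor of the quantitative-genetic theory it reviews can be hinted at with the classic breeder's equation, R = h²S; a toy Python sketch (all numbers invented for illustration):

```python
# Breeder's equation from quantitative genetics: R = h^2 * S, where S is the
# selection differential (mean trait value of selected parents minus the
# population mean) and h^2 is the narrow-sense heritability.

def selection_differential(selected_mean, population_mean):
    """How far above the population mean the selected parents are."""
    return selected_mean - population_mean

def response_to_selection(h2, s):
    """Expected shift in the offspring generation's mean trait value."""
    return h2 * s

s = selection_differential(55.0, 50.0)  # parents average 5 units above the mean
r = response_to_selection(0.4, s)       # assume heritability h^2 = 0.4
print(r)  # 2.0
```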


Network Science by Albert-László Barabási

Bookmarked Network Science by Albert-László Barabási (Cambridge University Press)

I ran across a link to this textbook by way of a standing Google alert, and was excited to check it out. I was immediately disappointed to think that I would have to wait another month and change for the physical textbook to be released, but made my pre-order directly. Then with a bit of digging around, I realized that individual chapters are available immediately to quench my thirst until the physical text is printed next month.

The textbook is available for purchase in September 2016 from Cambridge University Press. Pre-order now on Amazon.com.


Disconnected, Fragmented, or United? A Trans-disciplinary Review of Network Science

Bookmarked Disconnected, Fragmented, or United? A Trans-disciplinary Review of Network Science by César A. Hidalgo (Applied Network Science | SpringerLink)

Abstract

For decades the study of networks has been divided between the efforts of social scientists and natural scientists, two groups of scholars who often do not see eye to eye. In this review I present an effort to mutually translate the work conducted by scholars from both of these academic fronts, hoping to continue to unify what has become a diverging body of literature. I argue that social and natural scientists fail to see eye to eye because they have diverging academic goals. Social scientists focus on explaining how context specific social and economic mechanisms drive the structure of networks and on how networks shape social and economic outcomes. By contrast, natural scientists focus primarily on modeling network characteristics that are independent of context, since their focus is to identify universal characteristics of systems instead of context specific mechanisms. In the following pages I discuss the differences between both of these literatures by summarizing the parallel theories advanced to explain link formation and the applications used by scholars in each field to justify their approach to network science. I conclude by providing an outlook on how these literatures can be further unified.


Peter Webb’s A Course in Finite Group Representation Theory

Bookmarked A Course in Finite Group Representation Theory by Peter Webb (math.umn.edu)
Download a pre-publication version of the book which will be published by Cambridge University Press. The book arises from notes of courses taught at the second year graduate level at the University of Minnesota and is suitable to accompany study at that level.


Ten Simple Rules for Taking Advantage of Git and GitHub

Bookmarked Ten Simple Rules for Taking Advantage of Git and GitHub (journals.plos.org)
Bioinformatics is a broad discipline in which one common denominator is the need to produce and/or use software that can be applied to biological data in different contexts. To enable and ensure the replicability and traceability of scientific claims, it is essential that the scientific publication, the corresponding datasets, and the data analysis are made publicly available [1,2]. All software used for the analysis should be either carefully documented (e.g., for commercial software) or, better yet, openly shared and directly accessible to others [3,4]. The rise of openly available software and source code alongside concomitant collaborative development is facilitated by the existence of several code repository services such as SourceForge, Bitbucket, GitLab, and GitHub, among others. These resources are also essential for collaborative software projects because they enable the organization and sharing of programming tasks between different remote contributors. Here, we introduce the main features of GitHub, a popular web-based platform that offers a free and integrated environment for hosting the source code, documentation, and project-related web content for open-source projects. GitHub also offers paid plans for private repositories (see Box 1) for individuals and businesses as well as free plans including private repositories for research and educational use.

Human Collective Memory from Biographical Data

Bookmarked Estimating technological breaks in the size and composition of human collective memory from biographical data (arxiv.org)


Design and Control of Self-organizing Systems

Bookmarked Design and Control of Self-organizing Systems by Carlos Gershenson (scifunam.fisica.unam.mx)

UNAM Mexico City has an available free download of Carlos Gershenson’s 2007 text.

Complex systems are usually difficult to design and control. There are several particular methods for coping with complexity, but there is no general approach to build complex systems. In this book I propose a methodology to aid engineers in the design and control of complex systems. This is based on the description of systems as self-organizing. Starting from the agent metaphor, the methodology proposes a conceptual framework and a series of steps to follow to find proper mechanisms that will promote elements to find solutions by actively interacting among themselves.


Calculating the Middle Ages?

Bookmarked Calculating the Middle Ages? The Project "Complexities and Networks in the Medieval Mediterranean and Near East" (COMMED) [1606.03433] (arxiv.org)
The project "Complexities and networks in the Medieval Mediterranean and Near East" (COMMED) at the Division for Byzantine Research of the Institute for Medieval Research (IMAFO) of the Austrian Academy of Sciences focuses on the adaptation and development of concepts and tools of network theory and complexity sciences for the analysis of societies, polities and regions in the medieval world in a comparative perspective. Key elements of its methodological and technological toolkit are applied, for instance, in the new project "Mapping medieval conflicts: a digital approach towards political dynamics in the pre-modern period" (MEDCON), which analyses political networks and conflict among power elites across medieval Europe with five case studies from the 12th to 15th century. For one of these case studies on 14th century Byzantium, the explanatory value of this approach is presented in greater detail. The presented results are integrated in a wider comparison of five late medieval polities across Afro-Eurasia (Byzantium, China, England, Hungary and Mamluk Egypt) against the background of the "Late Medieval Crisis" and its political and environmental turmoil. Finally, further perspectives of COMMED are outlined.

Network and Complexity Theory Applied to History

This interesting paper (summary below) applies network and complexity science to history and is sure to be of interest to those working at the intersection of these types of interdisciplinary studies. In particular, I’d be curious to see more work coming out of this area to support theses advanced by scholars like Francis Fukuyama on the development of societal structures. Those interested in the emerging area of Big History are sure to enjoy this type of treatment. I’m also curious how researchers in economics (like Cesar Hidalgo) might make use of available(?) historical data in such related analyses. I wonder whether Dave Harris might consider such an analysis in his ancient Near East work?

Those interested in a synopsis of the paper might find some benefit from an overview from MIT Technology Review: How the New Science of Computational History Is Changing the Study of the Past.


The emotional arcs of stories are dominated by six basic shapes

Bookmarked The emotional arcs of stories are dominated by six basic shapes (arxiv.org)
Advances in computing power, natural language processing, and digitization of text now make it possible to study our culture's evolution through its texts using a "big data" lens. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. Here, by classifying the emotional arcs for a filtered subset of 1,737 stories from Project Gutenberg's fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads.
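The paper's actual pipeline (sentiment lexicon, filtering, decomposition, clustering) is far richer, but the core idea of an "emotional arc" can be sketched as a sliding-window average of word valence over a text. The lexicon below is a toy stand-in of my own invention, not the one used in the paper:

```python
# Toy word-valence lexicon (hypothetical); words not listed score neutral (0.0).
VALENCE = {"happy": 1.0, "joy": 1.0, "love": 0.8,
           "sad": -1.0, "grief": -0.9, "fear": -0.7}

def emotional_arc(words, window=3):
    """Mean valence over a sliding window: a crude 'emotional arc'."""
    scores = [VALENCE.get(w, 0.0) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# A story that starts brightly and ends darkly should yield a falling arc.
text = "joy love happy sad grief fear".split()
print(emotional_arc(text))
```

Real analyses would replace the lexicon with a crowd-rated one and smooth over thousands of words per window, but the falling curve here illustrates the kind of trajectory the authors cluster into six shapes.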

🔖 Paper: Paging Through History by Mark Kurlansky

Bookmarked Paper: Paging Through History by Mark Kurlansky (Amazon.com)
Paper is one of the simplest and most essential pieces of human technology. For the past two millennia, the ability to produce it in ever more efficient ways has supported the proliferation of literacy, media, religion, education, commerce, and art; it has formed the foundation of civilizations, promoting revolutions and restoring stability. One has only to look at history’s greatest press run, which produced 6.5 billion copies of Máo zhuxí yulu, Quotations from Chairman Mao Tse-tung (Zedong)―which doesn’t include editions in 37 foreign languages and in braille―to appreciate the range and influence of a single publication, in paper. Or take the fact that one of history’s most revered artists, Leonardo da Vinci, left behind only 15 paintings but 4,000 works on paper. And though the colonies were at the time calling for a boycott of all British goods, the one exception they made speaks to the essentiality of the material; they penned the Declaration of Independence on British paper. Now, amid discussion of “going paperless”―and as speculation about the effects of a digitally dependent society grows rampant―we’ve come to a world-historic juncture. Thousands of years ago, Socrates and Plato warned that written language would be the end of “true knowledge,” replacing the need to exercise memory and think through complex questions. Similar arguments were made about the switch from handwritten to printed books, and today about the role of computer technology. By tracing paper’s evolution from antiquity to the present, with an emphasis on the contributions made in Asia and the Middle East, Mark Kurlansky challenges common assumptions about technology’s influence, affirming that paper is here to stay. Paper will be the commodity history that guides us forward in the twenty-first century and illuminates our times.

🔖 Marked as “want to read” Paper: Paging Through History by Mark Kurlansky (W. W. Norton & Company; 1st edition, May 10, 2016; ISBN: 9780393239614)


Exhibition at BC Space | Amerikan Krazy: Life Out of Balance

Bookmarked Artists take aim at their country and their county by Antoine Boessenkool (The Orange County Register)
“Amerikan Krazy: Life Out of Balance” takes part of its name from the new book “Amerikan Krazy” (boffosockobooks.com) by Henry James Korn (henryjameskorn.com). From 2008 to 2013, Korn worked at the Orange County Great Park. He was responsible for the creation of the Palm Court arts complex and culture, music, art and history programs.

“The book is very much about total corporate control of public and private space,” Korn said. The story follows a wounded Marine veteran haunted after having missed the chance to assassinate a presidential candidate who later causes massive human suffering and wreaks havoc on America’s wealth and democracy.

It’s a way of understanding what’s happening in politics now, Korn said.

“Because if ever there was a recognition that our public life and politics have gone crazy, it’s this moment.”

If you haven’t managed to make it down yet, this exhibition runs for another week at BC Space!
