Some personal thoughts and opinions on what “good quality mathematics” is, and whether one should try to define this term rigorously. As a case study, the story of Szemerédi's theorem is presented.
This looks like a cool little paper.
Some thoughts after reading
And indeed it was. The opening has a lovely long (though possibly incomplete) list of aspects of good mathematics toward which mathematicians should strive. The second section contains an interesting example that looks at the history of a theorem and its effect on several different areas. To me, most of the value is in thinking about the first several pages. I highly recommend this to all young budding mathematicians.
In particular, as a society, we need to be careful with students in elementary school, high school, and college, as the pedagogy of mathematics at these lower levels tends to weed out potential mathematicians of many of these stripes. Students often get discouraged from pursuing mathematics because it’s “too hard,” frequently because they don’t have the right resources or support. These students may in fact be the ones who would add to the well-roundedness of the subject that helps push it forward.
I believe that this diverse and multifaceted nature of “good mathematics” is very healthy for mathematics as a whole, as it allows us to pursue many different approaches to the subject, and exploit many different types of mathematical talent, towards our common goal of greater mathematical progress and understanding. While each one of the above attributes is generally accepted to be a desirable trait to have in mathematics, it can become detrimental to a field to pursue only one or two of them at the expense of all the others.
As I look at his list of scenarios, it also reminds me of how areas within the humanities can quickly become stymied. The trouble in some of those areas of study is that they’re not as rigorously underpinned, as systematic, or as brutally clear as mathematics can be, so the fact that they’ve become stuck may not be noticed until dreadfully late. By contrast, the rigor and clarity of mathematics make it much easier to notice the field’s true stars.
As a reminder for later, I’ll include these scenarios about research fields:
A field which becomes increasingly ornate and baroque, in which individual results are generalised and refined for their own sake, but the subject as a whole drifts aimlessly without any definite direction or sense of progress;

A field which becomes filled with many astounding conjectures, but with no hope of rigorous progress on any of them;

A field which now consists primarily of using ad hoc methods to solve a collection of unrelated problems, which have no unifying theme, connections, or purpose;

A field which has become overly dry and theoretical, continually recasting and unifying previous results in increasingly technical formal frameworks, but not generating any exciting new breakthroughs as a consequence; or

A field which reveres classical results, and continually presents shorter, simpler, and more elegant proofs of these results, but which does not generate any truly original and new results beyond the classical literature.
This book offers a broad introduction to food policies in the United States. Real-world controversies and debates motivate the book’s attention to economic principles, policy analysis, nutrition science and contemporary data sources. It assumes that the reader's concern is not just the economic interests of farmers, but also includes nutrition, sustainable agriculture, the environment and food security. The book’s goal is to make US food policy more comprehensible to those inside and outside the agri-food sector whose interests and aspirations have been ignored.
The chapters cover US agriculture, food production and the environment, international agricultural trade, food and beverage manufacturing, food retail and restaurants, food safety, dietary guidance, food labeling, advertising and federal food assistance programs for the poor.
The author is an agricultural economist with many years of experience in the non-profit advocacy sector, the US Department of Agriculture and as a professor at Tufts University. The author's well-known blog on US food policy provides a forum for discussion and debate of the issues set out in the book.
In How to Have a Good Day, economist and former McKinsey partner Caroline Webb shows readers how to use recent findings from behavioral economics, psychology, and neuroscience to transform our approach to everyday working life.
Advances in these behavioral sciences are giving us ever better understanding of how our brains work, why we make the choices we do, and what it takes for us to be at our best. But it has not always been easy to see how to apply these insights in the real world – until now.
In How to Have a Good Day, Webb explains exactly how to apply this science to our daily tasks and routines. She translates three big scientific ideas into step-by-step guidance that shows us how to set better priorities, make our time go further, ace every interaction, be our smartest selves, strengthen our personal impact, be resilient to setbacks, and boost our energy and enjoyment. Through it all, Webb teaches us how to navigate the typical challenges of modern workplaces—from conflict with colleagues to dull meetings and overflowing inboxes—with skill and ease.
Filled with stories of people who have used Webb’s insights to boost their job satisfaction and performance at work, How to Have a Good Day is the book so many people wanted when they finished Nudge, Blink and Thinking Fast and Slow and were looking for practical ways to apply this fascinating science to their own lives and careers.
A remarkable and much-needed book, How to Have a Good Day gives us the tools we need to have a lifetime of good days.
— Fortune.com - 5 Business Books to Learn From
— Forbes - 16 New Books for Creative Learners
The author of the legendary bestseller Influence, social psychologist Robert Cialdini shines a light on effective persuasion and reveals that the secret doesn’t lie in the message itself, but in the key moment before that message is delivered.
What separates effective communicators from truly successful persuaders? Using the same combination of rigorous scientific research and accessibility that made his Influence an iconic bestseller, Robert Cialdini explains how to capitalize on the essential window of time before you deliver an important message. This “privileged moment for change” prepares people to be receptive to a message before they experience it. Optimal persuasion is achieved only through optimal pre-suasion. In other words, to change “minds” a pre-suader must also change “states of mind.”
His first solo work in over thirty years, Cialdini’s Pre-Suasion draws on his extensive experience as the most cited social psychologist of our time and explains the techniques a person should implement to become a master persuader. Altering a listener’s attitudes, beliefs, or experiences isn’t necessary, says Cialdini—all that’s required is for a communicator to redirect the audience’s focus of attention before a relevant action.
From studies on advertising imagery to the treatment of opiate addiction, from the annual letters of Berkshire Hathaway to the annals of history, Cialdini draws on an array of studies and narratives to outline the specific techniques you can use in online marketing campaigns and even in effective wartime propaganda. He illustrates how the artful diversion of attention leads to successful pre-suasion and gets your targeted audience primed and ready to say, “Yes.”
The world’s foremost expert on pricing strategy shows how this mysterious process works and how to maximize value through pricing, for company and customer alike.
In all walks of life, we constantly make decisions about whether something is worth our money or our time, or try to convince others to part with their money or their time. Price is the place where value and money meet. From the global release of the latest electronic gadget to the bewildering gyrations of oil futures to markdowns at the bargain store, price is the most powerful and pervasive economic force in our day-to-day lives and one of the least understood.
The recipe for successful pricing often sounds like an exotic cocktail, with equal parts psychology, economics, strategy, tools and incentives stirred up together, usually with just enough math to sour the taste. That leads managers to water down the drink with hunches and rules of thumb, or leave out the parts with which they don’t feel comfortable. While this makes for a sweeter drink, it often lacks the punch to have an impact on the customer or on the business.
It doesn’t have to be that way, though, as Hermann Simon illustrates through dozens of stories collected over four decades in the trenches and behind the scenes. A world-renowned speaker on pricing and a trusted advisor to Fortune 500 executives, Simon’s lifelong journey has taken him from rural farmers’ markets, to a distinguished academic career, to a long second career as an entrepreneur and management consultant to companies large and small throughout the world. Along the way, he has learned from Nobel Prize winners and leading management gurus, and helped countless managers and executives use pricing as a way to create new markets, grow their businesses and gain a sustained competitive advantage. He also learned some tough personal lessons about value, how people perceive it, and how people profit from it.
In this engaging and practical narrative, Simon leaves nothing out of the pricing cocktail, but still makes it go down smoothly and leaves you wanting to learn more and do more―as a consumer or as a business person. You will never look at pricing the same way again.
A unique introduction to the theory of linear operators on Hilbert space. The author presents the basic facts of functional analysis in a form suitable for engineers, scientists, and applied mathematicians. Although the Definition-Theorem-Proof format of mathematics is used, careful attention is given to motivation of the material covered and many illustrative examples are presented.
60db seems like the start of what could be an interesting podcast/audio discovery app/engine. Based on their announcement, it has the appearance of wanting to be a Nuzzel for the audio space, but it isn’t quite there yet based on my quick look through their site. At first blush it doesn’t seem much better than Huffduffer and doesn’t have a follower model of any sort, but perhaps that could change. Folks watching the podcasting and audio discovery space should keep an eye on it though.
Sadly, at least for now, the app appears to focus on short-form audio (3–8 minutes in length) from major media content producers who are already syndicating audio in podcast format. I haven’t used the iOS app (there’s no Android app yet), but the web interface allows one to pick from a list of about 20 broad category options (news, sports, politics, kids, etc.) to “customize” one’s feed.
Hopefully, in the future it will build itself out a bit more like Nuzzel by requesting data from one’s Facebook or Twitter feeds to better customize an algorithmic feed for general audio discovery. Maybe it will allow a follower model based on one’s social graph for improved discovery. One might also like to see custom settings for podcast story length, so one could choose between the short-hit audio they currently have in abundance and longer-form stories for lengthier commutes.
For the moment however, they seem to have recreated a slightly better and more portable version of news radio for the internet/mobile crowd. Perhaps future iterations will reveal more?
One of the most important use cases of ontologies is the calculation of similarity scores between a query and items annotated with classes of an ontology. The hierarchical structure of an ontology does not necessarily reflect all relevant aspects of the domain it is modelling, and this can reduce the performance of ontology-based search algorithms. For instance, the classes of phenotype ontologies may be arranged according to anatomical criteria, but individual phenotypic features may affect anatomical entities in opposite ways. Thus, "opposite" classes may be located in close proximity in an ontology; for example, enlarged liver and small liver are grouped under abnormal liver size. Using standard similarity measures, these would be scored as similar, despite in fact being opposites. In this paper, we use information about opposite ontology classes to extend two large phenotype ontologies, the human and the mammalian phenotype ontology. We also show that this information can be used to improve rankings based on similarity measures that incorporate it. In particular, cosine-similarity-based measures show large improvements. We hypothesize this is due to the natural embedding of opposite phenotypes in vector space. We support the idea that the expressivity of semantic web technologies should be explored more extensively in biomedical ontologies and that similarity measures should be extended to incorporate more than the pure graph structure defined by the subclass or part-of relationships of the underlying ontologies.
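To make the vector-space intuition concrete, here is a minimal sketch of how opposite phenotypes can land on opposite sides of an axis in an embedding, lowering their cosine similarity. All feature axes, names, and numbers here are hypothetical illustrations, not drawn from the actual ontologies or the paper's method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical 3-axis embedding: (liver involvement, size direction, severity).
# "Opposite" phenotypes share anatomy but sit on opposite sides of the size
# axis, so a vector-space measure separates them even though they are
# siblings under "abnormal liver size" in the ontology graph.
enlarged_liver = [1.0, 1.0, 0.5]
small_liver = [1.0, -1.0, 0.5]
abnormal_liver_size = [1.0, 0.0, 0.5]

print(cosine(enlarged_liver, small_liver))          # low: opposites
print(cosine(enlarged_liver, abnormal_liver_size))  # higher: shared parent
```

A graph-only measure would score the two leaf classes as very similar (shortest path of two edges through their common parent), whereas the signed axis keeps them apart.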
Seymour Papert’s Mindstorms was published by Basic Books in 1980, and outlines his vision of children using computers as instruments for learning. A second edition, with new Forewords by John Sculley and Carol Sperry, was published in 1993. The book remains as relevant now as when first published almost forty years ago.
The Media Lab is grateful to Seymour Papert’s family for allowing us to post the text here. We invite you to add your comments and reflections.
If you are interested in purchasing the print edition of Mindstorms, please visit Basic Books.
A country's mix of products predicts its subsequent pattern of diversification and economic growth. But does this product mix also predict income inequality? Here we combine methods from econometrics, network science, and economic complexity to show that countries exporting complex products (as measured by the Economic Complexity Index) have lower levels of income inequality than countries exporting simpler products. Using multivariate regression analysis, we show that economic complexity is a significant and negative predictor of income inequality and that this relationship is robust to controlling for aggregate measures of income, institutions, export concentration, and human capital. Moreover, we introduce a measure that associates a product with a level of income inequality equal to the average GINI of the countries exporting that product (weighted by the share the product represents in that country's export basket). We use this measure together with the network of related products (or product space) to illustrate how the development of new products is associated with changes in income inequality. These findings show that economic complexity captures information about an economy's level of development that is relevant to the ways an economy generates and distributes its income. Moreover, these findings suggest that a country's productive structure may limit its range of income inequality. Finally, we make our results available through an online resource that allows its users to visualize the structural transformation of over 150 countries and their associated changes in income inequality between 1963 and 2008.
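The product-level measure described in the abstract is an export-share-weighted average of country GINIs. A toy sketch of that calculation, with countries, products, and all numbers invented purely for illustration:

```python
# Hypothetical country GINI coefficients and export baskets
# (share of each product in that country's exports).
gini = {"A": 0.30, "B": 0.50, "C": 0.45}
export_share = {  # export_share[country][product]
    "A": {"machinery": 0.7, "cocoa": 0.3},
    "B": {"machinery": 0.1, "cocoa": 0.9},
    "C": {"machinery": 0.4, "cocoa": 0.6},
}

def product_gini(product):
    """Average GINI of exporting countries, weighted by the share the
    product represents in each country's export basket."""
    num = sum(shares.get(product, 0.0) * gini[c]
              for c, shares in export_share.items())
    den = sum(shares.get(product, 0.0) for shares in export_share.values())
    return num / den

print(product_gini("machinery"))  # dominated by low-inequality exporter A
print(product_gini("cocoa"))      # dominated by high-inequality exporter B
```

In this toy data the "complex" product ends up associated with lower inequality simply because the low-GINI country's basket is concentrated in it, which mirrors the paper's directional finding.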
Last night Tucker Hottes, Den Temple and I held the first Homebrew Website Club at The Keys in Scranton, PA. I really appreciate that HWC will force me to set aside some time to work on my personal site since it is often neglected for more pressing projects.
The interplay between structural connections and emerging information flow in the human brain remains an open research problem. A recent study observed global patterns of directional information flow in empirical data using the measure of transfer entropy. For higher frequency bands, the overall direction of information flow was from posterior to anterior regions, whereas an anterior-to-posterior pattern was observed in lower frequency bands. In this study, we applied a simple Susceptible-Infected-Susceptible (SIS) epidemic spreading model on the human connectome with the aim of revealing the topological properties of the structural network that give rise to these global patterns. We found that direct structural connections induced higher transfer entropy between two brain regions and that transfer entropy decreased with increasing distance between nodes (in terms of hops in the structural network). Applying the SIS model, we were able to confirm the empirically observed opposite information flow patterns; posterior hubs in the structural network seem to play a dominant role in the network dynamics. For small time scales, when these hubs acted as strong receivers of information, the global pattern of information flow was in the posterior-to-anterior direction, and in the opposite direction when they were strong senders. Our analysis suggests that these global patterns of directional information flow result from an unequal spatial distribution of the structural degree between posterior and anterior regions, and that their directions seem to be linked to different time scales of the spreading process.
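For readers unfamiliar with the model class, here is a minimal discrete-time SIS simulation on a toy graph. The network, the hub designation, and the parameters are invented for illustration only; they are not the paper's connectome data or its exact dynamics:

```python
import random

random.seed(0)  # reproducible run

# Toy undirected "connectome": node 0 plays the role of a high-degree hub.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (3, 4), (1, 2)]
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, set()).add(v)
    neighbors.setdefault(v, set()).add(u)

beta, mu = 0.3, 0.2  # per-step infection and recovery probabilities

def sis_step(infected):
    """One synchronous update of the discrete-time SIS dynamics."""
    new_infected = set()
    for node in neighbors:
        if node in infected:
            if random.random() > mu:          # fails to recover, stays infected
                new_infected.add(node)
        else:
            for nb in neighbors[node]:        # infection attempts from neighbors
                if nb in infected and random.random() < beta:
                    new_infected.add(node)
                    break
    return new_infected

state = {0}  # seed the spreading process at the hub
for _ in range(50):
    state = sis_step(state)
print(len(state))  # number of "infected" (active) nodes after 50 steps
```

Tracking which nodes tend to activate before which others in runs like this is, roughly, how directional flow between hub and non-hub regions can be read off a spreading model.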
One of America’s foremost philosophers offers a major new account of the origins of the conscious mind.
How did we come to have minds?
For centuries, this question has intrigued psychologists, physicists, poets, and philosophers, who have wondered how the human mind developed its unrivaled ability to create, imagine, and explain. Disciples of Darwin have long aspired to explain how consciousness, language, and culture could have appeared through natural selection, blazing promising trails that tend, however, to end in confusion and controversy. Even though our understanding of the inner workings of proteins, neurons, and DNA is deeper than ever before, the matter of how our minds came to be has largely remained a mystery.
That is now changing, says Daniel C. Dennett. In From Bacteria to Bach and Back, his most comprehensive exploration of evolutionary thinking yet, he builds on ideas from computer science and biology to show how a comprehending mind could in fact have arisen from a mindless process of natural selection. Part philosophical whodunit, part bold scientific conjecture, this landmark work enlarges themes that have sustained Dennett’s legendary career at the forefront of philosophical thought.
In his inimitable style―laced with wit and arresting thought experiments―Dennett explains that a crucial shift occurred when humans developed the ability to share memes, or ways of doing things not based in genetic instinct. Language, itself composed of memes, turbocharged this interplay. Competition among memes―a form of natural selection―produced thinking tools so well-designed that they gave us the power to design our own memes. The result, a mind that not only perceives and controls but can create and comprehend, was thus largely shaped by the process of cultural evolution.
An agenda-setting book for a new generation of philosophers, scientists, and thinkers, From Bacteria to Bach and Back will delight and entertain anyone eager to make sense of how the mind works and how it came about.
Epigenetics refers to information transmitted during cell division other than the DNA sequence per se, and it is the language that distinguishes stem cells from somatic cells, one organ from another, and even identical twins from each other. In contrast to the DNA sequence, the epigenome is relatively susceptible to modification by the environment as well as stochastic perturbations over time, adding to phenotypic diversity in the population. Despite its strong ties to the environment, epigenetics has never been well reconciled to evolutionary thinking, and in fact there is now strong evidence against the transmission of so-called “epi-alleles,” i.e. epigenetic modifications that pass through the germline.
However, genetic variants that regulate stochastic fluctuation of gene expression and phenotypes in the offspring appear to be transmitted as an epigenetic or even Lamarckian trait. Furthermore, even the normal process of cellular differentiation from a single cell to a complex organism is not understood well from a mathematical point of view. There is increasingly strong evidence that stem cells are highly heterogeneous and in fact stochasticity is necessary for pluripotency. This process appears to be tightly regulated through the epigenome in development. Moreover, in these biological contexts, “stochasticity” is hardly synonymous with “noise”, which often refers to variation which obscures a “true signal” (e.g., measurement error) or which is structural, as in physics (e.g., quantum noise). In contrast, “stochastic regulation” refers to purposeful, programmed variation; the fluctuations are random but there is no true signal to mask.
This workshop will serve as a forum for scientists and engineers with an interest in computational biology to explore the role of stochasticity in regulation, development and evolution, and its epigenetic basis. Just as thinking about stochasticity was transformative in physics and in some areas of biology, it promises to fundamentally transform modern genetics and help to explain phase transitions such as differentiation and cancer.
This workshop will include a poster session; a request for poster titles will be sent to registered participants in advance of the workshop.
Adam Arkin (Lawrence Berkeley Laboratory)
Gábor Balázsi (SUNY Stony Brook)
Domitilla Del Vecchio (Massachusetts Institute of Technology)
Michael Elowitz (California Institute of Technology)
Andrew Feinberg (Johns Hopkins University)
Don Geman (Johns Hopkins University)
Anita Göndör (Karolinska Institutet)
John Goutsias (Johns Hopkins University)
Garrett Jenkinson (Johns Hopkins University)
Andre Levchenko (Yale University)
Olgica Milenkovic (University of Illinois)
Johan Paulsson (Harvard University)
Leor Weinberger (University of California, San Francisco (UCSF))
The equations of gauge theory lie at the heart of our understanding of particle physics. The Standard Model, which describes the electromagnetic, weak, and strong forces, is based on the Yang-Mills equations. Starting with the work of Donaldson in the 1980s, gauge theory has also been successfully applied in other areas of pure mathematics, such as low dimensional topology, symplectic geometry, and algebraic geometry.
More recently, Witten proposed a gauge-theoretic interpretation of Khovanov homology, a knot invariant whose origins lie in representation theory. Khovanov homology is a “categorification” of the celebrated Jones polynomial, in the sense that its Euler characteristic recovers this polynomial. At the moment, Khovanov homology is only defined for knots in the three-sphere, but Witten’s proposal holds the promise of generalizations to other three-manifolds, and perhaps of producing new invariants of four-manifolds.
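The categorification statement admits a concise formula (a standard formulation; grading conventions vary by author): the graded Euler characteristic of the bigraded Khovanov homology groups recovers the unnormalized Jones polynomial,

```latex
\hat{J}(K)(q) \;=\; \sum_{i,j \in \mathbb{Z}} (-1)^{i}\, q^{j}\, \operatorname{rank} Kh^{i,j}(K),
```

so weighting each rank by the homological sign $(-1)^i$ and the quantum grading $q^j$ collapses the two-variable invariant back to the one-variable polynomial, with the unknot giving $q + q^{-1}$.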
This workshop will bring together researchers from several different fields (theoretical physics, mathematical gauge theory, topology, analysis / PDE, representation theory, symplectic geometry, and algebraic geometry), and thus help facilitate connections between these areas. The common focus will be to understand Khovanov homology and related invariants through the lens of gauge theory.
This workshop will include a poster session; a request for posters will be sent to registered participants in advance of the workshop.
Edward Witten will be giving two public lectures as part of the Green Family Lecture series:
March 6, 2017 From Gauge Theory to Khovanov Homology Via Floer Theory
The goal of the lecture is to describe a gauge theory approach to Khovanov homology of knots, in particular, to motivate the relevant gauge theory equations in a way that does not require too much physics background. I will give a gauge theory perspective on the construction of singly-graded Khovanov homology by Abouzaid and Smith.
March 8, 2017 An Introduction to the SYK Model
The Sachdev-Ye model was originally a model of quantum spin liquids that was introduced in the mid-1990s. In recent years, it has been reinterpreted by Kitaev as a model of quantum chaos and black holes. This lecture will be primarily a gentle introduction to the SYK model, though I will also describe a few more recent results.