Over the next few days, I’ll be maintaining a Storify story covering information related to and coming out of the Information Theory and Entropy Workshop being sponsored by NIMBioS at the University of Tennessee, Knoxville.
For those in attendance or participating by watching the live streaming video (or even watching the video after the fact), please feel free to use the official hashtag #entropyWS, and I’ll do my best to include your tweets, posts, and material in the story stream for future reference.
For journal articles and papers mentioned in/at the workshop, I encourage everyone to join the Mendeley.com group ITBio: Information Theory, Microbiology, Evolution, and Complexity and add them to the group’s list of papers. Think of it as a collaborative online journal club of sorts.
Those participating in the workshop are also encouraged to take a look at a growing collection of researchers and materials I maintain here. If you have materials or resources you’d like to contribute to the list, please send me an email, submit them via the suggestions/submission form, or add them in the comments section below.
Dr. Mike Miller, who had previously announced a two-quarter sequence of classes on Lie Groups at UCLA, has just opened up registration for the second course in the series. His courses are always clear, entertaining, and invigorating, and I highly recommend them to anyone interested in math, science, or engineering.
Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to understand a single word of it; without these, one wanders about in a dark labyrinth.
Galileo Galilei (1564–1642) in Il Saggiatore (The Assayer)
Prior to the first part of the course, I’d written some thoughts about the timbre and tempo of his lecture style and philosophy and commend those interested to take a peek. I also mentioned some additional resources for the course there as well. For those who missed the first portion, I’m happy to help fill you in and share some of my notes if necessary. The recommended minimum prerequisites for this class are linear algebra and some calculus.
Introduction to Lie Groups and Lie Algebras (Part 2)
Math X 450.7 / 3.00 units / Reg. # 251580W
Professor: Michael Miller, Ph.D.
Start Date: January 13, 2015
Location: UCLA, 5137 Math Sciences Building
Tuesday, 7-10pm
January 13 – March 24
11 meetings total. Class will not meet on one Tuesday, to be announced.
A Lie group is a differentiable manifold that is also a group for which the product and inverse maps are differentiable. A Lie algebra is a vector space endowed with a binary operation that is bilinear, alternating, and satisfies the so-called Jacobi identity. This course is the second in a 2-quarter sequence that offers an introductory survey of Lie groups, their associated Lie algebras, and their representations. Its focus is split between continuing last quarter’s study of matrix Lie groups and their representations and reconciling this theory with that for the more general manifold setting. Topics to be discussed include the Weyl group, complete reducibility, semisimple Lie algebras, root systems, and Cartan subalgebras. This is an advanced course, requiring a solid understanding of linear algebra, basic analysis, and, ideally, the material from the previous quarter. Internet access is required to retrieve course materials.
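For those who’d like the definitions above in symbols, here’s a minimal sketch for the matrix case treated last quarter (the general manifold setting is the subject of this quarter): on a matrix Lie algebra the bracket is the commutator, which is bilinear, alternating, and satisfies the Jacobi identity:

\[
[X,Y] = XY - YX, \qquad [X,X] = 0, \qquad [X,[Y,Z]] + [Y,[Z,X]] + [Z,[X,Y]] = 0.
\]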
Editor’s note: On 12/12/17 Storify announced they would be shutting down. As a result, I’m replacing the embedded version of the original data served by Storify with an HTML copy, which can be found below:
I’m giving a short 30-minute talk at a workshop on Biological and Bio-Inspired Information Theory at the Banff International Research Station. I’ll say more about the workshop later, but here’s my talk: * Biodiversity, entropy and thermodynamics. Most of the people at this workshop study neurobiology and cell signalling, not evolutionary game theory or…
I’m having a great time at a workshop on Biological and Bio-Inspired Information Theory in Banff, Canada. You can see videos of the talks online. There have been lots of good talks so far, but this one really blew my mind: * Naftali Tishby, Sensing and acting under information constraints—a principled approach to biology and…
John Harte is an ecologist who uses maximum entropy methods to predict the distribution, abundance and energy usage of species. Marc Harper uses information theory in bioinformatics and evolutionary game theory. Harper, Harte and I are organizing a workshop on entropy and information in biological systems, and I’m really excited about it!
John Harte, Marc Harper and I are running a workshop! Now you can apply here to attend: * Information and entropy in biological systems, National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee, Wednesday-Friday, 8-10 April 2015. Click the link, read the stuff and scroll down to “CLICK HERE” to apply.
There will be a 5-day workshop on Biological and Bio-Inspired Information Theory at BIRS from Sunday the 26th to Friday the 31st of October, 2014. It’s being organized by * Toby Berger (University of Virginia) * Andrew Eckford (York University) * Peter Thomas (Case Western Reserve University) BIRS is the Banff International Research Station,…
How does it feel to (co-)write a book and hold the finished product in your hands? About like this: Many, many thanks to my excellent co-authors, Tadashi Nakano and Tokuko Haraguchi, for their hard work; thanks to Cambridge for accepting this project and managing it well; and thanks to Satoshi Hiyama for writing a nice blurb.
You may have seen our PLOS ONE paper about tabletop molecular communication, which received loads of media coverage. One of the goals of this paper was to show that anyone can do experiments in molecular communication, without any wet labs or expensive apparatus.
INFORMATION THEORY is the new central discipline. This graph was from 20y ago in the seminal book Cover and Thomas, as the field was starting to be defined. Now Information Theory has been expanded to swallow even more fields.
Born in, of all disciplines, Electrical Engineering, the field has been progressively infiltrating probability theory, computer science, statistical physics, data science, gambling theory, ruin problems, complexity, even how one deals with knowledge, epistemology. It defines noise/signal, order/disorder, etc. It studies cellular automata. You can use it in theology (FREE WILL & algorithmic complexity). As I said, it is the MOTHER discipline.
I am certain much of Medicine will naturally grow to be a subset of it, both operationally, and in studying how the human body works: the latter is an information machine. Same with linguistics. Same with political “science”, same with… everything.
I am saying this because I figured out what the long 5th volume of the INCERTO will be. Cannot say now with any precision but it has to do with a variant of entropy as the core natural generator of Antifragility.
[Revised to explain that it is not *replacing* other disciplines, just infiltrating them as the point was initially misunderstood…]
[My comments posted to the original Facebook post follow below.]
I’m coming to this post a bit late as I’m playing a bit of catch up, but agree with it wholeheartedly.
In particular, applications to molecular biology and medicine have really begun to come to a heavy boil in just the past five years. This year appears to mark the beginning of the biggest renaissance in the application of information theory to biology since Hubert Yockey, Henry Quastler, and Robert L. Platzman’s “Symposium on Information Theory in Biology” at Gatlinburg, Tennessee in 1956.
Upcoming/recent conferences/workshops on information theory in biology include:
I’ll note in passing, for those interested, that Claude Shannon’s famous master’s thesis at MIT (in which he applied Boolean algebra to electrical switching circuits, helping make the digital revolution possible) and his subsequent “A Mathematical Theory of Communication” were so revolutionary that nearly everyone forgets his MIT Ph.D. thesis, “An Algebra for Theoretical Genetics,” which presaged cybernetics and the current applications of information theory to microbiology, and which is probably as seminal as Sir R.A. Fisher’s applications of statistics to science in general and biology in particular.
For those commenting on the post who were interested in a layman’s introduction to information theory, I recommend John Robinson Pierce’s An Introduction to Information Theory: Symbols, Signals and Noise (Dover has a very inexpensive edition). After this, one should take a look at Claude Shannon’s original paper. (The MIT Press printing includes an excellent overview by Warren Weaver along with the paper itself.) The mathematics in the paper really isn’t too technical, and most of it should be comprehensible to advanced high school students.
For those who don’t understand the concept of entropy, I HIGHLY recommend Arieh Ben-Naim’s book Entropy Demystified: The Second Law Reduced to Plain Common Sense with Seven Simulated Games. He really does tear the concept down into its most basic form in a way I haven’t seen others come remotely close to, and which even my mother can comprehend (with no mathematics at all). (I recommend this presentation even to those with Ph.D.’s in physics because it is so truly fundamental.)
For the more advanced mathematicians, physicists, and engineers, Arieh Ben-Naim does a truly spectacular job of extending E.T. Jaynes’ work on information theory and statistical mechanics, arriving at a more coherent mathematical theory that conjoins the entropy of physics/statistical mechanics with that of Shannon’s information theory, in A Farewell to Entropy: Statistical Thermodynamics Based on Information.
For the advanced readers/researchers interested in more at the intersection of information theory and biology, I’ll also mention that I maintain a list of references, books, and journal articles in a Mendeley group entitled “ITBio: Information Theory, Microbiology, Evolution, and Complexity.”
In recent years, ideas such as “life is information processing” or “information holds the key to understanding life” have become more common. However, how can information, or more formally Information Theory, increase our understanding of life, or life-like systems?
Information Theory not only has a profound mathematical basis, but also typically provides an intuitive understanding of processes such as learning, behavior, and evolution in terms of information processing.
In this special issue, we are interested in both:
the information-theoretic formalization and quantification of different aspects of life, such as driving forces of learning and behavior generation, information flows between neurons, swarm members and social agents, and information theoretic aspects of evolution and adaptation, and
the simulation and creation of life-like systems with previously identified principles and incentives.
Topics relating to artificial and natural systems:
information-theoretic intrinsic motivations
information-theoretic quantification of behavior
information-theoretic guidance of artificial evolution
information-theoretic guidance of self-organization
information-theoretic driving forces behind learning
information-theoretic driving forces behind behavior
information theory in swarms
information theory in social behavior
information theory in evolution
information theory in the brain
information theory in system-environment distinction
information theory in the perception-action loop
information-theoretic definitions of life
Submission
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.
Deadline for manuscript submissions: 28 February 2015
Special Issue Editors
Guest Editor Dr. Christoph Salge
Adaptive Systems Research Group, University of Hertfordshire, College Lane, AL10 9AB Hatfield, UK
Website: http://homepages.stca.herts.ac.uk/~cs08abi
E-Mail: c.salge@herts.ac.uk
Phone: +44 1707 28 4490
Interests: Intrinsic Motivation (Empowerment); Self-Organization; Guided Self-Organization; Information-Theoretic Incentives for Social Interaction; Information-Theoretic Incentives for Swarms; Information Theory and Computer Game AI
Guest Editor Dr. Georg Martius
Cognition and Neurosciences, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Website: http://www.mis.mpg.de/jjost/members/georg-martius.html
E-Mail: martius@mis.mpg.de
Phone: +49 341 9959 545
Interests: Autonomous Robots; Self-Organization; Guided Self-Organization; Information Theory; Dynamical Systems; Machine Learning; Neuroscience of Learning; Optimal Control
Guest Editor Dr. Keyan Ghazi-Zahedi
Information Theory of Cognitive Systems, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Website: http://personal-homepages.mis.mpg.de/zahedi
E-Mail: zahedi@mis.mpg.de
Phone: +49 341 9959 535
Interests: Embodied Artificial Intelligence; Information Theory of the Sensorimotor Loop; Dynamical Systems; Cybernetics; Self-organisation; Synaptic plasticity; Evolutionary Robotics
Guest Editor Dr. Daniel Polani
Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
Website: http://homepages.feis.herts.ac.uk/~comqdp1/
E-Mail: d.polani@herts.ac.uk
Interests: artificial intelligence; artificial life; information theory for intelligent information processing; sensor evolution; collective and multiagent systems
As many may know or have already heard, Dr. Mike Miller, a retired mathematician from RAND and long-time math professor at UCLA, is offering a course on Introduction to Lie Groups and Lie Algebras this fall through UCLA Extension. Whether you’re a professional mathematician, engineer, physicist, physician, or even a hobbyist interested in mathematics, you’ll be sure to get something interesting out of this course, not to mention the camaraderie of 20-30 other “regulars” with widely varying backgrounds (actors to surgeons and evolutionary theorists to engineers) who’ve been taking almost everything Mike has offered over the years (and yes, he’s THAT good — we’re sure you’ll be addicted too).
“Beginners” Welcome!
Even if it’s been years since you last took Calculus or Linear Algebra, Mike (and the rest of the class) will help you get quickly back up to speed to delve into what is often otherwise a very deep subject. If you’re interested in advanced physics, quantum mechanics, quantum information, or string theory, this is one of the topics that is de rigueur for delving in deeply and understanding them better. The topic is also near and dear to the hearts of those in robotics, graphics, 3-D modelling, gaming, and areas utilizing multi-dimensional rotations. And naturally, it’s simply a beautiful and elegant subject for those who have no need to apply it to anything, but who just want to meander their way through higher mathematics for the fun of it (this group will make up the largest portion of the class, by the way).
Whether you’ve been away from serious math for decades or use it every day, or even if you’ve never gone past Calculus or Linear Algebra, this is bound to be the most entertaining thing you can do with your Tuesday nights in the fall. If you’re not sure what you’re getting into (or are scared a bit by the course description), I highly encourage you to come and join us for at least the first class before you pass up the opportunity. I’ll mention that the great majority of new students to Mike’s classes join the ever-growing group of regulars who subsequently take almost everything he teaches. (For the reticent, I’ll mention that one of the first courses I took from Mike was Algebraic Topology, which generally requires a few semesters of Abstract Algebra and a semester of Topology as prerequisites. I’d taken neither of these prerequisites, but due to Mike’s excellent lecture style and desire to make everything comprehensible, I was able to do exceedingly well in the course.) I’m happy to chat with those who may be reticent. Also keep in mind that you can register to take the class for a grade, pass/fail, or no grade at all to suit your needs/lifestyle.
My classes have the full spectrum of students from the most serious to the hobbyist to those who are in it for the entertainment and ‘just enjoy watching it all go by.’
Mike Miller, Ph.D.
As a group, some of us have a collection of a few dozen texts in the area which we’re happy to loan out as well. In addition to the one recommended text (Mike always gives such comprehensive notes that any text for his classes is purely supplemental at best), several of us have also found some good similar texts:
Given the breadth and diversity of the backgrounds of students in the class, I’m sure Mike will spend some reasonable time at the beginning [or later in the class, as necessary] doing a quick overview of some linear algebra and calculus related topics that will be needed later in the quarter(s).
Further information on the class and a link to register can be found below. If you know of others who might be interested in this, please feel free to forward it along – the more the merrier.
I hope to see you all soon.
Introduction to Lie Groups and Lie Algebras
MATH X 450.6 / 3.00 units / Reg. # 249254W
Professor: Michael Miller, Ph.D.
Start Date: 9/30/2014
Location: UCLA, 5137 Math Sciences Building
Tuesday, 7-10pm
September 30 – December 16, 2014
11 meetings total (no meeting 11/11)
Register here: https://www.uclaextension.edu/Pages/Course.aspx?reg=249254
Course Description
A Lie group is a differentiable manifold that is also a group for which the product and inverse maps are differentiable. A Lie algebra is a vector space endowed with a binary operation that is bilinear, alternating, and satisfies the so-called Jacobi identity. This course, the first in a 2-quarter sequence, is an introductory survey of Lie groups, their associated Lie algebras, and their representations. This first quarter will focus on the special case of matrix Lie groups, including general linear, special linear, orthogonal, unitary, and symplectic. The second quarter will generalize the theory developed to the case of arbitrary Lie groups. Topics to be discussed include compactness and connectedness, homomorphisms and isomorphisms, exponential mappings, the Baker-Campbell-Hausdorff formula, covering groups, and the Weyl group. This is an advanced course, requiring a solid understanding of linear algebra and basic analysis.
I rarely, if ever, reblog anything here, but this particular post from John Baez’s blog Azimuth is so on-topic that attempting to embellish it seems silly.
Entropy and Information in Biological Systems (Part 2)
John Harte, Marc Harper and I are running a workshop! Now you can apply here to attend:
* Information and entropy in biological systems, National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee, Wednesday-Friday, 8-10 April 2015. Click the link, read the stuff and scroll down to “CLICK HERE” to apply. The deadline is 12 November 2014.
Financial support for travel, meals, and lodging is available for workshop attendees who need it. We will choose among the applicants and invite 10-15 of them.
The idea
Information theory and entropy methods are becoming powerful tools in biology, from the level of individual cells, to whole ecosystems, to experimental design, model-building, and the measurement of biodiversity. The aim of this investigative workshop is to synthesize different ways of applying these concepts to help systematize and unify work in biological systems. Early attempts at “grand syntheses” often misfired, but applications of information theory and entropy to specific highly focused topics in biology have been increasingly successful. In ecology, entropy maximization methods have proven successful in predicting the distribution and abundance of species. Entropy is also widely used as a measure of biodiversity. Work on the role of information in game theory has shed new light on evolution. As a population evolves, it can be seen as gaining information about its environment. The principle of maximum entropy production has emerged as a fascinating yet controversial approach to predicting the behavior of biological systems, from individual organisms to whole ecosystems. This investigative workshop will bring together top researchers from these diverse fields to share insights and methods and address some long-standing conceptual problems.
So, here are the goals of our workshop:
To study the validity of the principle of Maximum Entropy Production (MEP), which states that biological systems – and indeed all open, non-equilibrium systems – act to produce entropy at the maximum rate.
To familiarize all the participants with applications to ecology of the MaxEnt method: choosing the probabilistic hypothesis with the highest entropy subject to the constraints of our data (a brief sketch of this recipe appears just after this list). We will compare MaxEnt with competing approaches and examine whether MaxEnt provides a sufficient justification for the principle of MEP.
To clarify relations between known characterizations of entropy, the use of entropy as a measure of biodiversity, and the use of MaxEnt methods in ecology.
To develop the concept of evolutionary games as “learning” processes in which information is gained over time.
To study the interplay between information theory and the thermodynamics of individual cells and organelles.
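For readers less familiar with the MaxEnt recipe mentioned in the goals above, here is a brief sketch in standard notation (my own gloss, not part of the workshop announcement): choose the probability distribution that maximizes the Shannon entropy subject to the constraints supplied by the data,

\[
\max_{p}\; H(p) = -\sum_i p_i \log p_i
\quad\text{subject to}\quad
\sum_i p_i = 1, \qquad \sum_i p_i f_k(i) = \langle f_k \rangle,
\]

whose solution takes the familiar exponential (Gibbs) form

\[
p_i = \frac{1}{Z}\exp\!\Big(-\sum_k \lambda_k f_k(i)\Big),
\]

with the Lagrange multipliers \(\lambda_k\) fixed by the constraint values and \(Z\) the normalizing constant. In the ecological applications Harte and others pursue, the constraints typically come from measured totals such as abundance and energy use.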
If you’ve got colleagues who might be interested in this, please let them know. You can download a PDF suitable for printing and putting on a bulletin board by clicking on this:
On Friday, I had an excellent and stimulating conversation with Arieh Ben-Naim about his recent writing and work, and he mentioned in passing that he had been invited to a conference relating to entropy and biology in Vienna. A quick web search turned it up, and not having heard about it myself yet, I thought I’d pass it along to other regular readers interested in the area.
The workshop on “Entropy in Biomolecular Systems” is being hosted by the Centre Européen de Calcul Atomique et Moléculaire (CECAM).
Location: DACAM, Max F. Perutz Laboratories, University of Vienna, Dr. Bohrgasse 9, A-1030, Vienna, Austria
Dates: May 14, 2014 to May 17, 2014
I’ll note that the registration deadline is on April 21 with a payment deadline of April 30, so check in quickly if you haven’t already.
The summary from the workshop website states:
This workshop brings together the world’s experts to address the challenges of determining the entropy of biomolecular systems, either by experiment or computer simulation. Entropy is one of the main driving forces for any biological process, such as binding, folding, partitioning, and reacting. Our deficient understanding of entropy, however, means that such important processes remain controversial and only partially understood. Contributions of water, ions, cofactors, and biomolecular flexibility are actively examined but have yet to be resolved. The state-of-the-art of each entropy method will be presented and explained, highlighting its capabilities and deficiencies. This will be followed by intensive discussion on the main areas that need improving, leading to suitable actions and collaborations to address the main biological and industrial questions.
Further details on the workshop can be found on the CECAM website.
As always, details on other upcoming workshops and conferences relating to information theory and biology can be found on our ITBio Conferences/Workshops page.
To put it saucily: information theory is something like the logarithm of probability theory. In early modern times the logarithm simplified multiplication into addition which was more accessible to calculation. Today, information theory transforms many quantities of probability theory into quantities which allow simpler bookkeeping.
More seriously, information theory is one of the most universal concepts with applications in computer science, mathematics, physics, biology, chemistry and other fields. It allows a lucid and transparent analysis of many systems and provides a framework to study and compare seemingly different systems using the same language and notions.
Dr. Daniel Polani, reader in Artificial Life, University of Hertfordshire
in “Research Questions”
Not only a great quote, but an interesting way to view the subjects.
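To make the logarithm analogy a bit more concrete (my own gloss, not Polani’s): for independent events probabilities multiply, while their information contents, defined as negative log-probabilities, add:

\[
p(x,y) = p(x)\,p(y) \quad\Longrightarrow\quad -\log p(x,y) = -\log p(x) - \log p(y),
\]

so Shannon’s self-information \(I(x) = -\log p(x)\) turns the products of probability theory into sums, much as logarithms once turned multiplication into addition.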
I have known more people whose lives have been ruined by getting a Ph.D. in physics than by drugs.
Jonathan I. Katz, Professor of Physics, Washington University, St. Louis, Mo.
in “Don’t Become a Scientist!”
In the essay, Dr. Katz provides a bevy of solid reasons why one shouldn’t become a researcher. I highly recommend everyone read it and then carefully consider how we can turn these problems around.
A good many times I have been present at gatherings of people who, by the standards of the traditional culture, are thought highly educated and who have with considerable gusto been expressing their incredulity at the illiteracy of scientists. Once or twice I have been provoked and have asked the company how many of them could describe the Second Law of Thermodynamics. The response was cold: it was also negative. Yet I was asking something which is the scientific equivalent of: Have you read a work of Shakespeare’s?
I now believe that if I had asked an even simpler question — such as, What do you mean by mass, or acceleration, which is the scientific equivalent of saying, Can you read? — not more than one in ten of the highly educated would have felt that I was speaking the same language. So the great edifice of modern physics goes up, and the majority of the cleverest people in the western world have about as much insight into it as their neolithic ancestors would have had.
C.P. Snow (1905–1980) in The Two Cultures
Complexity: A Guided Tour
Melanie Mitchell
Popular Science
Oxford University Press
May 28, 2009
Hardcover
366 pages
This book provides an intimate, highly readable tour of the sciences of complexity, which seek to explain how large-scale complex, organized, and adaptive behavior can emerge from simple interactions among myriad individuals. The author, a leading complex systems scientist, describes the history of ideas, current research, and future prospects in this vital scientific effort.
This is handily one of the best, most interesting, and (to me at least) most useful popularly written science books I’ve come across. Popular science books usually bore me to tears and end up being merely pedantic for their historical backgrounds, but this one is very succinct, with some interesting viewpoints (some of which I agree with and some of which my intuition says are terribly wrong) on the overall structure presented.
For those interested in a general and easily readable high-level overview of some of the areas of research I’ve been interested in (information theory, thermodynamics, entropy, microbiology, evolution, genetics, along with computation, dynamics, chaos, complexity, genetic algorithms, cellular automata, etc.) for the past two decades, this is really a lovely and thought-provoking book.
At the start I was disappointed that there were almost no equations in the book to speak of; perhaps this is why I had purchased it when it came out and it has subsequently been sitting on my shelf for so long. The other factor that prevented me from reading it was the depth and breadth of other, more technical material I’ve read which covers the majority of topics in the book. I ultimately found myself not minding so much that there weren’t many supporting equations (aside from a few hidden in the notes at the end of the text), in large part because Dr. Mitchell does a fantastic job of pointing out some great subtleties within the various subjects that comprise the broader concept of complexity, subtleties one would generally take several years to come to on one’s own and at far greater expense of time. She provides a much stronger picture of the overall subjects covered, and this far outweighed the lack of specificity. I honestly wish I had read the book when it was released; it may have helped me to be more specific in my own research. Fortunately she does bring up several areas I will need to delve more deeply into, and she raised several questions which will significantly inform my future work.
In general, I wish there were more references I hadn’t read or been aware of yet, but towards the end there were a handful of topics relating to fractals, chaos, computer science, and cellular automata which I have been either ignorant of or which are further down my reading lists and may need to move closer to the top. I look forward to delving into many of these shortly. As a simple example, I’ve seen Zipf’s law separately from the perspectives of information theory, linguistics, and even evolution, but this is the first time I’ve seen it related to power laws and fractals.
I definitely appreciated that Dr. Mitchell took the time to point out her own personal feelings on several topics, and more so that she explicitly flagged them as her own gut instincts instead of mentioning them in passing as if they were provable science, which is what far too many other authors would likely have done. There are many viewpoints she takes which I certainly don’t agree with, but I suspect that it’s because I’m coming at things from the viewpoint of an electrical engineer with a stronger background in information theory and microbiology, while hers is closer to that of computer science. She does mention that her undergraduate background was in mathematics, but I’m curious what areas she specifically studied, to better understand her particular viewpoints.
Her final chapter, looking at some of the pros and cons of the topic(s), was very welcome, particularly in light of previous philosophic attempts like cybernetics and general systems theory, which I (also) think failed because of their lack of specificity. These caveats certainly help to place the scientific philosophy of complexity into a much larger context. I heartily agree with her viewpoint (and that of others) that there needs to be a more rigorous mathematical theory underpinning the overall effort. I’m sure we’re all wondering “Where is our Newton?” or, to use her clever aphorism, that we’re “waiting for Carnot.” (Sounds like it should be a Tom Stoppard play title, doesn’t it?)
I might question her brief inclusion of her own Ph.D. thesis work in the text, but it did actually provide a nice specific and self-contained example within the broader context and also helped to tie several of the chapters together.
My one slight criticism of the work would be the lack of better footnoting within the text. Though many feel that footnote numbers within the text, or notes at the bottom of the pages, detract from the “flow” of the work, I found myself wishing that she had included them here, particularly as I’m one of the few who actually cares about the footnotes and wants to know the specific references as I read. I hope that Oxford eventually publishes an e-book version that includes cross-linked footnotes for the benefit of others.
I can heartily recommend this book to any fan of science, but I would specifically recommend it to any undergraduate science or engineering major who is unsure of what they’d like to study and might need some interesting areas to explore. I will mention that one of the tough parts of the concept of complexity is that it is so broad and general that it encompasses over a dozen other fields of study, each of which one could earn a Ph.D. in without knowing the full depth of even one of them, much less all of them. The book is so well written that I’d even recommend it to senior researchers in any of the above-mentioned fields, as it is sure to provide not only an excellent overview and history of each, but also to raise questions and thoughts that they’ll want to include in their future research in their own sub-areas of expertise.
As an electrical engineer (in the subfields of information theory and molecular biology), I have to say that I’m very intrigued by the articles (1, 2) that Marc Parry has written for the Chronicle in the past few weeks on the subjects of quantitative history, cliometrics/cliodynamics, or what I might term Big History (following the tradition of David Christian; I was initially turned onto it by a Chronicle article). I have lately, coincidentally, been reading Steven Pinker’s book “The Better Angels of Our Nature” as well as Daniel Kahneman’s “Thinking, Fast and Slow”. (I’ll also mention that I’m a general fan of the work of Jared Diamond and Matt Ridley, who impinge on these topics as well.)
I’m sure that all of these researchers are onto something in terms of trying to better quantify our historical perspectives by using science and applying it to history. I think the process might be likened to the ways in which methods of computed tomography, P.E.T., S.P.E.C.T., et al. have been applied to psychology since the late ’70s to create the field of cognitive neuropsychology, which has now grown much closer to the more concrete areas of neurophysiology within biology, chemistry, and medicine.
I can see both sides of the “controversy” mentioned in the articles as well as in the comments on all of them, but I have a very visceral gut feeling that the differences can be ironed out over time. I say this as areas like behavioral economics, which have grown out of the psychology work mentioned in Kahneman’s book, become more concrete. The data available for application to history will be much more useful as people’s psychological interactions with their surroundings are better understood. People in general are exceptionally poor at extrapolating statistical knowledge of the world around them and putting it to its best use. For example, although one can make an accurate calculation of the time-value of money, most people who know how won’t use it to determine the best way of taking a large lottery payout (either a lump sum or payments over time), and this doesn’t even take into consideration the phenomenal odds against even playing the lottery in the first place. Kahneman’s System 1 and System 2 structures, combined with more historical data and analysis of the two together, may be a far better method than either historians’ previous attempts or the quantitative method alone. Put into mathematical terms, it’s much more likely that human interactions follow smaller local min-max curves on a limited horizon, but do not necessarily follow the global maxima and minima currently being viewed at the larger scales of big history. We’ll need to do a better job of sifting through the data and coming up with a better interpretation of it on the correct historical scales for the problem at hand.
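To make the lottery arithmetic concrete (a hypothetical illustration with made-up numbers, not figures from the articles): the present value of an annuity of \(N\) equal payments \(C\) at discount rate \(r\) is

\[
PV = \sum_{t=1}^{N} \frac{C}{(1+r)^{t}} = C\,\frac{1-(1+r)^{-N}}{r},
\]

so thirty annual payments of \$1 million discounted at 5% are worth roughly \$15.4 million today, which is the figure to weigh against a lump-sum offer (before taxes and the behavioral quirks Kahneman describes).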
Perhaps, by analogy, we might look at this disconnect between the two camps as the same type of disconnect seen in the areas of Newtonian and quantum physics. They’re both interlinked somehow and do a generally good job of providing accurate viewpoints and predictions of their own sub-areas, but haven’t been put together coherently into one larger catch-all theory encompassing both. Without the encouragement of work in the quantitative areas of history, we’ll certainly be at a great disadvantage.