That's right folks, this is not a science post. After 45 posts about information, quanta, intelligence, and what not, a how-to guide? Well, I only have one blog, and I didn't know where else to put this stuff, which I think could be helpful to others, because I've done this several times and learned a bunch. So now you get to read about how to move your library of precious books from one house to another.
I like your method and did much the same myself this past September. The smallest “book box” one can find is certainly the key.
One thing you’re missing, at least in several of the photographs, that would help with both general shelf wear and packing/moving, is having all of your dust jackets covered with book jacket covers. This will protect your dust jackets from wear and tear and help increase their long-term value, particularly for rarer first editions.
I notice that some of your collection likely already has these, à la the Heinlein, though it’s obvious in that case that a bookseller likely jacketed it far too late to protect the pristine original. At least it’s protected from further wear. If you think it’s worth the time and protection, it may be worth doing when you’re unpacking and reshelving them on the other end.
Brodart is one of the larger sellers of dust jacket covers and they make a huge variety of shapes, sizes, and types. I’ve found that their Advantage I covers are pretty solid and versatile for most of the book sizes you’ve got. Though fair warning: you can go down the rabbit hole and lose a few hours researching dust cover materials and archival types. In the end you want to look for something that covers the jacket, but doesn’t stick to it. This will allow you to replace the jacket cover with a new one if necessary without causing damage to the dust jacket itself.
If our existence can succinctly be described as "information that can replicate itself," the immediate follow-up question is, "Where did this information come from?"
From an information perspective, only the first step in life is difficult. The rest is just a matter of time.
Through decades of work by legions of scientists, we now know that the process of Darwinian evolution tends to lead to an increase in the information coded in genes. That this must happen on average is not difficult to see. Imagine I start out with a genome encoding n bits of information. In an evolutionary process, mutations occur on the many representatives of this information in a population. The mutations can change the amount of information, or they can leave the information unchanged. If the information changes, it can increase or decrease. But very different fates befall those two different changes. A mutation that causes a decrease in information will generally lead to lower fitness, as the information stored in our genes is used to build the organism and survive. If you know less than your competitors about how to do this, you are unlikely to thrive as well as they do. If, on the other hand, you mutate towards more information, meaning better prediction, you are likely to use that information to have an edge in survival.
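The gist of this argument is easy to see in a toy simulation. The sketch below is my own illustration, not Adami's code: a population of bit strings evolves toward a fixed "environment" string, and the number of correctly set bits, a crude stand-in for stored information, ratchets upward under mutation and selection. The names and parameters (GENOME_LEN, MU, and so on) are invented for the example.

```python
import random

# Toy model: "information" is counted as the number of bits that correctly
# predict a fixed environment string. Mutation plus selection tends to
# increase the population average, as argued above.

GENOME_LEN = 64     # bits per genome (hypothetical parameter)
POP_SIZE = 200      # number of individuals
MU = 0.01           # per-bit mutation probability
GENERATIONS = 201

ENVIRONMENT = [random.randint(0, 1) for _ in range(GENOME_LEN)]

def information(genome):
    """Bits of the environment this genome 'knows' (i.e., matches)."""
    return sum(g == e for g, e in zip(genome, ENVIRONMENT))

def mutate(genome):
    """Flip each bit independently with probability MU."""
    return [b ^ 1 if random.random() < MU else b for b in genome]

# Start from random genomes: on average they know half the bits by chance.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Fitness-proportional selection: better-informed genomes leave more offspring.
    weights = [1 + information(g) for g in population]
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(p) for p in parents]
    if gen % 40 == 0:
        avg = sum(information(g) for g in population) / POP_SIZE
        print(f"generation {gen:3d}: average information ~ {avg:.1f} of {GENOME_LEN} bits")
```

Run it with different mutation rates and the same qualitative picture holds: decreases in stored information are selected against, so the average drifts upward until mutation and selection balance.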
There are some plants with huge amounts of DNA compared to their “peers”; perhaps these would make interesting test cases for experiments along these lines?
The passing of the great physicist Stephen Hawking today at the age of 76 fills me with sadness for many different reasons. On the one hand, it was inspiring to witness that, seemingly, the power of will and intellect can hold such a serious illness at bay for so long. On the other hand, I am also sad that I never got to talk to him, and perhaps explain to him my take on his great body of work.
Information is a precise concept that can be defined mathematically, but its relationship to what we call "knowledge" is not always made clear. Furthermore, the concepts "entropy" and "information", while deeply related, are distinct and must be used with care, something that is not always achieved in the literature. In this elementary introduction, the concepts of entropy and information are laid out one by one, explained intuitively, but defined rigorously. I argue that a proper understanding of information in terms of prediction is key to a number of disciplines beyond engineering, such as physics and biology.
Comments: 19 pages, 2 figures. To appear in Philosophical Transactions of the Royal Society A
Subjects: Adaptation and Self-Organizing Systems (nlin.AO); Information Theory (cs.IT); Biological Physics (physics.bio-ph); Quantitative Methods (q-bio.QM)
Cite as: arXiv:1601.06176 [nlin.AO] (or arXiv:1601.06176v1 [nlin.AO] for this version)
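For anyone who wants the bare formulas behind the distinction the abstract draws, the standard Shannon definitions are sketched below. This is only a minimal reminder in the usual notation; the paper itself builds these concepts up far more carefully and intuitively.

```latex
% Entropy: uncertainty about a random variable X (what we do not yet know)
H(X) = -\sum_{x} p(x) \log_2 p(x)

% Conditional entropy: the uncertainty about X that remains once Y is known
H(X \mid Y) = -\sum_{x,y} p(x,y) \log_2 p(x \mid y)

% Information: the reduction in uncertainty about X provided by Y,
% i.e., how much better we can predict X once we know Y
I(X:Y) = H(X) - H(X \mid Y)
```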
When it was published, I read Kevin Hartnett’s article and interview with Christoph Adami, The Information Theory of Life, in Quanta Magazine. I recently revisited it, read through the commentary, and stumbled upon an interesting quote relating to the history of information in biology:
For those interested in reading more on this historical tidbit, I’ve dug up a copy of the primary Forsdyke reference which first appeared on arXiv (prior to its ultimate publication in History of Psychiatry [.pdf]):
Abstract: Today’s ‘theory of mind’ (ToM) concept is rooted in the distinction of nineteenth century philosopher William Clifford between ‘objects’ that can be directly perceived, and ‘ejects,’ such as the mind of another person, which are inferred from one’s subjective knowledge of one’s own mind. A founder, with Charles Darwin, of the discipline of comparative psychology, George Romanes considered the minds of animals as ejects, an idea that could be generalized to ‘society as eject’ and, ultimately, ‘the world as an eject’ – mind in the universe. Yet, Romanes and Clifford only vaguely connected mind with the abstraction we call ‘information,’ which needs ‘a vehicle of symbols’ – a material transporting medium. However, Samuel Butler was able to address, in informational terms depleted of theological trappings, both organic evolution and mind in the universe. This view harmonizes with insights arising from modern DNA research, the relative immortality of ‘selfish’ genes, and some startling recent developments in brain research.
Comments: Accepted for publication in History of Psychiatry. 31 pages including 3 footnotes. Based on a lecture given at Santa Clara University, February 28th 2014, at a Bannan Institute Symposium on ‘Science and Seeking: Rethinking the God Question in the Lab, Cosmos, and Classroom.’
The original arXiv article also referenced two lectures which are appended below:
http://www.youtube.com/watch?v=a3yNbTUCPd4
[Original Draft of this was written on December 14, 2015.]
References
[1] D. R. Forsdyke, “‘A vehicle of symbols and nothing more’. George Romanes, theory of mind, information, and Samuel Butler,” History of Psychiatry, vol. 26, no. 3, Aug. 2015 [Online]. Available: http://journals.sagepub.com/doi/abs/10.1177/0957154X14562755
The Information Theory of Life: The polymath Christoph Adami is investigating life’s origins by reimagining living things as self-perpetuating information strings.
[My comments posted to the original Facebook post follow below.]
I’m coming to this post a bit late as I’m playing catch-up, but I agree with it wholeheartedly.
In particular, applications to molecular biology and medicine have really begun to come to a boil in just the past five years. This past year looks like the start of the biggest renaissance in the application of information theory to biology since Hubert Yockey, Henry Quastler, and Robert L. Platzman’s “Symposium on Information Theory in Biology” at Gatlinburg, Tennessee in 1956.
Upcoming/recent conferences/workshops on information theory in biology include:
I’ll note in passing, for those interested, that Claude Shannon’s famous master’s thesis at MIT (in which he applied Boolean algebra to electric circuits, allowing the digital revolution to occur) and his subsequent “A Mathematical Theory of Communication” were so revolutionary that nearly everyone forgets his MIT Ph.D. thesis, “An Algebra for Theoretical Genetics,” which presaged cybernetics and the current applications of information theory to microbiology, and which is probably as seminal as Sir R. A. Fisher’s applications of statistics to science in general and biology in particular.
For those commenting on the post who were interested in a layman’s introduction to information theory, I recommend John Robinson Pierce’s An Introduction to Information Theory: Symbols, Signals and Noise (Dover has a very inexpensive edition). After this, one should take a look at Claude Shannon’s original paper. (The University of Illinois Press printing includes an excellent overview by Warren Weaver along with the paper itself.) The mathematics in the paper really isn’t too technical, and most of it should be comprehensible to advanced high school students.
For those that don’t understand the concept of entropy, I HIGHLY recommend Arieh Ben-Naim’s book Entropy Demystified: The Second Law Reduced to Plain Common Sense with Seven Simulated Games. He really does tear the concept down into its most basic form in a way I haven’t seen others come remotely close to and which even my mother can comprehend (with no mathematics at all). (I recommend this presentation to even those with Ph.D.’s in physics because it is so truly fundamental.)
For the more advanced mathematicians, physicists, and engineers, Arieh Ben-Naim does a truly spectacular job of extending E. T. Jaynes’ work on information theory and statistical mechanics and comes up with a more coherent mathematical theory to conjoin the entropy of physics/statistical mechanics with that of Shannon’s information theory in A Farewell to Entropy: Statistical Thermodynamics Based on Information.
For the advanced readers/researchers interested in more at the intersection of information theory and biology, I’ll also mention that I maintain a list of references, books, and journal articles in a Mendeley group entitled “ITBio: Information Theory, Microbiology, Evolution, and Complexity.”
How do we search for alien life if it's nothing like the life that we know? Christoph Adami shows how he uses his research into artificial life -- self-replicating computer programs -- to find a signature, a "biomarker," that is free of our preconceptions of what life is.
Adami’s work is along similar lines to some of my own research. This short video gives an intriguing look into some of the basics of how to define life so that one can recognize it when one sees it.
We describe the evolution of macromolecules as an information transmission process and apply tools from Shannon information theory to it. This allows us to isolate three independent, competing selective pressures that we term compression, transmission, and neutrality selection. The first two affect genome length: the pressure to conserve resources by compressing the code, and the pressure to acquire additional information that improves the channel, increasing the rate of information transmission into each offspring. Noisy transmission channels (replication with mutations) give rise to a third pressure that acts on the actual encoding of information; it maximizes the fraction of mutations that are neutral with respect to the phenotype. This neutrality selection has important implications for the evolution of evolvability. We demonstrate each selective pressure in experiments with digital organisms.
To be published in J. theor. Biology 222 (2003) 477-483
DOI: 10.1016/S0022-5193(03)00062-6
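A rough way to see why the third pressure matters is a back-of-the-envelope fidelity estimate. This is my own sketch, with notation that is not the paper's, and it treats sites as independent (no epistasis): with per-site mutation probability μ, genome length L, and a fraction ν of mutations that are neutral, the chance that an offspring preserves all of the parent's phenotypic information is roughly

```latex
% Sketch only (notation mine): each of the L sites is either unmutated
% (probability 1 - mu) or hit by a mutation that happens to be neutral
% (probability mu * nu), so
P_{\text{faithful}} \approx \bigl[(1-\mu) + \mu\,\nu\bigr]^{L}
% For fixed mu and L, raising the neutral fraction nu raises P_faithful:
% encoding the same information so that more mutations are neutral improves
% the effective fidelity of the replication channel, which is what
% neutrality selection rewards.
```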