This is a fantastic book which, for the majority of people, I’d give a five-star review. For my own purposes, however, I was expecting far more on the theoretical side of information theory and statistical mechanics as applied to microbiology, and on that count it fell short, so I’m giving it three stars from a purely personal perspective.
I do wish that someone had placed it in my hands and forced me to read it when I was a freshman in college entering the study of biomedical and electrical engineering. It is a far more impressive book at that level, and for those in the general public who are interested in the general history and philosophy of science surrounding these topics. The general reader may be somewhat intimidated by a small amount of mathematics in chapter 4, but there is really no loss of continuity in skimming through most of it. For those looking for a bit more rigor, Avery provides some additional details in appendix A, but for the specialist the presentation is sorely lacking.
The book opens with a facile but acceptable overview of the history of the theory of evolution, where most other texts would simply begin with Darwin’s work and completely skip the important philosophical and scientific contributions of Aristotle, Averroes, Condorcet, Linnaeus, Erasmus Darwin, and Lamarck, or the debates between Cuvier and St. Hilaire.
For me, the meat of the book was chapters 3-5 and appendix A, which collectively covered molecular biology, evolution, statistical mechanics, and a bit of information theory, albeit from a very big-picture point of view. Unfortunately, the rigor of the presentation and the underlying mathematics were skimmed over all too quickly for me to gain what I had hoped from the text. On the other hand, the individual sections of “suggestions for further reading” throughout the book seem well researched and offer an acceptable launching pad for delving into topics in the places where they are covered more thoroughly.
The final several chapters become more of an overview of the philosophy surrounding cultural evolution and information technology, topics which are covered and discussed much better in James Gleick’s recent book The Information.
Overall, Avery lays out a well-organized outline of a broad array of subjects and covers them all fairly well in an easy-to-read and engaging style.
We’re just past mid-summer, which means that most professors have just placed their book orders with bookstores for their fall courses, if they hadn’t already done so months ago. Enterprising students are either looking online to see what those fall textbooks will be, or contacting their professors for booklists so they can begin pre-reading the material.
The Chronicle of Higher Education’s ProfHacker blog recently published an article by Erin Templeton entitled “Read Ahead to Get Ahead? Not so Fast,” in which she argues that reading ahead might not be such a good idea. I certainly understand the rationale for withholding a reading list for the reasons she mentions, particularly for fiction classes, though I would personally tend to follow her spectacular advice given in the last paragraph. Unfortunately, for the broader topic of textbooks, I think it’s disingenuous to take such a narrow view, as fiction (and similar) classes are a small segment of the market. If nothing else, the headline certainly makes for excellent link-bait, as the blogosphere would define it.
From the broader perspective, it is generally a good idea to get copies of the reading list early and get a jump start on the material. But more than this, there is actually a better way of approaching the idea of textbooks, particularly for the dedicated student.
It’s more than once been my experience that the professor chooses the worst text available for a particular course: perhaps because she doesn’t care, because it was the cheapest, because she liked the textbook salesperson, because it’s the “standard” text used by everyone in the field despite its obvious flaws, because it’s the legacy text prescribed by the department, because it’s the text she used in graduate school, because she wrote it, or simply because the deadline for ordering from the bookstore was looming and she wanted the task out of the way. Maybe she actually put a great deal of work and research into choosing the book six years ago, but she hasn’t compared any texts since then; meanwhile, two new books have come onto the market, her previous second choice has been significantly updated, and any of them may be better choices now.
It was once the case that the first job a student faced was doing some research to choose their own textbook! Sadly — especially as most courses have dozens of excellent potential texts available for use — this practice has long since disappeared. How can this travesty be remedied?
The first step is realizing that when the course guide says that a book is “required” it really means that it’s recommended. Occasionally, for some courses or in-class work (think literature classes where everyone is reading the same text because absolutely no alternates are available), actually having the required text may be very beneficial, but more often than not, not having the particular text really isn’t a big issue. One can always borrow a classmate’s text for a moment or consult a copy from a local library or from the library reserves as most colleges put their required textbooks aside for just such a use.
When taking a course myself, I’ll visit the library and local bookstores, and even browse online, pulling every text I can get my hands on as well as some supplemental texts on the topic. I’ll cull through recommended reading lists for similar courses at other universities. Then I’ll spend a day or two browsing through them to judge their general level of sophistication, the soundness of their didactic presentation, the amount of information they contain, which other texts they cite, whether there are excessive typos, whether they are well edited, whether the graphs, charts, or diagrams assist in learning, and whether the third edition is really enough better than the second to justify the eighty-dollar price differential, along with a variety of other criteria depending on the text, the class, and the level of difficulty. In short, I do what I would hope any professor would do herself, as one can’t always trust that they’ve done their own homework.
Naturally I’m not able to do this research from the same perspective as the professor, and this is something I take into account when choosing my own textbook. More often than not, professors are thrilled to engage in a discussion about the available textbooks, what they like and dislike about each, and which alternates might be more suitable for individual students depending on what they hope to get out of the class. But doing this research certainly gives me a much broader perspective on what I’m about to learn: What are the general topics? What are the differing perspectives? What do alternate presentations look like? What might I be missing? How do the tables of contents differ? How has the level of the material progressed in the past decade or the last century? Finally, I choose my own textbook for personal learning throughout the semester. I may occasionally supplement it with those I’ve researched or with the one recommended (a.k.a. “required”) by the professor, or I may read library reserve copies or take the requisite homework problems and questions from them. I find that doing this type of research greatly enhances what I’m about to learn and is far more useful than simply taking the required text and bargain-hunting for the best price among five online retailers. In fact, one might argue that forcing students to choose their own textbooks will not only help draw them into the topic, but will also tend to enhance their ability to think, rationalize, and make better decisions, not only as it relates to the coursework at hand but later on in life itself.
Often textbooks will cover things from drastically different perspectives. As a simple example, let’s take the topic of statistics. There are dozens of broad-based statistics texts which try to be everything for everybody, but what if, as a student, I know I’m more interested in a directed area of application for my statistics study? I could easily find several textbooks geared specifically toward biology, economics, business, electrical engineering, or even psychology. Even within the subcategory of electrical engineering there are probability and statistics books aimed at the beginner, at the more advanced student, and even texts geared very specifically toward the budding information theorist. Perhaps as a student I might be better off using texts from writers like Pfeiffer, Leon-Garcia, or one of Rényi’s textbooks instead of a more broadly based engineering text like that of Walpole, Myers, Myers, and Ye? And even within this very small selection of four books there is a fairly broad range of presentations.
I think it’s entirely likely that a student studying a given topic will be much better motivated if she’s engaged by the range of applications and subtopics that appeal to her interests and future studies than if she’s forced into using one of the more generic textbooks which try to cover 20 different applications. Naturally I’ll agree that having exposure to these other topics can be useful within the context of a broader liberal arts setting, but won’t the student who’s compared 20 different textbooks have naturally absorbed some of this in the process, or get it from the professor’s lectures on the subject?
For the student, doing this type of choose-your-own-textbook research also has the lovely side effect of showing them where they stand in a particular subject. If they need remedial help, they’re already aware of what books they can turn to. Or, alternately, if they’re bored, they can jump ahead or use an alternate and more advanced text. The enterprising student may realize that the professor requires text A, but uses text B to draw from for lectures, and text C for formulating (often read: stealing) quiz and test material. Perhaps while using an alternate text they’ll become aware of subtopics and applications to which they might not have otherwise been privy.
Finally and fortuitously, it doesn’t take more than a few moments to realize what wonderful and profound effects such a competitive book-choosing strategy would have on the textbook industry if it were widely adopted! I’d imagine there would be much more direct competition in the textbook market, which would almost necessitate newer and better textbooks at significantly reduced prices.
If you’re a student, I hope you’ll take the time in one of your upcoming classes to try this method and select your own “required” textbook, as well as one or two recommended texts. I’m sure you’ll not only be more engaged by the subject, but you’ll find the small amount of additional work well worth the effort. If you’re a professor, I hope you might not require a particular textbook for your next course, but might rather suggest a broad handful of interesting textbooks based on your own experience, spend 15 minutes of class time discussing them, and make the students’ first assignment choosing their own textbook (and possibly asking them afterward why they chose it).
John Battelle recently posted a review of James Gleick’s latest book, The Information: A History, A Theory, A Flood. It reminds me that I find it almost laughable when the vast majority of the technology press and the digerati bloviate about their beats when, at root, they know almost nothing about how technology truly works or about the mathematical and theoretical underpinnings of what is happening; even worse, they don’t seem to really care.
I’ve seen hundreds of reviews and thousands of mentions of Steven Levy’s book In the Plex: How Google Thinks, Works, and Shapes Our Lives in the past few months (in fact, Battelle reviewed it just before Gleick’s book), but I’ve seen few, if any, reviews of Gleick’s book, which I honestly think is a much more worthwhile read about what is going on in the world and has farther-reaching implications about where we are headed.
A BIG tip of my hat to John for his efforts in reading Gleick, posting his commentary, and continuing to push the boundary further by inviting Gleick to speak at the Web 2.0 Summit in the fall. I hope his efforts will bring the topic to the much larger tech community. I further hope he and others might take the time to read Claude Shannon’s original paper [.pdf download], and for those further interested in the concept of thermodynamic entropy, I can recommend Andre Thess’s text The Entropy Principle: Thermodynamics for the Unsatisfied, which I’ve recently discovered and which I think does a good and logically consistent job of defining the concept at a level accessible to the general public.
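For readers who do dive into Shannon’s paper, its central quantity — the entropy of an information source — is surprisingly easy to compute for a simple discrete source. Here’s a minimal sketch in Python; the function name and example distributions are my own, purely for illustration, not anything from Shannon’s paper:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution, in bits."""
    # Terms with p == 0 contribute nothing (lim p->0 of p*log p is 0), so skip them.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally unpredictable: exactly one bit per flip.
print(shannon_entropy([0.5, 0.5]))             # 1.0
# A heavily biased coin carries far less information per flip.
print(round(shannon_entropy([0.9, 0.1]), 3))   # 0.469
```

The less predictable the source, the higher the entropy; this single idea is the seed of everything from data compression to the channel-capacity results Gleick surveys.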
Matt Ridley’s The Rational Optimist: How Prosperity Evolves is going to be my new bible. This is certainly bound to be one of the most influential books I’ve read since Jared Diamond’s Guns, Germs, and Steel — what a spectacular thesis!
I am now going to recommend it to everyone I meet and have already begun proselytizing its thesis. It is certainly worth second, third, and successive rereads given the broad array of topics it covers in such a cohesive way. Simply and truly SPECTACULAR!
Dare to be an optimist…
For those interested in a short tangential video related to the broader thesis, take a look at Matt Ridley’s related TEDx talk:
Nominated for quote of the week, which I encountered while reading Matt Ridley’s The Rational Optimist:
Charles Darwin’s Library from the Biodiversity Heritage Library
2011 Andrew Viterbi Lecture
Ming Hsieh Department of Electrical Engineering
“Adventures in Coding Theory”
Professor Elwyn Berlekamp
University of California, Berkeley
Gerontology Auditorium, Thursday, March 3, 4:30 to 5:30 p.m.
The inventors of error-correcting codes were initially motivated by problems in communications engineering. But coding theory has since also influenced several other fields, including memory technology, theoretical computer science, game theory, portfolio theory, and symbolic manipulation. This talk will recall some forays into these subjects.
Elwyn Berlekamp has been professor of mathematics and of electrical engineering and computer science at UC Berkeley since 1971; halftime since 1983, and Emeritus since 2002. He also has been active in several small companies in the sectors of computers-communications and finance. He is now chairman of Berkeley Quantitative LP, a small money-management company. He was chairman of the Board of Trustees of MSRI from 1994-1998, and was at the International Computer Science Institute from 2001-2003. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences. Berlekamp has 12 patented inventions, some of which were co-authored with USC Professor Emeritus Lloyd Welch. Some of Berlekamp’s algorithms for decoding Reed-Solomon codes are widely used on compact discs; others are NASA standards for deep space communications. He has more than 100 publications, including two books on algebraic coding theory and seven books on the mathematical theory of combinatorial games, including the popular Dots-and-Boxes Game: Sophisticated Child’s Play.
I wish I could be at this lecture in person today, but I’ll have to live with the live webcast.
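As a tiny taste of what the abstract’s “error-correcting codes” actually do, here is a sketch of the classic Hamming(7,4) code in Python. To be clear, this is my own illustrative toy (the function names are invented), a far simpler ancestor of the Reed-Solomon codes that Berlekamp’s decoding algorithms handle:

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4            # parity over positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4            # parity over positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4            # parity over positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Locate and flip any single corrupted bit using the parity syndrome."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # binary position of the error; 0 means clean
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[2] ^= 1                     # flip one bit "in transit"
assert hamming74_correct(corrupted) == word
```

Three parity bits protect four data bits, so any single flipped bit among the seven can be located and repaired. This is the same species of guarantee, vastly generalized, that makes scratched compact discs and noisy deep-space channels usable.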
A new study by Noam Sobel of the Olfaction Research Group at the Weizmann Institute of Science in Rehovot, Israel, and colleagues, reported in the journal Science this week, found that men who sniffed the tears of crying women produced less testosterone and found female faces less arousing.
Previous studies in animals such as mice and mole rats have shown that tears convey important chemical messages which are used to attract or repel others of the same species, providing good evidence for an interesting means of higher-level chemical communication. These previous studies also incidentally show that “emotional” tears are chemically distinct from “eye-protecting” tears.
Scientific American’s “60 Second Science” (via link or listen below) podcast has a good audio overview of the study for those without the time to read the paper.
In press reports, Adam Anderson, a University of Toronto psychologist who was not involved with the study, posited that the results may imply that “tears have some influence on sexual selection, and that’s not something we associate with sadness.” He continued, “It could be a way of warding off unwanted advances.”
This study provides a new hypothesis for the evolution of crying in humans. (Now if only we could find some better reasons for laughter…)
The take-home message may be that guys should not take their dates out to weepy chick flicks, or, alternately, that women reluctantly accepting “pity dates” should drag their suitors to exactly these types of testosterone-damping films.
On Sunday, the Los Angeles Times printed a story about the future of reading entitled “Book publishers see their role as gatekeepers shrink.”
The article covers most of the story fairly well, but leaves out some fundamental pieces of the business picture. It discusses a few particular cases of some very well-known authors in the publishing world, including the likes of Stephen King, Seth Godin, Paulo Coelho, Greg Bear, and Neal Stephenson, and how new digital publishing platforms are slowly changing the publishing business.
Indeed, many authors are bypassing traditional publishing routes and self-publishing their works directly online, and many are taking a much larger slice of the financial rewards in doing so.
The article, however, completely fails to mention or address how new online methods will handle the editorial and publicity functions differently than they’re handled now, and the publishing business, both now and in the future, relies significantly on both.
It is interesting, and not a little ironic, to note that the newspaper business in which this particular article finds its outlet has changed possibly more drastically than the book publishing business. If reading the article online, one is forced to click through four different pages, on each of which a minimum of five different (and, in my opinion, terrifically intrusive) ads appear. Without getting into the details of the subject of advertising, it is even more interesting that many of these ads are served up by Google Ads based on keywords, so that three just on the first page were specifically publishing related.
Two of the ads were soliciting people to self-publish their own work. One touts how easy it is to publish, while the other glosses over the publicity portion with a glib statement offering an additional “555 Book Promotion Tips”! (I’m personally left wondering whether there can possibly be so many book promotion tips.)
Following the link in the third ad on the first page to its advertised site, one discovers that it states:
Although I find the portion about “baby steps” particularly entertaining, the first thing I’ll note is that the typical person is likely more readily equipped to distribute and market a children’s book than to craft one. Sadly, however, there are very few people capable of any of these tasks at a particularly high level, which is why there are relatively few new children’s books on the market each year and why the majority of sales are older, tried-and-true titles.
I hope the average reader sees the above come-on as the twenty-first-century equivalent of the snake-oil salesman tempting the typical wanna-be author to call about a so-called “Free” Children’s Book Publishing Guide. I’m sure recipients of the guide end up paying the publisher to get their book out the door, and more likely than not it doesn’t end up in mainstream brick-and-mortar establishments like Barnes & Noble or Borders, but only sells a handful of copies through easy-to-reach online venues like Amazon. I might suggest that the majority of sales will come directly from the author and his or her friends and family. I would further argue that neither now nor in the immediate or even distant future will many aspiring authors self-publish much of anything and manage to make even a modest living by doing so.
Now, of course, all of the above raises the question of why exactly people need or want a traditional publisher. What role or function do publishers actually perform for the business, and why might they still be around in the coming future?
The typical publishing houses perform three primary functions: filtering/editing material, distributing material, and promoting material. The current significant threat to the publishing business from online retailers like Amazon.com, Barnes & Noble, Borders, and even the recently launched Google Books is the distribution platforms themselves. It certainly doesn’t take much to strike low cost deals with online retailers to distribute books, and even less so when they’re distributing them as e-books which cuts out the most significant cost in the business — that of the paper to print them on. This leaves traditional publishing houses with two remaining functions: filtering/editing material and the promotion/publicity function.
The Los Angeles Times article certainly doesn’t state it, but everyone you meet on the street could tell you that writers like Stephen King don’t really need any more publicity than what they’ve got already. Their fan followings are so significantly large that they only need to tell two people online that they’ve got a new book and they’ll sell thousands of copies of any book they release. In fact, I might wager that Stephen King could release ten horrific (don’t mistake this for horror) novels before their low quality would likely begin to significantly erode his sales numbers. If he’s releasing them on Amazon.com and keeping 70% of the income compared to the average 6-18% most writers are receiving, he’s in phenomenally good shape. (I’m sure given his status and track record in the publishing business, he’s receiving a much larger portion of his book sales from his publisher than 18% by the way; I’d also be willing to bet if he approached Amazon directly, he could get a better distribution deal than the currently offered 70/30 split.)
What will eventually sway the majority of the industry is when completely unknown new writers can publish into these electronic platforms and receive the marketing push they need to become the next Stephen King or Neal Stephenson. At the moment, none of the major e-book publishing platforms are giving much, if any, of this type of publicity to any of their new authors, and many aren’t even giving it to the major writers. Thus, currently, even the major writers are relying primarily on their traditional publishers for publicity to push their sales.
I will admit that when 80% of all readers are online, consuming their reading material in e-book format, and utilizing the full support of social media and the cross-collateralization of the best portion of their word-of-mouth, perhaps authors won’t need as much PR help. But until that day arrives, the platforms will need to ramp up their publicity efforts significantly. Financially, one wonders what a platform like Amazon.com will charge for a front-and-center advertisement to push sales of a new best-seller. Will they be looking for a 50/50 split on those sales? Exclusivity in their channel? This is where the business will become even more dicey. Suddenly authors who think they’re shedding the chains of their current publishers will be shackling themselves with newer and more significant manacles and leg irons.
The last piece of the business that needs to be subsumed is the editorial portion of the manufacturing process. Agents and editors serve a significant role in that they filter out thousands and thousands of terrifically unreadable books. In fact, one might argue that even now they’re letting far too many marginal books through the system and into the market.
If we consider the millions of books housed in the Library of Congress and their general circulation, one realizes that a tenth of a percent or less of those books receive all the attention. Certainly classics like those of William Shakespeare and Charles Dickens are more widely read than the works of the millions of nearly unknown writers who take up just as much shelf space in that esteemed library.
Most houses publish on the order of ten to a hundred titles per year, but they rely heavily on only one or two of them being major hits, not just to cover the cost of the total failures but to provide the company with some semblance of profit. (This model is not unlike the way the feature film business works in Hollywood: if you throw enough spaghetti, something is bound to stick.)
The question then becomes: “how does the e-publishing business accomplish this editing and publicity in a better and less expensive way?” This question needs to be looked at from a pre-publication as well as a post-publication perspective.
From the pre-publication viewpoint, the Los Angeles Times article interestingly mentions that many authors appreciate having a “conversation” with their readers and allowing it to inform their work. However, creators of the stature of Stephen King cannot possibly take in and consume criticism from their thousands of fans in any reasonable way, to say nothing of the detriment to their output if they were forced to read and deal with all of that criticism and feedback. Even smaller-stature authors often find it overwhelming to take in criticism from their agents, editors, and even a small handful of close friends, family, and colleagues. A quick look at the acknowledgment sections of a few dozen books generally reveals fewer than ten people being thanked, much less hundreds of names from the general reading public – people the author neither knows well nor trusts implicitly.
From the post-publication perspective, both printing on demand and e-book formats excise one of the largest costs of the supply chain management portions of the publishing world, but staff costs and salary are certainly very close in line after them. One might argue that social media is the answer here and we can rely on services like LibraryThing, GoodReads, and others to supply this editorial/publicity process and eventually broad sampling and positive and negative reviews will win the day to cross good, but unknown writers into the popular consciousness. This may sound reasonable on the surface, but take a look at similar large recommendation services in the social media space like Yelp. These services already have hundreds of thousands of users, but they’re not nearly as useful as they need to be from a recommendation perspective and they’re not terrifically reliable in that they’re very often easily gamed. (Consider the number of positive reviews that appear on Yelp that are most likely written by the proprietors of the establishments themselves.) This outlet for editorial certainly has the potential to improve in the coming years, but it will still be quite some time before it has the possibility of totally ousting the current editorial and filtering regime.
From a mathematical and game-theoretical perspective, one must also consider how many people are going to subject themselves, willingly and for free, to some really bad reading material and then bother to write either a good or bad review of their experience. This is particularly true when the vast majority of readers are more than content to ride the coattails of the “suckers” who do the majority of the review work.
There are certainly a number of other factors at play in the publishing business as it changes form, but those discussed above are significant in its continuing evolution. Given the state of technology and its speed, if people feel that the traditional publishing world will collapse, then we should take its evolution to the nth degree. By that argument, even platforms like Amazon and Google Books will eventually need to narrow their financial split with authors down to infinitesimal margins, as authors should be able to control every portion of their work without any interlopers taking a cut of their proceeds. We’ll leave the discussion of whether all of this fits the concept of the tragedy of the commons for a future date.
The Los Angeles Times published an online article entitled “Barnes & Noble says e-books outsell physical books online.” While I understand that this is a quiet holiday week, the Times should be doing better work than simply republishing press releases from corporations trying to garner post-holiday sales. Some of the thoughts they might have included:
“Customers bought or downloaded 1 million e-books on Christmas day alone”?
There is certainly no debating the continuous growth of the electronic book industry; even Amazon.com has said they’re selling more electronic books than physical books. The key word in the quoted sentence above is “or”. I seriously doubt a significant portion of the 1 million e-books were actually purchased on Christmas day. The real investigative journalism here would have discovered the percentage of free (primarily public domain) e-books that were downloaded versus those that were purchased.
Given that analysts estimate 2 million Nooks have been sold (the majority within the last six months, and likely the preponderance of them since Thanksgiving), this means that half of all Nook users downloaded at least one book on Christmas day. Perhaps this isn’t surprising for those who received a Nook as a holiday present and downloaded one or more e-books to test out its functionality. The real question remains: how many of these 2 million users will actually be purchasing books in e-book format six months from now?
I’d also be curious to know if the analyst estimate is 2 million units sold to consumers or 2 million shipped to retail? I would bet that it is units shipped and not sold.
I hope the Times will be doing something besides transcription (or worse: cut and paste) after the holidays.
You know that automated machine language translation is not in good shape when the editor-in-chief of the IEEE’s Signal Processing Magazine says:
Over the summer, Ars Technica and others reported about the new feature Tab Candy being built into Firefox by Aza Raskin. Essentially it’s a better graphical way of keeping “tabs” on the hundreds of tabs some of us like to keep open for our daily workflows. One can now group series of related tabs together and view them separately from other groupings. Many of us loved the feature in the early Minefield build of Firefox, but the recent release of Firefox 4.0 beta 7 includes the nearly finished and stable version of Tab Candy, which has been renamed Panorama, and it is great.
Though Panorama is brilliant, one piece of functionality it doesn’t have, and which is mentioned in the Ars Technica article, is that of “reading later.” I find, as do many, that the majority of the tabs I keep open during the day are for things I have the best intentions of reading later. Sadly, days often go by and many of these tabs remain open and unread because I simply don’t have time during the work day and don’t come back later in my free time to give them the attention they deserve. (Keeping all those tabs open also has the side effect of soaking up additional memory, a symptom which can be remedied with this helpful tip from Lifehacker.)
I’ve now got the answer for these unread stories in neglected tabs: Instapaper.com. Instapaper, the brainchild of former Tumblr exec Marco Arment, is similar to many extant bookmarking tools, but with increased functionality that makes it infinitely easier to come back and actually read those stories. Typically I use the Instapaper bookmarklet tool on a webpage with a story I want to come back to later, and it bookmarks the story for me and is configurable to allow closing that tab once done.
The unique portion of the tool is that Instapaper provides multiple ways of pulling out the bookmarked content for easy reading later. For those who are RSS fans, you can subscribe to your bookmarked stream with tools like Google Reader. But even better, the site allows one to easily download .mobi or .epub bundles of the stories that can be put onto your e-reader of choice. (I personally email copies to my Kindle 3; affiliate link.) Once this is done, I can simply and easily read all those stories I never got around to, like a daily personal newspaper, at my convenience – something I’m much more prone to do given my addiction to my Kindle, which provides a so-called “sit back” experience.
As if all this weren’t good enough, Instapaper allows you to create separate folders (along with separate requisite RSS feeds and bookmarklet tools) so that you can easily separate your newspaper articles from your tech articles, or even your communication theory research papers from your genetics articles. This allows you to take the article links in your daily Twitter feed and turn them into a personalized newspaper for easy reading on your choice of e-book reader. With the pending Christmas of the e-reader and the tablet, this is as close to perfect timing for a killer app as a developer could hope for.
The e-book reader combined with Instapaper is easily the best invention since Gutenberg’s original press.
(N.B.: One could bookmark every interesting article in the daily New York Times and read them in e-book format this way, but I would recommend using an application like Calibre for reducing the time required for doing this instead. Instapaper is best used as a custom newspaper creator.)