Workshop Announcement: Entropy and Information in Biological Systems

I rarely, if ever, reblog anything here, but this particular post from John Baez’s blog Azimuth is so on-topic that attempting to embellish it seems silly.

Entropy and Information in Biological Systems (Part 2)

John Harte, Marc Harper and I are running a workshop! Now you can apply here to attend:

Information and entropy in biological systems, National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee, Wednesday-Friday, 8-10 April 2015.

Click the link, read the stuff and scroll down to “CLICK HERE” to apply. The deadline is 12 November 2014.

Financial support for travel, meals, and lodging is available for workshop attendees who need it. We will choose among the applicants and invite 10-15 of them.

The idea
Information theory and entropy methods are becoming powerful tools in biology, from the level of individual cells, to whole ecosystems, to experimental design, model-building, and the measurement of biodiversity. The aim of this investigative workshop is to synthesize different ways of applying these concepts to help systematize and unify work in biological systems. Early attempts at “grand syntheses” often misfired, but applications of information theory and entropy to specific highly focused topics in biology have been increasingly successful. In ecology, entropy maximization methods have proven successful in predicting the distribution and abundance of species. Entropy is also widely used as a measure of biodiversity. Work on the role of information in game theory has shed new light on evolution. As a population evolves, it can be seen as gaining information about its environment. The principle of maximum entropy production has emerged as a fascinating yet controversial approach to predicting the behavior of biological systems, from individual organisms to whole ecosystems. This investigative workshop will bring together top researchers from these diverse fields to share insights and methods and address some long-standing conceptual problems.

So, here are the goals of our workshop:

  • To study the validity of the principle of Maximum Entropy Production (MEP), which states that biological systems – and indeed all open, non-equilibrium systems – act to produce entropy at the maximum rate.
  • To familiarize all the participants with applications to ecology of the MaxEnt method: choosing the probabilistic hypothesis with the highest entropy subject to the constraints of our data. We will compare MaxEnt with competing approaches and examine whether MaxEnt provides a sufficient justification for the principle of MEP.
  • To clarify relations between known characterizations of entropy, the use of entropy as a measure of biodiversity, and the use of MaxEnt methods in ecology.
  • To develop the concept of evolutionary games as “learning” processes in which information is gained over time.
  • To study the interplay between information theory and the thermodynamics of individual cells and organelles.
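The MaxEnt recipe in the goals above – choose the probability distribution with the highest entropy subject to the constraints of the data – can be made concrete with a small sketch. The example below is my own illustration (not material from the workshop): it finds the maximum-entropy distribution for a six-sided die with a constrained mean, Jaynes’s classic “Brandeis dice” setup. By the standard Lagrange-multiplier argument the solution has the exponential-family form p_i ∝ exp(λ·i), so we only need to solve for λ, which simple bisection handles:

```python
import math

def maxent_die(target_mean, lo=-10.0, hi=10.0, iters=100):
    """Maximum-entropy distribution over die faces 1..6 with a fixed mean.

    The maximizer of entropy subject to E[X] = target_mean has the form
    p_i proportional to exp(lam * i); we find lam by bisection, since the
    mean of this family increases monotonically with lam.
    """
    faces = range(1, 7)

    def mean_for(lam):
        weights = [math.exp(lam * i) for i in faces]
        z = sum(weights)
        return sum(i * w for i, w in zip(faces, weights)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid

    lam = (lo + hi) / 2
    weights = [math.exp(lam * i) for i in faces]
    z = sum(weights)
    return [w / z for w in weights]

# With the mean constrained to 4.5 the distribution tilts toward high faces;
# with mean 3.5 (the unconstrained mean) it recovers the uniform distribution.
p = maxent_die(4.5)
```

The same machinery, with species-abundance constraints in place of die faces, is the core of the ecological MaxEnt methods the workshop description refers to.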

For more details, go here.

If you’ve got colleagues who might be interested in this, please let them know. You can download a PDF suitable for printing and putting on a bulletin board by clicking on this:

The Mnemonic Major System and Gregg Shorthand Have the Same Underlying Structure!

I’ve been a proponent and user of a variety of mnemonic systems since I was about eleven years old. The two biggest and most useful, in my mind, are commonly known as the “method of loci” and the “major system.” The major system is also variously known as the phonetic number system, the phonetic mnemonic system, or Hérigone’s mnemonic system, after French mathematician and astronomer Pierre Hérigone (1580-1643), who is thought to have originated its use.

The major system generally works by converting numbers into consonant sounds and then into words by adding vowels, under the overarching principle that images (of the words) can be remembered more easily than the numbers themselves. For instance, one could memorize one’s grocery list of a hundred items by associating each numbered item with the word the major system assigns to that number. If item 22 on the list is lemons, one could translate the number 22 as “nun” within the major system and then picture a nun with lemons – perhaps a nun in full habit taking a bath in lemons, to make the image stick in one’s memory better. Then at the grocery store, when going down one’s list and arriving at number 22, one automatically translates the number to “nun,” which almost immediately conjures the image of a nun bathing in lemons and thereby recalls the item that needed to be remembered. This comes in handy particularly when one needs to remember large lists of items in and out of order.

The following generalized chart, which can be found in a host of books and websites on the topic, is fairly canonical for the overall system:

| Numeral | IPA | Associated Consonants | Mnemonic for remembering the numeral and consonant relationship |
| --- | --- | --- | --- |
| 0 | /s/ /z/ | s, z, soft c | “z” is the first letter of zero; the other letters have a similar sound |
| 1 | /t/ /d/ | t, d | t & d have one downstroke and sound similar (some variant systems include “th”) |
| 2 | /n/ | n | n has two downstrokes |
| 3 | /m/ | m | m has three downstrokes; m looks like a “3” on its side |
| 4 | /r/ | r | r is the last letter of four; 4 and R are almost mirror images of each other |
| 5 | /l/ | l | L is the Roman numeral for 50 |
| 6 | /ʃ/ /ʒ/ /tʃ/ /dʒ/ | j, sh, soft g, soft “ch” | a script j has a lower loop; g is almost a 6 rotated |
| 7 | /k/ /ɡ/ | k, hard c, hard g, hard “ch”, q, qu | capital K “contains” two sevens (some variant systems include “ng”) |
| 8 | /f/ /v/ | f, v | script f resembles a figure-8; v sounds similar (v is a voiced f) |
| 9 | /p/ /b/ | p, b | p is a mirror-image 9; b sounds similar and resembles a 9 rolled around |
| (unassigned) | vowel sounds | w, h, y | w and h are considered half-vowels; these can be used anywhere without changing a word’s number value |
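The chart above can be sketched in code. The real major system operates on sounds rather than letters (soft c is 0, soft g is 6, and so on), so the letter-based map below is a deliberate simplification of my own for illustration; it handles only the unambiguous single letters:

```python
# Simplified letter-to-digit map for the major system. The true system is
# phonetic, so digraphs and soft/hard letter distinctions are omitted here.
MAJOR = {
    "s": 0, "z": 0,
    "t": 1, "d": 1,
    "n": 2,
    "m": 3,
    "r": 4,
    "l": 5,
    "j": 6,
    "k": 7, "q": 7, "g": 7,  # g treated as hard g; soft g is really 6
    "f": 8, "v": 8,
    "p": 9, "b": 9,
}

def word_to_number(word: str) -> str:
    """Convert a word to its major-system number.

    Vowels and the half-vowels w, h, y carry no value and are skipped.
    """
    return "".join(str(MAJOR[c]) for c in word.lower() if c in MAJOR)

print(word_to_number("nun"))  # the grocery-list example above: 22
```

Running the decoder on the article’s own example, “nun” comes out as 22, and a word like “lemon” encodes 532.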

There are a variety of ways to use the major system as a code in addition to its uses in mnemonic settings.  When I was a youth, I used it to write coded messages and to encrypt a variety of things for personal use. After I had originally read Dr. Bruno Furst’s series of booklets entitled You Can Remember: A Home Study Course in Memory and Concentration 1, I had always wanted to spend some time creating an alternate method of writing using the method.  Sadly I never made the time to do the project, but yesterday I made a very interesting discovery that, to my knowledge, doesn’t seem to have been previously noticed!

My discovery began last week when I read an article in The Atlantic by journalist Dennis Hollier entitled How to Write 225 Words Per Minute with a Pen: A Lesson in the Lost Technology of Shorthand. 2 In the article, which starts off with a mention of the Livescribe pen (one of my favorite tools), Mr. Hollier outlines the use of the Gregg System of Shorthand, which was invented by John Robert Gregg in 1888. The description of the method was intriguing enough that I read a dozen additional general articles on shorthand on the internet and purchased a copy of Louis A. Leslie’s two-volume text Gregg Shorthand: Functional Method. 3

I was shocked, on page x of the front matter, just before the first page of the text, to find the following “Alphabet of Gregg Shorthand”:

Alphabet of Gregg Shorthand
Gregg Shorthand uses EXACTLY the same consonant-type breakdown of the alphabet as the major system!

Apparently I wasn’t the first to have the idea to turn the major system into a system of writing. The fact that the consonant breakdowns for the major system coincide almost directly with those for the shorthand method used by Gregg cannot be a coincidence!

The Gregg system works incredibly well precisely because the major system works so well. The biggest difference between the two systems is that Gregg utilizes a series of strokes (circles and semicircles) to indicate particular vowel sounds, which allows for better differentiation of words – something the major system doesn’t generally take into consideration. From an information-theoretic standpoint, this is almost required to make the coding from one alphabet to the other possible, but much like ancient Hebrew, leaving out the vowels doesn’t remove that much information. Gregg, also like Hebrew, uses dots and dashes above or below certain letters to indicate the precise sounds of many of its vowels.

The upside of all of this is that the major system is incredibly easy to learn and use, and from there, learning Gregg shorthand is just a hop, skip, and a jump – heck, it’s really only just a hop because the underlying structure is so similar. Naturally, as with the major system, one must commit some time to practicing it to improve speed and accuracy, but the general learning of the system is incredibly straightforward.

Because the associations between the two systems are so similar, I wasn’t too surprised to find that some of the descriptions of why certain strokes were used for certain letters were very similar to the mnemonics for why certain letters were used for certain numbers in the major system.

From Dr. Bruno Furst’s “You Can Remember!”
The mnemonic for remembering 6, 7, 8, & 9 in the major system.
From Louis Leslie’s “Gregg Shorthand: Functional Method”
The mnemonic for remembering the strokes for k and g.

One thing I have noticed in my studies on these topics is the occasional reference to the letter combinations “NG” and “NK,” and I’m curious why these are singled out in some of these systems. I have a strong suspicion that their inclusion/exclusion in various incarnations of their respective systems may be helpful in dating the evolution of these systems over time.

I’m aware that various versions of shorthand have appeared over the centuries, the first recorded being the “Tironian notes” of Marcus Tullius Tiro (103-4 BCE), who apparently used his system to write down the speeches of his master Cicero. I’m now much more curious about the point at which the concepts for shorthand and the major system crossed paths or converged. My assumption would be that it happened in the late Renaissance, but it would be nice to have the underlying references and support for such a timeline. Perhaps it was with Timothy Bright’s publication of Characterie; An Arte of Shorte, Swifte and Secrete Writing by Character (1588) 4, John Willis’s Art of Stenography (1602) 5, Edmond Willis’s An abbreviation of writing by character (1618) 6, or Thomas Shelton’s Short Writing (1626) 7? Shelton’s system was certainly very popular and well known because it was used by both Samuel Pepys and Sir Isaac Newton.

Certainly some in-depth research will tell, though if anyone has ideas, please don’t hesitate to share them in the comments.

UPDATE on 7/6/14:

I’m adding a new chart making the correspondence between the major system and Gregg Shorthand more explicit.

A chart indicating the correspondences between the major system and Gregg Shorthand.

References

1. Furst B. You Can Remember: A Home Study Course in Memory and Concentration. Markus-Campbell Co.; 1965.
2. Hollier D. How to Write 225 Words Per Minute With a Pen: A Lesson in the Lost Technology of Shorthand. The Atlantic. http://www.theatlantic.com/technology/archive/2014/06/yeah-i-still-use-shorthand-and-a-smartpen/373281/. Published 2014.
3. Leslie LA. Gregg Shorthand: Functional Method. Gregg Publishing Company; 1947.
4. Bright T. Characterie; An Arte of Shorte, Swifte and Secrete Writing by Character. 1st ed. I. Windet; reprinted by W. Holmes, Ulverstone; 1588. https://archive.org/details/characteriearteo00brig.
5. Willis J. Art of Stenography. 1602.
6. Willis E. An Abbreviation of Writing by Character. 1618.
7. Shelton T. Short Writing. 1626.

Latin Pedagogy and the Digital Humanities

I’ve long been a student of the humanities (and particularly the classics) and have recently begun reviewing my very old and decrepit knowledge of Latin. It’s been two decades since I made a significant study of classical languages, and lately (as the result of conversations with friends like Dave Harris, Jim Houser, Larry Richardson, and John Kountouris) I’ve been drawn to reviewing them in order to read a variety of classical texts in their original languages. Fortunately, in the intervening years, quite a lot has changed in the tools relating to pedagogy for language acquisition.

Jenny's Second Year Latin
A copy of Jenny’s Latin text, which I used 20 years ago and recently reacquired for the pittance of $3.25.

Internet

The biggest change in the intervening time is the spread of the internet, which supplies a broad variety of related websites with not only interesting resources for things like basic reading and writing, but even audio sources, apparently including listening to the nightly news in Latin. There are a variety of blogs on Latin as well as online courseware, podcasts, pronunciation recordings, and even free textbooks. I’ve written briefly about the RapGenius platform before, but I feel compelled to mention it as a potentially powerful resource for annotating Latin texts as well (Julius Caesar, Seneca, Ovid, Cicero, et al.). There is a paucity of these sources in a general sense in comparison with other modern languages, but given the size of the niche, there is quite a lot out there, and certainly a mountain in comparison to what existed only twenty years ago.

Software

There has also been a spread of pedagogic aids like flashcard software, including Anki and Mnemosyne, with desktop, web-based, and even mobile versions making learning available in almost any situation. The psychology and learning research behind these types of technologies has really come a long way toward helping students make the best use of their time in learning and retaining what they’ve learned in long-term memory. Simple mobile applications like Duolingo exist for a variety of languages – though one doesn’t currently exist for classical Latin (yet).

Digital Humanities

The other great change is the advancement of the digital humanities, which allows for a lot of interesting applications of knowledge acquisition. One particular example that I ran across this week was the Dickinson College Commentaries (DCC). Specifically, a handful of scholars have compiled and documented a list of the most common core vocabulary words in Latin (and in Greek) based on their frequency of appearance in extant works. This very specific data is of interest to me in relation to my work in information theory, but it also becomes a tremendously handy tool when attempting to learn and master a language. It is truly impressive that, simply by memorizing and mastering about 250 words of Latin, one can read and understand 50% of most written Latin. Further, knowledge of 1,500 Latin words will put one at the 80% level of vocabulary mastery for most texts. Mastering even a very small list of vocabulary allows one to read a large variety of texts very comfortably. I can only think of the old concept of a concordance (which was generally limited to heavily studied texts like the Bible or possibly Shakespeare), which has now been put on some serious steroids for entire cultures. Another half step and one arrives at the Google Ngram Viewer.
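The core-vocabulary claim above is really a statement about cumulative frequency coverage, and it is easy to compute from any word-frequency list. The sketch below is my own illustration using synthetic Zipf-like counts, not the actual DCC data:

```python
def words_for_coverage(freq_counts, threshold):
    """Given word frequency counts sorted most-common-first, return how many
    top-ranked words are needed to cover `threshold` (a fraction between 0
    and 1) of all running text."""
    total = sum(freq_counts)
    running = 0
    for rank, count in enumerate(freq_counts, start=1):
        running += count
        if running / total >= threshold:
            return rank
    return len(freq_counts)

# Synthetic Zipf-like frequencies: the word of rank r occurs ~1/r as often
# as the most common word. Real corpus counts would replace these.
counts = [1_000_000 // r for r in range(1, 5001)]
half = words_for_coverage(counts, 0.50)
```

Under a Zipf distribution a few dozen words already cover half the text, which is why the DCC’s 250-word core list can plausibly cover 50% of written Latin.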

The best part is that one can, with very little technical knowledge, easily download the DCC Core Latin Vocabulary (itself a huge research undertaking) and upload and share it through the Anki platform, for example, to benefit a fairly large community of other scholars, learners, and teachers. With a variety of easy-to-use tools, it may soon be that much easier to learn a language like Latin – potentially to the point that it is no longer a dead language. For those interested, you can find my version of the shared DCC Core Latin Vocabulary for Anki online; the DCC’s Chris Francese has posted details and a version for Mnemosyne already.

[Editor’s note: Anki’s web service occasionally clears decks of cards from their servers, so if you find that the Anki link to the DCC Core Latin is not working, please leave a comment below, and we’ll re-upload the deck for shared use.]

What tools and tricks do you use for language study and pedagogy?

Speed Reading on Web and Mobile

“Hi, my name is Chris, and I’m a Read-aholic.”

I’ll be the first to admit that I’m a reading junkie, but unfortunately there isn’t (yet) a 12-step program to help me. I love reading lots of different types of things across an array of platforms (books, newspapers, magazines, computer, web, phone, tablet, apps) and topics (fiction/non-fiction, and especially history, biography, economics, popular science, etc.). My biggest problem, and one others surely face, is time.

There are so many things I want to read, and far too little time to do it in.  Over the past several years, I’ve spent an almost unreasonable amount of time thinking about what I consume and (possibly more importantly) how to intelligently consume more of it. I’ve spent so much time delving into it that I’ve befriended a professor and fellow renaissance man (literally and figuratively) who gave me a personal thank you in his opening to a best-selling book entitled “The Thinking Life: How to Thrive in an Age of Distraction.”

Information Consumption

At least twice a year I look at my reading consumption and work on how to improve it, all the while trying to maintain a level of quality and usefulness in what I’m consuming and why I’m consuming it.

  • I continually subscribe to new and interesting sources.
  • I close off subscriptions to old sources that I find uninteresting, repetitive (goodbye echo chamber), and those that are (or become) generally useless.
  • I carefully monitor the huge volumes of junk email that end up in my inbox and trim down on the useless material that I never seem to read, so that I’ll have more time to focus on what is important.
  • I’ve taken up listening to audiobooks to better utilize my time in the car while commuting.
  • I’ve generally quit reading large swaths of social media for their general inability to uncover truly interesting sources.
  • I’ve used some portions of social media to find other interesting people collating and curating areas I find interesting, but which I don’t have the time to read through everything myself.  Why waste my time reading hundreds of articles, when I can rely on a small handful of people to read them and filter out the best of the best for myself? Twitter lists in particular are an awesome thing.
  • I’ve given up on things like “listicles” or stories from internet click farm sources like BuzzFeed which can have some truly excellent linkbait-type headlines, but I always felt like I’ve completely wasted my time clicking through to them.

A New Solution

About six months ago in the mountain of tech journalism I love reading, I ran across a site launch notice about a tech start-up called Spritz which promised a radically different solution for the other side of the coin relating to my reading problem: speeding the entire process up!  Unfortunately, despite a few intriguing samples at the time (and some great details on the problem and their solution), they weren’t actually delivering a product.

Well, all that seems to have changed in the past few weeks. I’ve waited somewhat patiently and occasionally checked back on their progress, but following a recent mention on Charlie Rose and some serious digging around on the broader internet, I’ve found some worthwhile tools that have sprouted out of their efforts. Most importantly, Spritz itself now has a bookmarklet that seems to deliver on their promise of improving my reading speed for online content. With the bookmarklet installed, one can go to almost any web article, click on the bookmarklet, and then sit back and just read at almost any desired speed. Their technology uses a modified version of a 1970s technique known as Rapid Serial Visual Presentation (RSVP) to speed up one’s reading, but does so in a way that is easier to implement with web and mobile technologies. Essentially they present words serially in the same position on your screen, with an optimized “center mass,” so that one’s eyes stay still while reading instead of making the typical saccadic eye movements that occur in ordinary reading – and slow the process down.


A photo of how Spritz works for speed reading on the web.
Spritz for speed reading the web.


As a biomedical engineer, I feel compelled to note an interesting physiologic phenomenon: if one sits in a rotatable chair and spins with one’s eyes closed and one’s fingers lightly placed on one’s eyelids, one can feel the eyes’ saccades even though one isn’t actually seeing anything.

Spritz also allows one to create an account and log in so that the service will remember your previously set reading speed. Their website does such a great job of explaining their concept, I’ll leave it to the reader to take a peek; but you may want to visit their bookmarklet page directly, as their own website didn’t seem to have a link to it initially.

As a sample of how Spritz works on the web, OysterBooks is hosting a Spritz-able version of Stephen R. Covey’s book 7 Habits of Highly Effective People.

Naturally, Spritz’s solution is not a catch-all for everything I’d like to read, but it covers an interesting subcategory that it makes easier and more useful. Though trying to speed read journal articles, textbooks, and other technical literature isn’t the best idea in the world, Spritz will help me plow through more fiction and more leisurely types of magazine and online articles that are of general interest. I generally enjoy and appreciate these types of journalism and writing, but just can’t always justify taking the time away from more academic pursuits to delve into them. Some will still require further thought afterward to get their full value, but at least I can cover the additional ground without spending all the additional time to do so. I find I can easily double or triple my usual reading speed without any real loss of comprehension.

In the last week or so since installing my several new speed-reading bookmarklets, I’ve begun using them almost religiously in my daily reading regimen.

I’ll also note in passing that some studies suggest that this type of reading modality has helped those who face difficulties with dyslexia.

A picture of the Spritz RSVP reading interface featuring the word Boffosocko.
How to read Boffosocko faster than you thought you could…


Speed Reading Competition

Naturally, since this is a great idea, there’s a bit of competition in the speed reading arena.

There are a small handful of web and app technologies which are built upon the RSVP concept:

  • Clayton Morris has developed an iOS application called ReadQuick, which is based on the same concept as Spritz, but is available only as an app and not on the web.
  • Rich Jones has developed a program called OpenSpritz. His version is open source and has an Android port for mobile.
  • There’s also another similar bookmarklet called Squirt which also incorporates some nice UI tweaks and some of the technology from Readability as well.
  • For those wishing to Spritz .pdf or .txt documents, one can upload them using Readsy which uses Spritz’s open API to allow these types of functionalities.
  • There are also a variety of similar free apps in the Google Play store which follow the RSVP technology model.
  • Those on the Amazon (or Kindle Fire/Android Platform) will appreciate the Balto App which utilizes RSVP and is not only one of the more fully functional apps in the space, but it also has the ability to unpack Kindle formatted books (i.e. deal with Amazon’s DRM) to allow speed reading Kindle books. While there is a free version, the $1.99 paid version is more than well worth the price for the additional perks.

On and off for the past couple of years, I’ve also used a web service and app called Readfa.st, which is a somewhat useful, but generally painful, way to improve one’s speed reading. It also has a handy bookmarklet, but it just wasn’t as useful as I had always hoped it might be. It’s interesting, but not as interesting or as useful as Spritz (and other RSVP technology) in my opinion, since it feels more fatiguing to read in this manner.


Bookmarklet Junkie Addendum

In addition to the handful of speed reading bookmarklets I’ve mentioned above, I’ve got over 50 bookmarklets in a folder on my web browser toolbar. I easily use about a dozen on a daily basis. Bookmarklets make my internet world much prettier, nicer, and cleaner with a range of simple clever code.  Many are for URL shortening, sharing content to a variety of social networks quickly, but a large number of the ones I use are for reading-related tasks which I feel compelled to include here: web clippers for Evernote and OneNote, Evernote’s Clearly, Readability, Instapaper, Pocket, Mendeley (for reading journal articles), and GoodReads.
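For readers who haven’t made one, a bookmarklet is nothing more than a snippet of JavaScript stored in a bookmark’s URL, prefixed with `javascript:`. The example below is a hypothetical illustration (the reader URL is a placeholder, not any real service’s endpoint) of the pattern most of the bookmarklets above use: grab the current page’s address and hand it off to another tool:

```javascript
// Build a minimal bookmarklet string. Saving this string as a bookmark's
// URL and clicking it would send the current page to a (placeholder)
// reading service. "example.com/read" is an assumed, made-up endpoint.
const readerBase = "https://example.com/read?u=";
const bookmarklet =
  "javascript:(function(){" +
  "window.location.href='" + readerBase + "'" +
  "+encodeURIComponent(window.location.href);" +
  "})();";
console.log(bookmarklet);
```

Real bookmarklets like Spritz’s or Squirt’s typically inject a script tag rather than navigating away, but the `javascript:` wrapper and the self-invoking function are the common skeleton.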

Do you have a favorite speed reading application (or bookmarklet)?

The Single Biggest Problem in Communication

“The single biggest problem in communication is the illusion that it has taken place.”

Apocryphally attributed to George Bernard Shaw, but more likely William H. Whyte in Fortune, “Is Anybody Listening?”, start page 77, quote page 174 (Time, Inc., New York, September 1950)

George Bernard Shaw shading his eyes with his hands

CECAM Workshop: “Entropy in Biomolecular Systems”

On Friday, I had an excellent and stimulating conversation with Arieh Ben-Naim about his recent writing and work, and he mentioned in passing that he had been invited to a conference relating to entropy and biology in Vienna. A quick websearch turned it up, and since I hadn’t heard about it myself yet, I thought I’d pass it along to others who are regular readers and interested in the area.

The workshop on “Entropy in Biomolecular Systems” is being hosted by the Centre Européen de Calcul Atomique et Moléculaire (CECAM)

Logo for the Centre Européen de Calcul Atomique et Moléculaire (CECAM)

Location: DACAM, Max F. Perutz Laboratories, University of Vienna, Dr. Bohrgasse 9, A-1030, Vienna, Austria
Dates: May 14, 2014 to May 17, 2014

The workshop is being organized by:

  • Richard Henchman (University of Manchester, United Kingdom)
  • Bojan Zagrovic (University of Vienna, Austria)
  • Michel Cuendet (Swiss Institute of Bioinformatics, Lausanne, Switzerland and Weill Cornell Medical College, New York, USA)
  • Chris Oostenbrink (University of Natural Resources and Life Sciences, Austria)

It’s being supported by CECAM, the European Research Council, and the Royal Society of Chemistry’s Statistical Mechanics and Thermodynamics Group.

I’ll note that the registration deadline is on April 21 with a payment deadline of April 30, so check in quickly if you haven’t already.

The summary from the workshop website states:

This workshop brings together the world’s experts to address the challenges of determining the entropy of biomolecular systems, either by experiment or computer simulation. Entropy is one of the main driving forces for any biological process such as binding, folding, partitioning and reacting. Our deficient understanding of entropy, however, means that such important processes remain controversial and only partially understood. Contributions of water, ions, cofactors, and biomolecular flexibility are actively examined but yet to be resolved. The state-of-the-art of each entropy method will be presented and explained, highlighting its capabilities and deficiencies. This will be followed by intensive discussion on the main areas that need improving, leading to suitable actions and collaborations to address the main biological and industrial questions.

Further details on the workshop can be found on the CECAM website.


As always, details on other upcoming workshops and conferences relating to information theory and biology can be found on our ITBio Conferences/Workshops page.



2014 Andrew J. Viterbi Distinguished Lecture in Communication: Abbas El Gamal

The USC Viterbi School has recently announced Professor Abbas El Gamal of Stanford University will present the 2014 Andrew J. Viterbi Distinguished Lecture in Communication. The 12th annual lecture entitled “Common Information” will be given on Thursday, April 17, 2014 at 4:00 PM at the University of Southern California in the Seeley Wintersmith Mudd Memorial Hall of Philosophy (MHP) room 101. A reception will precede the lecture at 3:00 PM.

USC’s Viterbi School of Engineering has provided the following abstract for the talk:

Entropy, introduced by Shannon in 1948, arises naturally as a universal measure of information in single-source compression, randomness extraction, and random number generation. In distributed systems, such as communication networks, multiprocessors, distributed storage, and sensor networks, there are multiple correlated sources to be processed jointly. The information that is common between these sources can be utilized, for example, to reduce the amount of communication needed for compression, computing, simulation, and secret key generation. My talk will focus on the question of how such common information should be measured. While our understanding of common information is far from complete, I will aim to demonstrate the richness of this question through the lens of network information theory. I will show that, depending on the distributed information processing task considered, there can be several well-motivated measures of common information. Along the way, I will present some of the key models, ideas, and tools of information theory, which invite further investigation into this intriguing subject. Some parts of this talk are based on recent joint work with Gowtham Kumar and Cheuk Ting Li and on discussions with Young-Han Kim.
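The abstract notes that there are several well-motivated measures of common information. One of the simplest and most familiar quantities along these lines is mutual information; the toy computation below is my own illustration of measuring the information shared between two correlated sources, not material from the lecture:

```python
import math

def mutual_information(joint):
    """Compute I(X;Y) in bits from a joint pmf given as a 2-D list:
    I(X;Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    px = [sum(row) for row in joint]                 # marginal of X
    py = [sum(col) for col in zip(*joint)]           # marginal of Y
    total = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                total += p * math.log2(p / (px[i] * py[j]))
    return total

# Two perfectly correlated fair bits share exactly one bit of information.
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))  # 1.0
```

Mutual information is only one candidate; the Wyner and Gács-Körner notions of common information, which the network-information-theory literature develops, can differ from it sharply, which is part of what makes the lecture’s question interesting.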

Headshot of Abbas El Gamal

Biography: Abbas El Gamal is the Hitachi America Professor in the School of Engineering and Chair of the Department of Electrical Engineering at Stanford University. He received his Ph.D. degree in electrical engineering from Stanford University in 1978. He was an Assistant Professor in the Department of Electrical Engineering at the University of Southern California (USC) from 1978 to 1980. His research interests and contributions have spanned the areas of information theory, wireless networks, CMOS image sensors and systems, and integrated circuit design and design automation. He has authored or coauthored over 200 papers and 30 patents in these areas. He is coauthor of the book Network Information Theory (Cambridge University Press, 2011). He has won several honors and awards, including the 2012 Claude E. Shannon Award and the 2004 INFOCOM Best Paper Award. He is a member of the National Academy of Engineering and a Fellow of the IEEE. He has been active in several IEEE societies, including serving on the Board of Governors of the Information Theory Society, where he is currently its President. He cofounded and/or served in various leadership roles at several semiconductor, EDA, and biotechnology companies.

Audiences: Everyone Is Invited

“Living in Cybernetics” – 50th Anniversary Conference of the American Society for Cybernetics

The American Society for Cybernetics is currently planning their 50th Anniversary Conference. Entitled “Living in Cybernetics”, it will be held between August 3 and August 9, 2014, at George Washington University in Washington D.C. For more registration and additional details please visit the conference website.

 

Living in Cybernetics

Rap Genius, a Textual Annotation Browser for Education, Digital Humanities, Science, and Publishing

Since the beginning of January, I’ve come back to regularly browsing and using the website Rap Genius. I’m sure that some of the educational uses, including poetry and annotations of classics, existed the last time I visited, but I was very interested to see some of the scientific journal article uses, which I hadn’t seen before. Even a quick browse opened up a wealth of ideas for using the platform within the digital humanities as well as for a variety of educational purposes.

Rap Genius logo

Overview of Rap Genius

Briefly, the Rap Genius website was originally set up as an innovative lyrics service that allows users not only to upload song lyrics, but to mark them up with annotations explaining the meanings of words and phrases and providing information about the pop-culture references within the lyrics themselves. (It’s not terribly different from Google’s now-defunct Sidewiki or the impressive Highbrow textual annotation browser, but it has some subtle differences as well as improvements.)

Users can use not only text, but photos, video, and even audio to supplement the listings. Built-in functionality includes the ability to link works to the popular audio services SoundCloud and Spotify as well as YouTube. Alternately, one might think of it as VH1’s “Pop-Up Video”, but for text on the Internet. Ultimately the site expanded to include rock, poetry, and news. The rock section is fairly straightforward, following the format of the rap section, while the poetry section includes not only works of poetry (from The Rime of the Ancient Mariner to the King James version of The Bible), but also plays (the works of William Shakespeare) and complete novels (like F. Scott Fitzgerald’s The Great Gatsby). News includes articles as well as cultural touchstones like the 2013 White House Correspondents’ Dinner speech and the recent State of the Union. All of the channels within the Rap Genius platform share the same types of functionality, applied to slightly different categories to help differentiate the content and make things easier to find. Eventually there may be specific “Education Genius” (or other) landing pages to split out the content, depending on user needs.

Even at first blush, I can see this type of functionality being used in a variety of educational settings: open access journals, classroom use, close readings, MOOCs, publishing in general, and even maintaining simple-to-use websites for classes. The best part is that the ecosystem is actively growing and expanding, with the recent release of an iPhone app and the announcement of a major deal with Universal to license music lyrics.

General Education Use

To begin with, Rap Genius’ YouTube channel includes an excellent short video on how Poetry Genius might be used in a classroom setting to facilitate close readings. In addition to the ability to make annotations, the site can be used to maintain a class-specific website (no need for other blogging platforms like WordPress or Blogger for things like this anymore), along with nice additions like a built-in class roster. Once material is posted, students and teachers alike have a broad set of tools to add content, make annotations, ask questions, and provide answers in almost real time.

Screen capture from Poetry Genius featuring The Great Gatsby

MOOC Use Cases

Given the rapid growth of the MOOC revolution (massive open online courses) over the past several years, one of the remaining difficulties in administering such a class hinges not only on easily providing audiovisual content to students, but on giving them a means of easily interacting with it and with each other in the learning process. Poetry Genius (aka Education Genius) offers a very interesting approach to solving both of these problems, and, in fact, I can easily see the current version of the platform replacing competing platforms like Coursera, EdX, Udacity, and others whole cloth.

Currently most MOOCs provide some type of simple topic-based threaded forum in which students post comments, questions, and answers. In many MOOCs this format becomes ungainly because of the size of the class (10,000+ students) and the quality of the content being placed into it. Many students simply eschew the fora because the time commitment per amount of knowledge gained is not worth their while. Within the Poetry Genius platform, students can comment directly on the material, ask questions, or even propose improvements, and the administrators (the professor or teaching assistants in this case) can accept them, reject them, or send feedback requests asking students to amend their work before it is added to the larger annotated work. Fellow classmates can also vote individual comments up or down.

As I was noticing the interesting education-related functionality of the Rap Genius platform, I ran across what is presumably the first MOOC attempting to integrate the platform into its pedagogical structure. Dr. Laura Nasrallah’s HarvardX course “Early Christianity: The Letters of Paul,” which started in January, asks students to create Poetry Genius accounts to read and comment on the biblical texts that are part of the course. The difficulty in using Poetry Genius for this course is the thousands of “me-too” posters who are simply making throw-away commentary rather than the intended close-reading commentary appropriate to a more academic environment. (This type of posting is also seen in many of the forum-based online courses.) Not enough students are contributing substantial material, and when they are, it needs to be edited and curated into the main post better and more quickly to provide greater value to students as they’re reading along. Thus when 20,000 students jump into the fray, there’s too much initial chaos, and the value extracted upon initial use is fairly limited – particularly if one is browsing through dozens of useless comments. It’s not until after the fact – once comments have been accepted and curated – that the real value will emerge. The course staff will have to spend more time doing this curation in real time to provide greater value to the students in the class, particularly given the high number of people without intense scholarly training jumping into the system and filling it with generally useless commentary. In internet parlance, the Poetry Genius site is experiencing the “Robert Scoble Effect,” which changes the experience on it. (By way of explanation, Robert Scoble is a technology journalist/pundit/early-adopter with a massive follower base. His power-user approach and large following can drastically change his experience with web-based technology compared to that of the common everyday user. It could also often bring down new services, as was common in the early days of the social media movement.)

Typically, with the average poem or rap song, the commentary grows slowly and organically and is edited along the way. In a MOOC setting with potentially hundreds of thousands of students, the commentary is a massive fire-hose, seemingly useless without immediate real-time editing. Poetry Genius may need a slightly different model for larger MOOC-style courses versus the smaller classroom settings seen in high school or college (10–100 students). In the particular case of “The Letters of Paul,” if the course staff had gone into the platform first and seeded some of the readings with their own sample commentary, the students would have had a clearer model of what is expected. I understand Dr. Nasrallah and her teaching assistants are in the system annotating as well, but it should be more obvious which annotations are theirs, to help guide the “discussion” and act as a model. Certainly the materials generated on Poetry Genius will be much more useful for students who take the course in future iterations. Naturally, Poetry Genius exists primarily for annotation, and I’m sure the creators will be tweaking classroom-specific use as the platform grows and user needs change.

As a contrast to the HarvardX class, and for an additional example, one can also take a peek at Cathy Davidson’s Rap Genius presence for her Coursera class “The History and Future (Mostly) of Higher Education.”

Open Access Journal Use

In my mind, this type of platform could easily and usefully be applied to publishing open access journal articles. In fact, one could use the platform to self-publish journal articles and leave them open to ongoing peer review. Sadly, at present there seems to be only a small handful of examples on the site, including a PLOS ONE article, which gives a reasonable sense of some of the functionality that is possible. Any author could annotate and footnote their own article as well as include a wealth of photos, graphs, and tables, giving a much more multimedia view into their own work. Following this, any academic with an account could annotate the text with questions, problems, and suggestions, all of which could be voted up or down as well as remedied within the text itself. Other articles could also directly cross-reference specific sections of previously posted articles.

Individual labs or groups with “journal clubs” could certainly join in the larger public commentary and annotation on a particular article, but higher level administrative accounts within the system can also create a proverbial clean slate on an article and allow members to privately post up their thoughts and commentaries which are then closed to the group and not visible to the broader public. (This type of functionality can be useful for Mrs. Smith’s 10th grade class annotating The Great Gatsby so that they’re not too heavily influenced by the hundreds or possibly thousands of prior comments within a given text as they do their own personal close readings.) One may note that some of this type of functionality can already be seen in competitive services like Mendeley, but the Rap Genius platform seems to take the presentation and annotation functionalities to the next level. For those with an interest in these types of uses, I recommend Mendeley’s own group: Reinventing the Scientific Paper.

A Rap Genius representative indicated they were pursuing opportunities with JSTOR that might expand on these types of uses.

Publishing

Like many social media sites, including platforms like WordPress, Tumblr, and Twitter, Rap Genius gives its users the ability to self-publish almost any type of content. I can see some excellent cross-promotional opportunities between large MOOC-type classes and the site. For example, professors and teachers who have written their own custom textbooks for MOOCs (e.g., Keith Devlin’s Introduction to Mathematical Thinking course at Stanford via Coursera) could post the entire text on the Poetry Genius site and use it not only to correct mistakes and typos and make improvements over time, but also to discover what isn’t clear to students, who can make comments, ask questions, etc. There’s also the possibility that advanced students can actively help clarify portions themselves when there are 10,000+ students and just one or two professors along with one or two teaching assistants. Within or without the MOOC movement, this type of annotation setup may work well to allow authors to tentatively publish, edit, and modify their textbooks, novels, articles, journal articles, monographs, or even Ph.D. theses. I’m particularly reminded of Kathleen Fitzpatrick’s open writing and editing of her book Planned Obsolescence via Media Commons. Academics could certainly look at the Rap Genius platform as a simpler, more user-friendly version of this type of process.

Other Uses

I’m personally interested in being able to annotate science and math related articles, and I’ve passed along some tips to the Rap Genius team about including functionality like MathJax to support TeX/LaTeX typesetting of mathematics on the web in the future.

Naturally, there are myriad other functionalities that could be built into this type of platform – I’m personally waiting for a way to annotate episodes of “The Simpsons” so I can explain all of the film references and in-jokes to friends who laugh at the jokes but never seem to know why – but I can’t enumerate them all here myself.

Interested users can easily sign up for a general Rap Genius account and dig right into the interface. Those interested in education-specific functionality can request an “Educator Account” within the Rap Genius system to play around with the additional features available to educators. Every page in the system has an “Education” link at the top for further information and details. There’s also an Educator’s Forum [requires free login] for discussions relating specifically to educational use of the site.

Are there particular (off-label) applications you think you might be able to use the Rap Genius platform for? Please add your comments and thoughts below.

A Modest Proposal for Engineering Better and Faster Fast Food Consumption

Fast Food in America

America is well known for its fast food culture. So well known, in fact, that it may only be second to its best-in-class health care, phenomenal education system, and overall can-do attitude. Rarely does a day go by without a few disparaging words from the mainstream media about what we choose to put into our mouths and whether those items, in some cases, become permanently lodged there. A Google search begun with the first letters “ob…” immediately has Google guessing what we want and prompting a potential search not just for “obesity” but for the very specific phrase “obesity in America”§, and the resultant search displays just under 73 million results in about half a second.

Our obsession with fast food is legendary. Books are written about the subject, movies are made, and we support a multi-billion-dollar fast food industry. But how much time do we individually spend really thinking about what we’re doing? The answer hinges on one of our favorite pastimes, the very one in which the root of our obesity problem sprouts: “laziness.” (For those incapable of doing the work of thinking for themselves who just want the quick answer to the previous question handed to them, it’s: “none.”)

“Americanizing” your Fast Food Experience with Some Simple Engineering

Given that we love our fast food so much that we can’t even be bothered with thinking about it for a few minutes (otherwise how does a book entitled Wheat Belly become a best seller and major fad?), I’m always surprised that the simple engineering concept which follows isn’t more widely known. If it were, it would be right at home in our gourmand, “have-it-your-way, right-away” culture.

The simple idea follows:

In some fast food restaurants (think Burger King and In-N-Out), instead of (or in addition to) the ubiquitous ketchup packet, you can fill your own container with the condiment of your choice. But what container do they provide? Obviously, in keeping with the assembly-line beauty and grace of our ultra-modern food manufacturing empire and our disposable home furnishings industry, it’s something simple, something very cheap, and something immediately disposable: the small paper cup! (Even legal departments could get behind this one – as long as the industry wasn’t putting any hot beverages into it, and, in part, because the patent protection had expired.)

Standard empty ketchup condiment container on left juxtaposed with same container full of ketchup on the right hand side.

But it’s no ordinary paper cup! It’s an honest-to-goodness feat of American ingenuity and engineering design! (At least from a time when America had those things – you remember… way back before we gave them up for improved qualities like laziness and obesity. And everyone knows the American engineering motto: “Quality is Job 1!”)

Standard ketchup condiment container "spread out" on left juxtaposed with modified container full of ketchup on the right hand side.
A feat of American ingenuity! (Who cares if these are called souffle cups?)

This high quality paper cup has pleats! And with a small bit of pulling around the edges of the cup, it opens right up – or “blooms” if you will.

In this process, the top edge of the cup comes down just a tad, but in exchange, the sides expand out toward the horizon in glorious near-infinite beauty. This simple effect allows one to put a significantly larger quantity of ketchup into it – particularly because ketchup has such a high viscosity! (While I’m thinking about it, has anyone considered liquefying ketchup so we could just drink it out of our big gulp cups? Maybe a French fry shake with ketchup blended in to make things easier all around?)

The Benefits of our Engineering Trick

“But it takes so much time and energy to expand out the sides of my cheap paper cup! Why should I bother?”

I know many of you are asking yourself this question because in a rapidly evolving and improving society it’s often the dichotomy of American life to maintain the status quo.  This simple expansion procedure allows you the following clear benefits:

  • You can put a lot more ketchup onto your plate and therefore ultimately into your gullet. Besides, everyone in America knows “Bigger is Better!” right? Why fill up two or three of these small cups, when one big expanded one will do? Or better yet, three big ones! (Let’s not forget our gourmand cultural heritage.)
  • It makes it easier to carry a lot more ketchup in fewer trips from the condiment bar to your table. American pride in concepts like capitalism and increased efficiency at all costs dictates that we take fewer trips. The reduced amount of exercise is also a positive side-benefit here.
  • It makes ketchup easier to share. (I know this sharing concept is antithetical to the current American ethos, but maybe someone from one of those poor countries outside of America might be reading this? Maybe it’s a strong enough idea to quell the strife in Ukraine right now?) No more approaching the cup at excessively steep angles to get your fries into it. Now you can approach from a lower angle with your fat fist-full-of-fries and still hit your target.
  • Not only can you now dunk your fries, but you can actually dunk your majestic hamburger! Why waste time trying to open that ketchup packet and squeeze some on while balancing your heavy burger in the other hand? Just smash it into the ketchup and then smash it into your face! “Yippee-ki-yay, Mother French Fry!”
  • Those suffering from diabetic retinopathy, glaucoma, or cataracts no longer have to worry about getting their French fry into such a tiny paper cup; the size of the target is now bigger by almost an order of magnitude.
  • Use of these paper cups helps support the American paper goods industry, which churns out highly recyclable products that also have the benefit of being Green and therefore unquestioningly good for the environment. No one knows what those alternate ketchup packets are manufactured from or whether they’re recyclable. Some fabricated laboratory studies indicate some of those packets may contain heavy metals, which we all know are mined/sourced primarily in China.
  • And perhaps best of all, in the true spirit of American largesse – there’s a huge return for very little effort! Everyone is looking for a get-rich-quick scheme that doesn’t involve actual work, right? This is the closest you’re likely to come to one, and my friends who know a thing or two about the second law of thermodynamics agree. In fact, it might even qualify for the ethereal and long-fabled “free lunch” because, hey, most restaurants aren’t going to charge you for condiments, are they?

Ketchup and the Economy

I have a deep, abiding suspicion that far too many Americans haven’t been taking advantage of the pleats in their condiment cups, and that, in fact, the marginal utility lost in manufacturing the extra unused paper when the pleats go unexpanded is very likely the root cause of the world economic crisis which began in 2008. The plummeting American efficiency numbers just weighed too heavily on our economy, but that’s a longer and more analytical story than I have space or phony facts to back up here. (If you’re a talking-head political pundit on a major cable news network, call my publicist and let’s talk.) Needless to say, if we can work this simple trick into the second grade core curriculum, I think our long-term efficiency numbers will perk up, and the savings realized could mean saving the beleaguered Social Security program until at least 2079.

Standard ketchup condiment container on left juxtaposed with modified container full of ketchup on the right hand side.
Super Size Me!

Footnotes:

§ Obamacare was a close second.

† I was too busy lounging on my couch watching Diners, Drive-ins, and Dives on TV and eating a bag of Doritos and Twizzlers to come up with other examples like Supersize Me.

♦ Former Federal Reserve Chairman Alan Greenspan admits almost as much in his book The Map and the Territory: Risk, Human Nature, and the Future of Forecasting (Penguin Press, 2013) where he indicates real estate as a leading cause of the downturn. Each of these condiment cups has a square inch of space hiding in its pleats and when multiplied over tens of thousands of cups per fast food location multiplied by thousands of fast food locations in any given year it becomes a lot of real estate rapidly, and the effect can become crippling.

‡ This also reminds me of a treatise I was reading last week called A Modest Proposal, written by a political hack/wannabe writer named Swift. It wasn’t the sharpest thing I’ve read recently, but with a few tweaks, I think his idea could make a huge dent in third-world hunger and poverty and speed us along toward the goal of realizing Soylent Green in the marketplace.

To Understand God’s Thought…

Florence Nightingale, OM, RRC (1820-1910), English social reformer and statistician, founder of modern nursing, renaissance woman
in Florence Nightingale’s Wisdom, New York Times, 3/4/14

 

Florence Nightingale developed the polar pie chart to depict mortality causes in the Crimean War.

 

Workshop on Information Theoretic Incentives for Artificial Life

For those interested in the topics of information theory in biology and artificial life, Christoph Salge, Georg Martius, Keyan Ghazi-Zahedi, and Daniel Polani have announced a Satellite Workshop on Information Theoretic Incentives for Artificial Life at the 14th International Conference on the Synthesis and Simulation of Living Systems (ALife 2014), to be held at the Javits Center, New York, on July 30 or 31.

ALife2014 Banner

Their synopsis states:

Artificial Life aims to understand the basic and generic principles of life, and demonstrate this understanding by producing life-like systems based on those principles. In recent years, with the advent of the information age, and the widespread acceptance of information technology, our view of life has changed. Ideas such as “life is information processing” or “information holds the key to understanding life” have become more common. But what can information, or more formally Information Theory, offer to Artificial Life?

One relevant area is the motivation of behaviour for artificial agents, both virtual and real. Instead of learning to perform a specific task, informational measures can be used to define concepts such as boredom, empowerment or the ability to predict one’s own future. Intrinsic motivations derived from these concepts allow us to generate behaviour, ideally from an embodied and enactive perspective, which are based on basic but generic principles. The key questions here are: “What are the important intrinsic motivations a living agent has, and what behaviour can be produced by them?”

Related to an agent’s behaviour is also the question on how and where the necessary computation to realise this behaviour is performed. Can information be used to quantify the morphological computation of an embodied agent and to what degree are the computational limitations of an agent influencing its behaviour?

Another area of interest is the guidance of artificial evolution or adaptation. Assuming it is true that an agent wants to optimise its information processing, possibly obtain as much relevant information as possible for the cheapest computational cost, then what behaviour would naturally follow from that? Can the development of social interaction or collective phenomena be motivated by an informational gradient? Furthermore, evolution itself can be seen as a process in which an agent population obtains information from the environment, which begs the question of how this can be quantified, and how systems would adapt to maximise this information?

The common theme in those different scenarios is the identification and quantification of driving forces behind evolution, learning, behaviour and other crucial processes of life, in the hope that the implementation or optimisation of these measurements will allow us to construct life-like systems.

Details on submissions, acceptances, potential talks, and dates can be found via Nihat Ay’s Research Group on Information Theory of Cognitive Systems. For more information on how to register, please visit the ALife 2014 homepage. If there are any questions, or if you just want to indicate interest in submitting or attending, feel free to email the organizers at itialife@gmail.com.

According to their release, the open access journal Entropy will sponsor the workshop with an open call for a special issue on the workshop’s topic. More details will be announced to emails received via itialife@gmail.com and over the alife and connectionists mailing lists.

Information Theory is Something Like the Logarithm of Probability Theory

Dr. Daniel Polani, reader in Artificial Life, University of Hertfordshire
in “Research Questions”

 

Not only a great quote, but an interesting way to view the subjects.
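One way to unpack the quote (a sketch of my own, not Dr. Polani's): probabilities of independent events multiply, while their information contents, measured as negative log-probabilities, add, exactly the way logarithms turn products into sums:

```python
import math

def information(p):
    """Self-information (surprisal) in bits of an event with probability p."""
    return -math.log2(p)

p_a, p_b = 0.5, 0.25   # probabilities of two independent events
p_joint = p_a * p_b    # probabilities multiply...

# ...while information contents add, mirroring log(xy) = log(x) + log(y):
assert math.isclose(information(p_joint), information(p_a) + information(p_b))

print(information(p_a), information(p_b), information(p_joint))  # 1.0 2.0 3.0
```

Entropy itself is then just the expected value of this logarithmic quantity over a distribution, which is the sense in which information theory "takes the logarithm" of probability theory.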

Information Theory and Paleoanthropology

A few weeks ago I corresponded a bit with paleoanthropologist John Hawks. I wanted to take a moment to highlight the fact that he maintains an excellent blog, primarily concerning his areas of research: anthropology, genetics, and evolution. Even more specifically, he is one of the few people in these areas with at least a passing interest in information theory as it relates to his work. I recommend everyone take a look at his information theory specific posts.

silhouette of John Hawks from his blog

I’ve previously written a brief review of John Hawks’ “Major Transitions in Evolution” course (taught in collaboration with Anthony Martin) from The Teaching Company as part of their Great Courses series of lectures. Given my interest in the MOOC revolution in higher education, I’ll also mention that Dr. Hawks has recently begun a free Coursera class entitled “Human Evolution: Past and Future”. I’m sure this new course focuses more closely on human evolution than the prior course, which dedicated only a short segment to that period. Given Hawks’ excellent prior teaching work, I’m sure it will be of general interest to readers interested in information theory as it relates to evolution, biology, and big history.

I’d love to hear from others in the area of anthropology who are interested in information theoretical applications.

 

Why a Ph.D. in Physics is Worse Than Drugs

Jonathan I. Katz, Professor of Physics, Washington University, St. Louis, Mo.
in “Don’t Become a Scientist!”

 

In the essay, Dr. Katz provides a bevy of solid reasons why one shouldn’t become a researcher.  I highly recommend everyone read it and then carefully consider how we can turn these problems around.

Editor’s Note: The original article has since been moved to another server.

How might we end the war against science in America?