Dr. Mike Miller, who had previously announced a two-quarter sequence of classes on Lie Groups at UCLA, has just opened up registration for the second course in the series. His courses are always clear, entertaining, and invigorating, and I highly recommend them to anyone who is interested in math, science, or engineering.
Philosophy is written in this grand book, the universe, which stands continually open to our gaze. But the book cannot be understood unless one first learns to comprehend the language and read the letters in which it is composed. It is written in the language of mathematics, and its characters are triangles, circles and other geometric figures without which it is humanly impossible to understand a single word of it; without these, one wanders about in a dark labyrinth.
Galileo Galilei (1564–1642) in Il saggiatore (The Assayer)
Prior to the first part of the course, I’d written some thoughts about the timbre and tempo of his lecture style and philosophy, and I commend those interested to take a peek. I also mentioned some additional resources for the course there. For those who missed the first portion, I’m happy to help fill you in and share some of my notes if necessary. The recommended minimum prerequisites for this class are linear algebra and some calculus.
Introduction to Lie Groups and Lie Algebras (Part 2)
Math X 450.7 / 3.00 units / Reg. # 251580W
Professor: Michael Miller, Ph.D.
Start Date: January 13, 2015
Location: UCLA, 5137 Math Sciences Building
Tuesday, 7-10pm
January 13 – March 24
11 meetings total. Class will not meet on one Tuesday, to be announced.
A Lie group is a differentiable manifold that is also a group for which the product and inverse maps are differentiable. A Lie algebra is a vector space endowed with a binary operation that is bilinear, alternating, and satisfies the so-called Jacobi identity. This course is the second in a 2-quarter sequence that offers an introductory survey of Lie groups, their associated Lie algebras, and their representations. Its focus is split between continuing last quarter’s study of matrix Lie groups and their representations and reconciling this theory with that for the more general manifold setting. Topics to be discussed include the Weyl group, complete reducibility, semisimple Lie algebras, root systems, and Cartan subalgebras. This is an advanced course, requiring a solid understanding of linear algebra, basic analysis, and, ideally, the material from the previous quarter. Internet access required to retrieve course materials.
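For readers who would like the properties named in the description spelled out symbolically: the bracket operation $[\cdot,\cdot]$ on a Lie algebra satisfies, for all elements $x, y, z$ and scalars $a, b$,

$$[ax + by, z] = a[x, z] + b[y, z] \quad \text{(bilinearity, with the same rule in the second slot)},$$

$$[x, x] = 0 \quad \text{(alternating, which implies } [x, y] = -[y, x]\text{)},$$

$$[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 \quad \text{(the Jacobi identity)}.$$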
Editor’s note: On 12/12/17 Storify announced they would be shutting down. As a result, I’m replacing the embedded version of the original data served by Storify with an HTML copy, which can be found below:
I’m giving a short 30-minute talk at a workshop on Biological and Bio-Inspired Information Theory at the Banff International Research Station. I’ll say more about the workshop later, but here’s my talk: * Biodiversity, entropy and thermodynamics. Most of the people at this workshop study neurobiology and cell signalling, not evolutionary game theory or…
I’m having a great time at a workshop on Biological and Bio-Inspired Information Theory in Banff, Canada. You can see videos of the talks online. There have been lots of good talks so far, but this one really blew my mind: * Naftali Tishby, Sensing and acting under information constraints—a principled approach to biology and…
John Harte is an ecologist who uses maximum entropy methods to predict the distribution, abundance and energy usage of species. Marc Harper uses information theory in bioinformatics and evolutionary game theory. Harper, Harte and I are organizing a workshop on entropy and information in biological systems, and I’m really excited about it!
John Harte, Marc Harper and I are running a workshop! Now you can apply here to attend: * Information and entropy in biological systems, National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee, Wednesday-Friday, 8-10 April 2015. Click the link, read the stuff and scroll down to “CLICK HERE” to apply.
There will be a 5-day workshop on Biological and Bio-Inspired Information Theory at BIRS from Sunday the 26th to Friday the 31st of October, 2014. It’s being organized by * Toby Berger (University of Virginia) * Andrew Eckford (York University) * Peter Thomas (Case Western Reserve University) BIRS is the Banff International Research Station,…
How does it feel to (co-)write a book and hold the finished product in your hands? About like this: Many, many thanks to my excellent co-authors, Tadashi Nakano and Tokuko Haraguchi, for their hard work; thanks to Cambridge for accepting this project and managing it well; and thanks to Satoshi Hiyama for writing a nice blurb.
You may have seen our PLOS ONE paper about tabletop molecular communication, which received loads of media coverage. One of the goals of this paper was to show that anyone can do experiments in molecular communication, without any wet labs or expensive apparatus.
John Selden (1584-1654), English jurist and scholar
in Illustrations (1612), a commentary on Poly-Olbion, a poem by Michael Drayton
in the margin next to ‘A table to the chiefest passages, in the Illustrations, which, worthiest of observation, are not directed unto by the course of the volume.’
In recent years, ideas such as “life is information processing” or “information holds the key to understanding life” have become more common. However, how can information, or more formally Information Theory, increase our understanding of life, or life-like systems?
Information Theory not only has a profound mathematical basis, but also typically provides an intuitive understanding of processes such as learning, behavior, and evolution in terms of information processing.
In this special issue, we are interested in both:
the information-theoretic formalization and quantification of different aspects of life, such as driving forces of learning and behavior generation, information flows between neurons, swarm members and social agents, and information theoretic aspects of evolution and adaptation, and
the simulation and creation of life-like systems with previously identified principles and incentives.
Topics with relation to artificial and natural systems:
information theoretic intrinsic motivations
information theoretic quantification of behavior
information theoretic guidance of artificial evolution
information theoretic guidance of self-organization
information theoretic driving forces behind learning
information theoretic driving forces behind behavior
information theory in swarms
information theory in social behavior
information theory in evolution
information theory in the brain
information theory in system-environment distinction
information theory in the perception action loop
information theoretic definitions of life
Submission
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. Papers will be published continuously (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are refereed through a peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed Open Access monthly journal published by MDPI.
Deadline for manuscript submissions: 28 February 2015
Special Issue Editors
Guest Editor Dr. Christoph Salge
Adaptive Systems Research Group, University of Hertfordshire, College Lane, AL10 9AB Hatfield, UK
Website: http://homepages.stca.herts.ac.uk/~cs08abi
E-Mail: c.salge@herts.ac.uk
Phone: +44 1707 28 4490
Interests: Intrinsic Motivation (Empowerment); Self-Organization; Guided Self-Organization; Information-Theoretic Incentives for Social Interaction; Information-Theoretic Incentives for Swarms; Information Theory and Computer Game AI
Guest Editor Dr. Georg Martius
Cognition and Neurosciences, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Website: http://www.mis.mpg.de/jjost/members/georg-martius.html
E-Mail: martius@mis.mpg.de
Phone: +49 341 9959 545
Interests: Autonomous Robots; Self-Organization; Guided Self-Organization; Information Theory; Dynamical Systems; Machine Learning; Neuroscience of Learning; Optimal Control
Guest Editor Dr. Keyan Ghazi-Zahedi
Information Theory of Cognitive Systems, Max Planck Institute for Mathematics in the Sciences, Inselstrasse 22, 04103 Leipzig, Germany
Website: http://personal-homepages.mis.mpg.de/zahedi
E-Mail: zahedi@mis.mpg.de
Phone: +49 341 9959 535
Interests: Embodied Artificial Intelligence; Information Theory of the Sensorimotor Loop; Dynamical Systems; Cybernetics; Self-Organisation; Synaptic Plasticity; Evolutionary Robotics
Guest Editor Dr. Daniel Polani
Department of Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
Website: http://homepages.feis.herts.ac.uk/~comqdp1/
E-Mail: d.polani@herts.ac.uk
Interests: artificial intelligence; artificial life; information theory for intelligent information processing; sensor evolution; collective and multiagent systems
At the end of April, I read an article entitled “In the Margins” in the Johns Hopkins University Arts & Sciences magazine. I was particularly struck by the comments of eminent scholar Jacques Neefs on page thirteen (or paragraph 20) about computers making marginalia a thing of the past:
Neefs believes contemporary literature is losing a valuable component in an age when technology often precludes and trumps the need to save manuscripts or rough drafts. But it is not something that keeps him up at night. ‘The modern technique of computers and everything makes [marginalia] a thing of the past,’ he says. ‘There’s a new way of creation. Some would say it’s tragic, but something new has been invented. I don’t consider it tragic. There are still great writers who write and continue to have a way to keep the process.’
Jacques Neefs (Image courtesy of Johns Hopkins University)
I actually think that he may be completely wrong and that current technology actually allows us to keep far more marginalia! (Has anyone heard of digital exhaust?) The bigger issue may be that many writers just don’t know how to keep a better running log of their work to maintain all the relevant marginalia they’re actually producing. (Of course there’s also the subsequent broader librarian’s “digital dilemma” of maintaining formats for the future. As an example, think about how easy or hard it might be for you to read that ubiquitous 3.5 inch floppy disk you used in 1995.)
As a technologist who has spent many years in the entertainment industry, I feel compelled to point everyone towards the concept of revision control (or version control) within the realm of computer science. Though it’s primarily used for tracking changes in computer programs and is often a tool used by large teams of programmers, it can very easily be used for tracking changes in almost any type of writing: novels, short stories, screenplays, legal contracts, or textual documentation of nearly any sort.
Example Use Cases for Revision Control
Publishing
As a direct example, I’m using what is known as a Git repository to track every change I make in a textbook I’m currently writing. I can literally go back and view every change I’ve made since beginning the project, so though I’m directly revising one (or more) text files, all of my “marginalia” and revisions are saved and available. Currently I’m only doing it for my own reference and for additional backup, not supposing that anyone other than myself or an editor may ever want to peruse it. If I were working in conjunction with others, there are ways for me to track the changes, edits, or notes that they (perhaps an editor or collaborator) might make.
In addition to the general back-up of the project (in case of catastrophic computer failure), I also have the ability to go back and find that paragraph (or those multiple pages) I deleted in haste last week but desperately want back now, instead of having to recreate them de novo.
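For the curious, here is a minimal sketch of what that daily workflow looks like when driven from Python via the standard subprocess module. The manuscript file name, its contents, and the commit messages are hypothetical stand-ins, and this assumes Git is installed and configured on your machine:

```python
import pathlib
import subprocess
from datetime import date

def git(*args):
    """Run a git command in the current directory and return its output."""
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

git("init")  # one-time: turn the manuscript folder into a repository

chapter = pathlib.Path("chapter-01.txt")        # hypothetical manuscript file
chapter.write_text("It was a dark and stormy night.\n")
git("add", "-A")
git("commit", "-m", "First draft of chapter 1")

# A later writing session revises the text; snapshot it again.
chapter.write_text("It was a bright and cloudless morning.\n")
git("add", "-A")
git("commit", "-m", f"Writing session {date.today()}")

print(git("log", "--oneline"))        # every snapshot ever made, newest first
print(git("diff", "HEAD~1", "HEAD"))  # exactly what the last session changed
```

In practice one runs the same two or three commands at the end of each writing session; the history (the “marginalia”) accumulates automatically.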
Because it’s all digital, future scholars also won’t have the problem of parsing my handwriting, as has occasionally come up in differentiating Mary Shelley’s writing from that of her husband in digital projects like the Shelley-Godwin Archive. The fact that all changes are tracked and placed in a tree-like structure will indicate who wrote what and when, and will indicate which changes were ultimately accepted and merged into the final version.
Screenplays in Hollywood
One particular use case I can easily see for such technology is tracking changes in screenplays over time. I’m honestly shocked that production companies, or even more likely studios, don’t use such technology to follow changes in drafts over time. In the end, such tracking would certainly make Writers Guild of America (WGA) arbitrations much easier, as literally every contribution to a script can be tracked to give screenwriters appropriate credit. The end result, with the easy ability to time-machine one’s way back into older drafts, is truly lovely, and the outputs give so much more information about changes in the script compared to the traditional and all-too-simple (*) which screenwriters use to indicate that something/anything changed on a specific line, or the different colored pages which are used on scripts during production.
I can also picture future screenwriters using services like GitHub as platforms for storing and distributing their screenplays to potential agents, managers, and producers.
Redlining Legal Documents
Having seen thousands of legal agreements go back and forth over the years, I see revision control as a natural tool for tracking the redlining and changes of legal documents as they evolve before they are finally executed (or never executed at all). I have to imagine that being able to abstract out the appropriate metadata may, in the long run, actually help attorneys, agents, etc. become better negotiators, but something like this is a project for another day.
Academia
In addition to its use in direct research projects undertaken by academics like Neefs, academics should look into using revision control in their own daily work and writings. While writing a book, paper, journal article, essay, or monograph (or, for graduate students, a thesis), one could use one’s own Git repository not only to save and back up all of one’s work, but also to preserve for future scholars the “marginalia” one creates while manufacturing one’s written thoughts in digital form.
I can easily picture Git as a very simple “next step” in furthering the concept of the digital humanities as well as in helping to bridge the gap between C.P. Snow’s “two cultures.” (I’d also suggest that revision control is a relatively simple step one could take before learning a particular programming language, which I think should be a mandatory tool in everyone’s daily toolbox regardless of their field(s) of interest.)
Start Using Revision Control
“But how do I get started?” you ask.
Know going in that it may take part of a day to get things set up and running, but once you’ve started with the basics, things are actually pretty easy and you can continue to learn the more advanced subtleties as you progress. Once things are working smoothly, the additional overhead you’ll be expending won’t be too much more than the old method of hitting Ctrl-S to save one of your old Word documents in the time before auto-save became ubiquitous.
First, one should choose one of the myriad revision control systems that exist. For the sake of brevity in this short introductory post, I’ll simply suggest that users take a very close look at Git because of its ubiquity and popularity in the computer science world, the tremendous amount of free information and support available for it from a variety of sites on the internet, and its availability for all major operating systems (Windows, MacOS, and Linux). Git also has a relatively long and robust life within the computer science community, meaning that it’s very stable and has many more resources for the uninitiated to draw upon.
Once one has Git installed on their computer and has begun using it, I’d then recommend linking one’s local copy of the repository to a cloud storage solution like either GitHub or BitBucket. While GitHub is certainly one of the most popular Git-related services out there (because it acts, in part, as the hub for a large portion of the open internet and thus promotes sharing), I often recommend using BitBucket, as it allows free unlimited private but still share-able repositories, while GitHub requires a small subscription fee for keeping one’s work private. Having a repository in the cloud will help tremendously in that your work will be available and downloadable from almost anywhere and because it also serves as a de facto back-up solution for your work.
I’ve recently been playing around with version control to help streamline the writing/editing process for a book I’ve been writing. Though Git and its variants probably seem more daunting than they should to the everyday user, they really represent a very powerful tool. I’ve spent less than two days learning the basics of both Git and hosted repositories (GitHub and Bitbucket), and it has been more than well worth the minor effort.
There is a huge wealth of information on revision control in general and on installing and using Git available on the internet, including full textbooks. For complete beginners, I’d recommend starting with The Chronicle’s “A Gentle Introduction to Version Control.” Keep in mind that though some of these resources look highly technical, it’s because many try to enumerate every function one could potentially desire, when even just the basic core functionality is more than enough to begin with. (I could analogize it to learning to drive a car versus reading the full manual so that you know how to take the engine apart and put it back together from scratch. To start with revision control, you only need to learn to “drive.”) Professors might also avail themselves of their local institutional libraries, which may host small sessions on learning such tools, or of the help of their colleagues or students in the computer science department. For others, I’d recommend taking a look at Git’s primary website. BitBucket has an excellent step-by-step tutorial (and troubleshooting guide) for setting up the requisite software and using it.
What do you use for revision control?
I’ll welcome any thoughts, experiences, or additional resources one might want to share with others in the comments.
…I hope that in addition there will be some readers who will simply take pleasure in a mathematical journey toward a high level of sophistication. There are many who would enjoy this trip, just as there are many who might enjoy listening to a symphony with a clear melodic line.
As many may know or have already heard, Dr. Mike Miller, a retired mathematician from RAND and long-time math professor at UCLA, is offering a course on Introduction to Lie Groups and Lie Algebras this fall through UCLA Extension. Whether you’re a professional mathematician, engineer, physicist, physician, or even a hobbyist interested in mathematics, you’ll be sure to get something interesting out of this course, not to mention the camaraderie of 20-30 other “regulars” with widely varying backgrounds (actors to surgeons and evolutionary theorists to engineers) who’ve been taking almost everything Mike has offered over the years (and yes, he’s THAT good — we’re sure you’ll be addicted too).
“Beginners” Welcome!
Even if it’s been years since you last took Calculus or Linear Algebra, Mike (and the rest of the class) will help you get quickly back up to speed to delve into what is often otherwise a very deep subject. If you’re interested in advanced physics, quantum mechanics, quantum information, or string theory, this is one of the topics that is de rigueur for delving into them deeply and understanding them better. The topic is also one near and dear to the hearts of those in robotics, graphics, 3-D modelling, gaming, and areas utilizing multi-dimensional rotations. And naturally, it’s simply a beautiful and elegant subject for those who have no need to apply it to anything, but who just want to meander their way through higher mathematics for the fun of it (this group will comprise the largest portion of the class, by the way).
Whether you’ve been away from serious math for decades or use it every day, or even if you’ve never gone past Calculus or Linear Algebra, this is bound to be the most entertaining thing you can do with your Tuesday nights in the fall. If you’re not sure what you’re getting into (or are scared a bit by the course description), I highly encourage you to come and join us for at least the first class before you pass up the opportunity. I’ll mention that the great majority of new students to Mike’s classes join the ever-growing group of regulars who take almost everything he teaches subsequently. (For the reticent, I’ll mention that one of the first courses I took from Mike was Algebraic Topology, which generally requires a few semesters of Abstract Algebra and a semester of Topology as prerequisites. I’d taken neither of these, but due to Mike’s excellent lecture style and desire to make everything comprehensible, I was able to do exceedingly well in the course.) I’m happy to chat with those who may be hesitant. Also keep in mind that you can register to take the class for a grade, pass/fail, or no grade at all, to suit your needs/lifestyle.
My classes have the full spectrum of students from the most serious to the hobbyist to those who are in it for the entertainment and ‘just enjoy watching it all go by.’
Mike Miller, Ph.D.
As a group, some of us have a collection of a few dozen texts in the area which we’re happy to loan out as well. In addition to the one recommended text (Mike always gives such comprehensive notes that any text for his classes is purely supplemental at best), several of us have also found some good similar texts:
Given the breadth and diversity of the backgrounds of students in the class, I’m sure Mike will spend some reasonable time at the beginning [or later in the class, as necessary] doing a quick overview of some linear algebra and calculus related topics that will be needed later in the quarter(s).
Further information on the class and a link to register can be found below. If you know of others who might be interested in this, please feel free to forward it along – the more the merrier.
I hope to see you all soon.
Introduction to Lie Groups and Lie Algebras
MATH X 450.6 / 3.00 units / Reg. # 249254W
Professor: Michael Miller, Ph.D.
Start Date: 9/30/2014
Location: UCLA, 5137 Math Sciences Building
Tuesday, 7-10pm
September 30 – December 16, 2014
11 meetings total (no mtg 11/11)
Register here: https://www.uclaextension.edu/Pages/Course.aspx?reg=249254
Course Description
A Lie group is a differentiable manifold that is also a group for which the product and inverse maps are differentiable. A Lie algebra is a vector space endowed with a binary operation that is bilinear, alternating, and satisfies the so-called Jacobi identity. This course, the first in a 2-quarter sequence, is an introductory survey of Lie groups, their associated Lie algebras, and their representations. This first quarter will focus on the special case of matrix Lie groups–including general linear, special linear, orthogonal, unitary, and symplectic. The second quarter will generalize the theory developed to the case of arbitrary Lie groups. Topics to be discussed include compactness and connectedness, homomorphisms and isomorphisms, exponential mappings, the Baker-Campbell-Hausdorff formula, covering groups, and the Weyl group. This is an advanced course, requiring a solid understanding of linear algebra and basic analysis.
In all the sadness of the passing of Robin Williams, I nearly forgot I’d “written” a short joke for him just after I’d first moved to Hollywood.
Killing some time just before I started work at Creative Artists Agency, I finagled my way into a rough-cut screening of Robin Williams’s iconoclastic role in PATCH ADAMS on the Universal lot. Following the screening, I had the pleasure of chatting with [read: bum-rushed like a crazy fan] Tom Shadyac for a few minutes on the way out. I told him that, as a recent grad of Johns Hopkins University who had spent a LOT of time in hospitals, I thought they were missing their obligatory hospital gown joke. But to give it a karate chop (and because I’d graduated relatively recently), they should put it into the graduation at the “end” and close on a high note.
I didn’t see or hear anything about it until many months later when I went to Mann’s Chinese Theater for the premiere and saw the final cut of the ending of the film, which I’ve clipped below. Just for today, I’m wearing the same red foam clown nose that I wore to the premiere that night.
I recently ran across this TED talk and felt compelled to share it. It really highlights some of my own personal thoughts on how science should be taught and done in the modern world. It also overlaps much of the reading I’ve been doing lately on innovation and creativity. If these don’t get you to watch, then perhaps mentioning that Alon manages to apply comedy and improvisation techniques to science will.
[ted id=2020]
Uri Alon was already one of my scientific heroes, but this adds a lovely garnish.
A most interesting article on The Great Courses (TGC) appeared in the New York Times on Saturday. TGC has been featured in newspaper articles before: scads of articles, in fact, over the last 20-plus years. But those articles (at least the ones I’m aware of and I am aware of most of them) have always focused on the content of TGC offerings: that they are academic courses offered up on audio/video media. This article, written by the Times’ TV critic Neil Genzlinger, is different. It focuses on TGC as a video production company and on TGC courses as slick, professional, high-end television programs.
My goodness, how times have changed.
Long-time readers of this blog will recall my descriptions of TGC in its early days. It’s worth rehashing a bit of that here, if only to highlight the incredible evolution of the company from a startup to the polished gem it is today.
I made my first course back in May of 1993: the first edition of “How to Listen to and Understand Great Music”. We had no “set”; I worked in front of a blue screen (or a “traveling matte”). The halogen lighting created an unbelievable amount of heat and glare. The stage was only about 6 feet deep but about 20 feet wide. With my sheaf of yellow note paper clutched in my left hand, I roamed back-and-forth, in constant motion, teaching exactly the way I did in the classroom. I made no concessions to the medium; to tell the truth, it never occurred to me or my director at the time that we should do anything but reproduce what I did in the classroom. (My constant lateral movement did, however, cause great consternation among the camera people, who were accustomed to filming stationary pundits at CNN and gasbags at C-SPAN. One of our camera-dudes, a bearded stoner who will remain nameless, kept telling me “Man . . . I cannot follow you, man. Please, man, please!” He was a good guy though, and offered to “take my edge off” by lighting me up during our breaks. I wisely declined.)
We worked with a studio audience in those days: mostly retirees who were free to attend such recording sessions, many of whom fell asleep in their chairs after lunch or jingled change in their pockets or whose hearing aids started screaming sounds that they could not hear but I most certainly did. Most distracting were the white Styrofoam coffee cups; in the darkened studio their constant (if irregular) up-and-down motion reminded me of the “bouncing ball” from the musical cartoons of the 1930s, ‘40s, and ‘50s.
I could go on (and I will, at some other time), though the point is made: in its earliest days TGC was simply recording more-or-less what you would hear in a classroom or lecture hall. I am reminded of the early days of TV, during which pre-existing modes of entertainment – the variety show, theatrical productions, puppet shows – were simply filmed and broadcast. In its earliest permutation, the video medium did not create a new paradigm so much as record old ones. But this changed soon enough, and the same is true for TGC. Within a few years TGC became a genuine production company, in which style, look, and mode of delivery became as important as the content being delivered. And this is exactly as it should be. Audio and video media demand clarity and precision; the “ahs” and “ums” and garbled pronunciations and mismatched tenses that we tolerate in a live lecture are intolerable in media, because we are aware of the fact that in making media they can (and should) be corrected.
Enough. Read the article. Then buy another TGC course; preferably one of mine. And while watching and/or listening, let us be aware, as best as we can, of the tens-of-thousands of hours that go into making these courses – these productions – the little masterworks that they indeed are.
My response to his post with some thoughts of my own follows:
This is an interesting, and very germane, review. As someone who’s both worked in the entertainment industry and followed the MOOC (massively open online courseware) revolution over the past decade, I very often consider the physical production value of TGC’s offerings and have been generally pleased at their steady improvement over time. Not only do they offer some generally excellent content, but they’re entertaining and pleasing to watch. From a multimedia perspective, I’m always amazed at what they offer, and at the fact that the difference between the video and the audio-only versions isn’t as drastic as one might otherwise expect. Though there are times when I think TGC might include some additional graphics, maps, etc., either in the course itself or in the booklets, I’m impressed that the courses still function exceptionally well without them.
Within the MOOC revolution, Sue Alcock’s Coursera course Archaeology’s Dirty Little Secrets is still by far the best-produced multimedia course I’ve come across. It’s going to take a lot of serious effort for other courses to come up to this level of production, however. It’s one of the few courses which I think rivals The Teaching Company’s offerings thus far. Unfortunately, the increased competition in the MOOC space is going to eventually encroach on the business model of TGC, and I’m curious to see how that will evolve and how it will benefit students. Will TGC be forced to offer online fora for students to interact with each other the way most MOOCs do? Will MOOCs be forced to drastically increase their production quality to the level of TGC? Will certificates or diplomas be offered for courseware? Will the subsequent models be free (like most MOOCs now), paid like TGC, or some mixture of the two?
One area which neither platform seems to be doing very well at present is offering more advanced coursework. Naturally the primary difficulty is in having enough audience to justify the production effort. The audience for a graduate level topology class is simply far smaller than introductory courses in history or music appreciation, but those types of courses will eventually have to exist to make the enterprises sustainable – in addition to the fact that they add real value to society. Another difficulty is that advanced coursework usually requires some significant work outside of the lecture environment – readings, homework, etc. MOOCs seem to have a slight upper hand here while TGC has generally relied on all of the significant material being offered in a lecture with the suggestion of reading their accompanying booklets and possibly offering supplementary bibliographies. When are we going to start seeing course work at the upper-level undergraduate or graduate level?
The nice part is that with evolving technology and capabilities, there are potentially new pedagogic methods that will allow easier teaching of some material that may not have been possible previously. (For some brief examples, see this post I wrote last week on Latin and the digital humanities.) In particular, I’m sure many of us have been astounded and pleased at how Dr. Greenberg managed the supreme gymnastics of offering “Understanding the Fundamentals of Music” without delving into traditional music theory and written notation, but will he be able to actually offer that in new and exciting ways to increase our levels of understanding of music and then spawn off another 618 lectures that take us all further and deeper into his exciting world? Perhaps it comes in the form of a multimedia mobile app? We’re all waiting with bated breath, because regardless of how he pulls it off, we know it’s going to be educational, entertaining, and truly awe-inspiring.
Following my commentary, Scott Ableman, the Chief Marketing Officer for TGC, responded with the following, which I find very interesting:
Chris, all excellent observations (and I agree re Alcock’s course). I hope you’ll be pleased to learn that the impact of MOOCs, if any, on The Great Courses has been positive, in that there is a rapidly growing awareness and interest in the notion that lifelong learning is possible via digital media. As for differentiating vs. MOOCs, people who know about The Great Courses generally find the differences to be self-evident:
Curation: TGC scours the globe to find the world’s greatest professors;
Formats: The ability to enjoy a course in your car or at home on your TV or on your smartphone, etc.;
Lack of pressure: Having no set schedule and doing things at your own pace with no homework or exams (to be sure, there are some for whom sitting at a keyboard at a scheduled time and taking tests and getting a certificate is quite valuable, but that’s a different audience).
The Great Courses once were the sole claimant to a fourth differentiator, which is depth. Obviously, the proliferation of fairly narrow MOOCs provides as much depth on many topics, and in some cases addresses your desire for higher-level courses. Still, TGC offers significant depth when compared to the alternatives on TV or audio books. I must say that I was disappointed that Genzlinger chose to focus on this notion that professors these days “don’t know how to lecture.” He suggests that TGC is in the business of teaching bad lecturers how to look good in front of a camera. This of course couldn’t be further from the truth. Anybody familiar with The Great Courses knows that among its greatest strengths is its academic recruiting team, which finds professors like Robert Greenberg and introduces them to lifelong learners around the world.
Stephen Greenblatt provides an interesting synthesis of history and philosophy, and his love of the humanities certainly shines through. The book stands as an almost over-exciting commercial not only for reading Lucretius’s “De Rerum Natura” (“On the Nature of Things”), but for motivating the reader to actually go out and learn Latin to appreciate it properly.
I would have loved more direct analysis and evidence of the immediate impact of Lucretius in the 1400s, as well as a longer in-depth analysis of the continuing impact through the 1700s.
The first half of the book is excellent at painting a vivid portrait of the life and times of Poggio Bracciolini which one doesn’t commonly encounter. I’m almost reminded of Stacy Schiff’s Cleopatra: A Life, though Greenblatt has far more historical material with which to paint the picture. I may also be biased in that I’m more interested in the mechanics of the scholarship of the resurgence of the classics in the Renaissance than I was in that particular political portion of the first century BCE. Though my background on the history of the time periods involved is reasonably advanced, I fear that Greenblatt may be leaving out a tad too much for the broader reading public, who may not be so well versed. The fact that he does bring so many clear specifics to the forefront may more than compensate for this, however.
In some interesting respects, this could be considered the humanities counterpart to the more science-centric story of Owen Gingerich’s The Book Nobody Read: Chasing the Revolutions of Nicolaus Copernicus. Though Simon Winchester is still by far my favorite nonfiction writer, Greenblatt does an exceedingly good job of narrating what isn’t necessarily a very linear story.
Greenblatt includes lots of interesting tidbits and some great history. I wish it had continued on longer… I’d love to have the spare time to lose myself in the extensive bibliography. Though the footnotes, bibliography, and index account for about 40% of the book, the average reader should take a reasonable look at the quarter or so of the footnotes which add some interesting additional background and subtleties to the text, as well as to some of the translations that are discussed therein.
I am definitely very interested in the science behind textual preservation, which is presented as the underlying motivation for the action in this book. I wish that Greenblatt had covered some of these aspects in the same vivid detail he exhibited for other portions of the story. Perhaps summarizing some more of the relevant scholarship involved in transmitting and restoring old texts, as presented in Bart Ehrman and Bruce Metzger’s The Text of the New Testament: Its Transmission, Corruption & Restoration, would have been a welcome addition given the audience of the book. It might also have painted a more nuanced picture of the character of the Church and its predicament as presented in the text.
Though I only caught one small reference to modern day politics (a prison statistic for America which was obscured in a footnote), I find myself wishing that Greenblatt had spent at least a few paragraphs or even a short chapter drawing direct parallels to our present-day political landscape. I understand why he didn’t broach the subject as it would tend to date an otherwise timeless feeling text and generally serve to dissuade a portion of his readership and in particular, the portion which most needs to read such a book. I can certainly see a strong need for having another short burst of popularity for “On the Nature of Things” to assist with the anti-science and overly pro-religion climate we’re facing in American politics.
For those interested in the topic, I might suggest that this text has some flavor of Big History in its DNA. It covers not only a fairly significant chunk of recorded human history, but has some broader influential philosophical themes that underlie a potential change in the direction of history which we’ve been living for the past 300 years. There’s also an intriguing overlap of multidisciplinary studies going on in terms of the history, science, philosophy, and technology involved in the multiple time periods discussed.
This review was originally posted on GoodReads.com on 7/8/2014. View all my reviews
I rarely, if ever, reblog anything here, but this particular post from John Baez’s blog Azimuth is so on-topic, that attempting to embellish it seems silly.
Entropy and Information in Biological Systems (Part 2)
John Harte, Marc Harper and I are running a workshop! Now you can apply here to attend:
* Information and entropy in biological systems, National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee, Wednesday-Friday, 8-10 April 2015.
Click the link, read the stuff and scroll down to “CLICK HERE” to apply. The deadline is 12 November 2014.
Financial support for travel, meals, and lodging is available for workshop attendees who need it. We will choose among the applicants and invite 10-15 of them.
The idea
Information theory and entropy methods are becoming powerful tools in biology, from the level of individual cells, to whole ecosystems, to experimental design, model-building, and the measurement of biodiversity. The aim of this investigative workshop is to synthesize different ways of applying these concepts to help systematize and unify work in biological systems. Early attempts at “grand syntheses” often misfired, but applications of information theory and entropy to specific highly focused topics in biology have been increasingly successful. In ecology, entropy maximization methods have proven successful in predicting the distribution and abundance of species. Entropy is also widely used as a measure of biodiversity. Work on the role of information in game theory has shed new light on evolution. As a population evolves, it can be seen as gaining information about its environment. The principle of maximum entropy production has emerged as a fascinating yet controversial approach to predicting the behavior of biological systems, from individual organisms to whole ecosystems. This investigative workshop will bring together top researchers from these diverse fields to share insights and methods and address some long-standing conceptual problems.
So, here are the goals of our workshop:
To study the validity of the principle of Maximum Entropy Production (MEP), which states that biological systems – and indeed all open, non-equilibrium systems – act to produce entropy at the maximum rate.
To familiarize all the participants with applications to ecology of the MaxEnt method: choosing the probabilistic hypothesis with the highest entropy subject to the constraints of our data (see the numerical sketch after this list). We will compare MaxEnt with competing approaches and examine whether MaxEnt provides a sufficient justification for the principle of MEP.
To clarify relations between known characterizations of entropy, the use of entropy as a measure of biodiversity, and the use of MaxEnt methods in ecology.
To develop the concept of evolutionary games as “learning” processes in which information is gained over time.
To study the interplay between information theory and the thermodynamics of individual cells and organelles.
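To make the MaxEnt method in the second goal concrete, here is a small numerical sketch of my own (an illustration, not part of the workshop materials). Among all distributions for a six-sided die with a prescribed average roll, the entropy-maximizing one has the exponential (Gibbs) form, and the Lagrange multiplier can be found by simple bisection:

```python
import numpy as np

def maxent_distribution(values, target_mean, tol=1e-10):
    """Maximum-entropy distribution on `values` subject to a fixed mean.
    The solution has Gibbs form p_i proportional to exp(-beta * x_i);
    beta is found by bisection, since the mean is monotone in beta."""
    x = np.asarray(values, dtype=float)

    def dist(beta):
        e = -beta * x
        e -= e.max()               # stabilize the exponentials against overflow
        p = np.exp(e)
        return p / p.sum()

    lo, hi = -50.0, 50.0           # the mean is decreasing in beta on this bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if dist(mid) @ x > target_mean:
            lo = mid
        else:
            hi = mid
    return dist((lo + hi) / 2)

# A die constrained to average 4.5 instead of the fair 3.5:
p = maxent_distribution([1, 2, 3, 4, 5, 6], 4.5)
print(np.round(p, 4))              # mass tilts smoothly toward the high faces
print(float(p @ np.arange(1, 7)))  # approximately 4.5, as constrained
```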
If you’ve got colleagues who might be interested in this, please let them know. You can download a PDF suitable for printing and putting on a bulletin board by clicking on this:
I’ve been a proponent and user of a variety of mnemonic systems since I was about eleven years old. The two biggest and most useful, in my mind, are commonly known as the “method of loci” and the “major system.” The major system is also variously known as the phonetic number system, the phonetic mnemonic system, or Hérigone’s mnemonic system, after the French mathematician and astronomer Pierre Hérigone (1580-1643), who is thought to have originated its use.
The major system generally works by converting numbers into consonant sounds and then into words by adding vowels, under the overarching principle that images (of the words) can be remembered more easily than the numbers themselves. For instance, one could memorize a grocery list of a hundred items by associating each item with the word corresponding to its number on the list. As an example, if item 22 on the list is lemons, one could translate the number 22 as “nun” within the major system and then picture a nun with lemons – perhaps a nun in full habit taking a bath in lemons, to make the image stick in one’s memory better. Then at the grocery store, upon arriving at number 22 on the list, one automatically translates the number to “nun,” which almost immediately conjures the image of a nun bathing in lemons and thereby recalls the item to be remembered. This comes in handy particularly when one needs to remember large lists of items in and out of order.
The following generalized chart, which can be found in a host of books and websites on the topic, is fairly canonical for the overall system:
| Numeral | Associated sounds (IPA) | Associated letters | Mnemonic for remembering the numeral and consonant relationship |
| --- | --- | --- | --- |
| 0 | /s/ /z/ | s, z, soft c | “z” is the first letter of zero; the other letters have a similar sound |
| 1 | /t/ /d/ | t, d | t & d have one downstroke and sound similar (some variant systems include “th”) |
| 2 | /n/ | n | n has two downstrokes |
| 3 | /m/ | m | m has three downstrokes; m looks like a “3” on its side |
| 4 | /r/ | r | last letter of four; 4 and R are almost mirror images of each other |
| 5 | /l/ | l | L is the Roman numeral for 50 |
| 6 | /ʃ/ /ʒ/ /tʃ/ /dʒ/ | j, sh, soft g, soft “ch” | a script j has a lower loop; g is almost a 6 rotated |
| 7 | /k/ /ɡ/ | k, hard c, hard g, hard “ch”, q, qu | capital K “contains” two sevens (some variant systems include “ng”) |
| 8 | /f/ /v/ | f, v | script f resembles a figure-8; v sounds similar (v is a voiced f) |
| 9 | /p/ /b/ | p, b | p is a mirror-image 9; b sounds similar and resembles a 9 rolled around |
| Unassigned | vowel sounds | w, h, y | w and h are considered half-vowels; these can be used anywhere without changing a word’s number value |
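Since the chart above is essentially a lookup table, it is easy to operationalize in code. Here is a minimal Python sketch of my own that approximates the system on spelling rather than true phonetics; a faithful implementation would operate on consonant sounds, and the digraph and soft/hard-letter handling here is a rough assumption:

```python
# Approximate letter-to-digit mapping from the chart above.  Working on
# spelling rather than sounds means soft c (0) and soft g (6) are folded
# into their hard forms; digraphs are handled with a small lookup first.
DIGITS = {
    's': '0', 'z': '0',
    't': '1', 'd': '1',
    'n': '2', 'm': '3', 'r': '4', 'l': '5',
    'j': '6',
    'k': '7', 'q': '7', 'g': '7', 'c': '7',
    'f': '8', 'v': '8',
    'p': '9', 'b': '9',
}
DIGRAPHS = {'sh': '6', 'ch': '6', 'ck': '7', 'ng': '7', 'ph': '8'}

def word_to_number(word: str) -> str:
    """Translate a word to its major-system digit string.
    Vowels, w, h, and y carry no value and are skipped."""
    word = word.lower()
    digits, i = [], 0
    while i < len(word):
        if word[i:i + 2] in DIGRAPHS:            # two-letter sounds first
            digits.append(DIGRAPHS[word[i:i + 2]])
            i += 2
        elif word[i] in DIGITS:
            # collapse doubled letters: "rr" in "berry" is one /r/ sound
            if not (i > 0 and word[i - 1] == word[i]):
                digits.append(DIGITS[word[i]])
            i += 1
        else:
            i += 1                               # vowels, w, h, y: no value
    return ''.join(digits)

print(word_to_number('nun'))    # '22', the example from the text above
print(word_to_number('lemon'))  # '532'
```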
There are a variety of ways to use the major system as a code in addition to its uses in mnemonic settings. When I was a youth, I used it to write coded messages and to encrypt a variety of things for personal use. After I had originally read Dr. Bruno Furst’s series of booklets entitled You Can Remember: A Home Study Course in Memory and Concentration,[1] I had always wanted to spend some time creating an alternate method of writing using the method. Sadly I never made the time to do the project, but yesterday I made a very interesting discovery that, to my knowledge, doesn’t seem to have been previously noticed!
My discovery began last week when I read an article in The Atlantic by journalist Dennis Hollier entitled How to Write 225 Words Per Minute with a Pen: A Lesson in the Lost Technology of Shorthand.[2] In the article, which starts off with a mention of the Livescribe pen – one of my favorite tools – Mr. Hollier outlines the use of the Gregg System of Shorthand, which was invented by John Robert Gregg in 1888. The description of the method was intriguing enough that I read a dozen additional general articles on shorthand on the internet and purchased a copy of Louis A. Leslie’s two-volume text Gregg Shorthand: Functional Method.[3]
I was shocked, on page x of the front matter, just before the first page of the text, to find the following “Alphabet of Gregg Shorthand”:
Alphabet of Gregg Shorthand
Gregg Shorthand uses EXACTLY the same consonant-type breakdown of the alphabet as the major system!
Apparently I wasn’t the first to have the idea to turn the major system into a system of writing. The fact that the consonant breakdowns for the major system coincide almost directly with those for the shorthand method used by Gregg cannot be a coincidence!
The Gregg system works incredibly well precisely because the major system works so well. The biggest difference between the two systems is that Gregg utilizes a series of strokes (circles and semicircles) to indicate particular vowel sounds, which allows for better differentiation of words; the major system doesn’t generally take vowels into consideration. From an information-theoretic standpoint, this is almost required to make the coding from one alphabet to the other possible, but much like ancient Hebrew, leaving out the vowels doesn’t remove that much information. Gregg, like Hebrew, also uses dots and dashes above or below certain letters to indicate the precise sound of many of its vowels.
The upside of all of this is that the major system is incredibly easy to learn and use, and from there, learning Gregg shorthand is just a hop, skip, and a jump – heck, it’s really only just a hop because the underlying structure is so similar. Naturally, as with the major system, one must commit some time to practicing it to improve speed and accuracy, but the general learning of the system is incredibly straightforward.
Because the associations between the two systems are so similar, I wasn’t too surprised to find that some of the descriptions of why certain strokes were used for certain letters were very similar to the mnemonics for why certain letters were used for certain numbers in the major system.
From Dr. Bruno Furst’s “You Can Remember!”: the mnemonic for remembering 6, 7, 8, & 9 in the major system.
From Louis Leslie’s “Gregg Shorthand: Functional Method”: the mnemonic for remembering the strokes for k and g.
One thing I have noticed in my studies on these topics is the occasional reference to the letter combinations “NG” and “NK”. I’m curious why these are singled out in some of these systems. I have a strong suspicion that their inclusion/exclusion in various incarnations of their respective systems may be helpful in dating the evolution of these systems over time.
I’m aware that various versions of shorthand have appeared over the centuries, with the first recorded having been the “Tironian Notes” of Marcus Tullius Tiro (103-4 BCE), who apparently used his system to write down the speeches of his master Cicero. I’m now much more curious at what point the concepts for shorthand and the major system crossed paths or converged. My assumption would be that it happened in the late Renaissance, but it would be nice to have the underlying references and support for such a timeline. Perhaps it was with Timothy Bright’s publication of Characterie; An Arte of Shorte, Swifte and Secrete Writing by Character (1588),[4] John Willis’s Art of Stenography (1602),[5] Edmond Willis’s An abbreviation of writing by character (1618),[6] or Thomas Shelton’s Short Writing (1626)?[7] Shelton’s system was certainly very popular and well known because it was used by both Samuel Pepys and Sir Isaac Newton.
Certainly some in-depth research will tell, though if anyone has ideas, please don’t hesitate to indicate your ideas in the comments.
UPDATE on 7/6/14:
I’m adding a new chart making the correspondence between the major system and Gregg Shorthand more explicit.
A chart indicating the correspondences between the major system and Gregg Shorthand.
References
1. Furst B. You Can Remember: A Home Study Course in Memory and Concentration. Markus-Campbell Co.; 1965.
2. Hollier D. How to Write 225 Words Per Minute with a Pen: A Lesson in the Lost Technology of Shorthand. The Atlantic; 2014.
3. Leslie LA. Gregg Shorthand: Functional Method. 2 vols.
4. Bright T (1550-1615). Characterie; An Arte of Shorte, Swifte and Secrete Writing by Character. 1st ed. I. Windet; reprinted by W. Holmes, Ulverstone; 1588. https://archive.org/details/characteriearteo00brig.
5. Willis J. Art of Stenography. 1602.
6. Willis E. An Abbreviation of Writing by Character. 1618.
7. Shelton T. Short Writing. 1626.
I’ve long been a student of the humanities (and particularly the classics) and have recently begun reviewing my very old and decrepit knowledge of Latin. It’s been two decades since I made a significant study of classical languages, and lately (as the result of conversations with friends like Dave Harris, Jim Houser, Larry Richardson, and John Kountouris) I’ve been drawn to reviewing them in order to read a variety of classical texts in their original languages. Fortunately, in the intervening years, quite a lot has changed in the tools relating to pedagogy for language acquisition.
A copy of Jenney’s Latin text, which I used 20 years ago; I recently acquired a new copy for the pittance of $3.25.
Internet
The biggest change in the intervening time is the spread of the internet, which supplies a broad variety of related websites offering not only interesting resources for things like basic reading and writing, but even audio sources, apparently including the nightly news in Latin. There are a variety of blogs on Latin, as well as online courseware, podcasts, pronunciation recordings, and even free textbooks. I’ve written briefly about the RapGenius platform before, but I feel compelled to mention it as a potentially powerful resource as well (it hosts texts by Julius Caesar, Seneca, Ovid, Cicero, et al.). There is a paucity of these sources in a general sense in comparison with other modern languages, but given the size of the niche, there is quite a lot out there, and certainly a mountain in comparison to what existed only twenty years ago.
Software
There has also been a spread of pedagogic aids like flashcard software including Anki and Mnemosyne with desktop, web-based, and even mobile-based versions making learning available in almost any situation. The psychology and learning research behind these types of technologies has really come a long way toward assisting students to best make use of their time in learning and retaining what they’ve learned in long term memory. Simple mobile applications like Duolingo exist for a variety of languages – though one doesn’t currently exist for classical Latin (yet).
Digital Humanities
The other great change is the advancement of the digital humanities, which allows for a lot of interesting applications of knowledge acquisition. One particular application I ran across this week was the Dickinson College Commentaries (DCC). Specifically, a handful of scholars have compiled and documented a list of the most common core vocabulary words in Latin (and in Greek) based on their frequency of appearance in extant works. This very specific data is of interest to me in relation to my work in information theory, but it also becomes a tremendously handy tool when attempting to learn and master a language. It is a truly impressive fact that, simply by memorizing and mastering about 250 words in Latin, one can read and understand 50% of most written Latin; further, knowledge of 1,500 Latin words will put one at the 80% level of vocabulary mastery for most texts. Mastering even a very small list of vocabulary allows one to read a large variety of texts very comfortably. I can only think of the old concept of a concordance (generally limited to heavily studied texts like the Bible or possibly Shakespeare), which has now been put on some serious steroids for entire cultures. Another half step and one arrives at the Google Ngram Viewer.
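As a quick illustration of how such coverage numbers are computed from a frequency list, here is a small Python sketch; the toy corpus below is a stand-in, and a real run would use word counts over an entire corpus of Latin texts:

```python
from collections import Counter

def coverage_curve(freqs, thresholds=(0.5, 0.8)):
    """Given a {word: count} mapping, report how many of the most
    frequent words are needed to reach each coverage threshold."""
    total = sum(freqs.values())
    running, needed = 0, {t: None for t in thresholds}
    for rank, (word, count) in enumerate(Counter(freqs).most_common(), 1):
        running += count
        for t in thresholds:
            if needed[t] is None and running / total >= t:
                needed[t] = rank
    return needed

# Toy corpus standing in for real frequency data.
corpus = "in vino veritas in aqua sanitas in dubio pro reo".split()
freqs = Counter(corpus)
print(coverage_curve(freqs))  # e.g. {0.5: 3, 0.8: 6} for this toy corpus
```

Run over the DCC data, the same computation is what yields the 250-word and 1,500-word figures quoted above.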
The best part is that one can, with very little technical knowledge, easily download the DCC Core Latin Vocabulary (itself a huge research undertaking) and upload and share it through the Anki platform, for example, to benefit a fairly large community of other scholars, learners, and teachers. With a variety of easy-to-use tools, it may shortly become that much easier to learn a language like Latin – potentially to the point that it is no longer a dead language. For those interested, you can find my version of the shared DCC Core Latin Vocabulary for Anki online; the DCC’s Chris Francese has posted details and a version for Mnemosyne already.
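For those curious how such a deck can be built programmatically, here is a short sketch using the third-party genanki Python library; the model and deck IDs are arbitrary, and the two sample entries are hypothetical stand-ins rather than actual DCC data:

```python
import genanki  # third-party library: pip install genanki

# Model and deck IDs should be random but fixed; these are arbitrary examples.
model = genanki.Model(
    1284038103,
    'Latin Vocabulary',
    fields=[{'name': 'Latin'}, {'name': 'English'}],
    templates=[{
        'name': 'Latin -> English',
        'qfmt': '{{Latin}}',
        'afmt': '{{FrontSide}}<hr id="answer">{{English}}',
    }],
)

deck = genanki.Deck(2084301960, 'DCC Core Latin Vocabulary (sample)')

# Two sample entries; a real script would iterate over the downloaded list.
for latin, english in [('amo, amare', 'to love'),
                       ('aqua, aquae f.', 'water')]:
    deck.add_note(genanki.Note(model=model, fields=[latin, english]))

genanki.Package(deck).write_to_file('dcc_core_latin.apkg')  # importable in Anki
```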
[Editor’s note: Anki’s web service occasionally clears decks of cards from their servers, so if you find that the Anki link to the DCC Core Latin is not working, please leave a comment below, and we’ll re-upload the deck for shared use.]
What tools and tricks do you use for language study and pedagogy?