Machiavelli in Hollywood | Gavin Polone’s ‘Textbook’ on the Entertainment Industry

A series of articles by producer Gavin Polone can serve as an excellent introduction to the business of Hollywood.

Dearth of (Great) Textbooks on The Entertainment Business

Having previously taught several classes on the business of the entertainment industry, I was never able to find even a mediocre textbook for such a class. There are a handful that will give one an overview of the nuts and bolts, and one or two that provide some generally useful numbers (see the syllabi from those classes), but none comes close to conveying the philosophy of how the business works in a short period of time.

A Short Term Solution

To remedy this problem, I've long been a fan of producer and ex-agent Gavin Polone, who wrote a series of articles for New York Magazine/Vulture. I've recently gone through and linked, in chronological order, to all forty-four articles he produced in that series from 9/21/11 to 5/7/14.

I've aggregated the series via Readlists.com, so one can click on each of the articles individually. Better yet, for students and teachers alike, one can click on the "export" link and very easily download them all in most ebook formats (including Kindle, iPad, etc.) for convenient reading and studying.

My hope is that, for others, these pieces may serve as an excellent starter textbook on how the entertainment business works and, more importantly, how successful people in the business think. For those who need more, Gavin is also an occasional contributor to the Hollywood Reporter. (And, as a note for those not trained in the classics and prone to modern-day stereotypes, I'll make the caveat that I use the title "Machiavelli" above with the utmost reverence and honor.)

I'm still slowly but surely making progress on my own all-encompassing textbook, but until then, I hope others find this series of articles as interesting and useful as I have.

 

Gavin Polone is an agent turned manager turned producer. His production company, Pariah, has brought you such movies and TV shows as Panic Room, Zombieland, Gilmore Girls, and Curb Your Enthusiasm. Follow him on Twitter @gavinpolone


Git and Version Control for Novelists, Screenwriters, Academics, and the General Public

Revision (or version) control is used in tracking changes in computer programs, but it can easily be used for tracking changes in almost any type of writing from novels, short stories, screenplays, legal contracts, or any type of textual documentation.

Marginalia and Revision Control

At the end of April, I read an article entitled “In the Margins” in the Johns Hopkins University Arts & Sciences magazine.  I was particularly struck by the comments of eminent scholar Jacques Neefs on page thirteen (or paragraph 20) about computers making marginalia a thing of the past:

Neefs believes contemporary literature is losing a valuable component in an age when technology often precludes and trumps the need to save manuscripts or rough drafts. But it is not something that keeps him up at night. ‘The modern technique of computers and everything makes [marginalia] a thing of the past,’ he says. ‘There’s a new way of creation. Some would say it’s tragic, but something new has been invented. I don’t consider it tragic. There are still great writers who write and continue to have a way to keep the process.’

Photo looking over the shoulder of Jacques Neefs onto the paper he's been studying on the table in front of him.
Jacques Neefs (Image courtesy of Johns Hopkins University)

I actually think he may be completely wrong and that current technology allows us to keep far more marginalia! (Has anyone heard of digital exhaust?) The bigger issue may be that many writers just don't know how to keep a better running log of their work to maintain all the relevant marginalia they're actually producing. (Of course, there's also the subsequent broader librarian's "digital dilemma" of maintaining formats for the future. As an example, think about how easy or hard it might be for you to read that ubiquitous 3.5 inch floppy disk you used in 1995.)

As a technologist who has spent many years in the entertainment industry, I feel compelled to point everyone towards the concept of revision control (or version control) within the realm of computer science. Though it's primarily used in tracking changes in computer programs and is often a tool used by large teams of programmers, it can very easily be used for tracking changes in almost any type of writing: novels, short stories, screenplays, legal contracts, or textual documentation of nearly any sort.

Example Use Cases for Revision Control

Publishing

As a direct example, I'm using what is known as a Git repository to track every change I make in a textbook I'm currently writing. I can literally go back and view every change I've made since beginning the project, so though I'm directly revising one (or more) text files, all of my "marginalia" and revisions are saved and available. Currently I'm only doing it for my own reference and for additional backup, not supposing that anyone other than myself or possibly an editor may ever want to peruse it. If I were working in conjunction with others, there are also ways to track the changes, edits, or notes that others (perhaps an editor or collaborator) might make.

In addition to the general back-up of the project (in case of catastrophic computer failure), I also have the ability to go back and find that paragraph (or those multiple pages) I deleted in haste last week but desperately want back now, instead of having to recreate them de novo.
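As a hedged illustration of what that recovery can look like in practice (the file name, search phrase, and commit hash below are hypothetical placeholders, not pieces of my actual manuscript), here is a minimal Python sketch that simply shells out to the git command line:

```python
import subprocess

def git(*args):
    """Run a git command in the manuscript's repository and return its output."""
    return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout

# List every commit that touched a chapter, newest first.
print(git("log", "--oneline", "--", "chapter-03.md"))

# Search the whole history for a phrase from the paragraph deleted "in haste".
print(git("log", "-S", "marginalia", "--oneline", "--", "chapter-03.md"))

# Show the chapter exactly as it existed in an older commit (hash is a placeholder).
old_version = git("show", "abc1234:chapter-03.md")
```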

Because it's all digital, future scholars also won't have problems parsing my handwriting, an issue that has occasionally come up in differentiating Mary Shelley's writing from that of her husband in digital projects like the Shelley Godwin Archive. The fact that all changes are tracked and placed in a tree-like structure will indicate who wrote what and when, and which changes were ultimately accepted and merged into the final version.

Screenplays in Hollywood

One particular use case I can easily see for such technology is tracking changes in screenplays over time. I'm honestly shocked that production companies, or even more likely studios, don't use such technology to follow changes in drafts over time. In the end, such tracking would certainly make Writers Guild of America (WGA) arbitrations much easier, as literally every contribution to a script can be tracked and screenwriters given appropriate credit. The end result, with the easy ability to time-machine one's way back into older drafts, is truly lovely, and the output gives so much more information about changes in the script compared to the traditional and all-too-simple asterisk (*) which screenwriters use to indicate that something/anything changed on a specific line, or the different colored pages which are used on scripts during production.
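For the technically inclined, here is a minimal, hedged sketch (the screenplay file name is a hypothetical placeholder) of how per-line attribution might be pulled out of a script's history with git blame to support exactly that kind of credit review:

```python
import subprocess
from collections import Counter

# Ask git who last touched each line of the current draft.
blame = subprocess.run(
    ["git", "blame", "--line-porcelain", "screenplay.fountain"],
    capture_output=True, text=True, check=True,
).stdout

# Count surviving lines per author in the current draft.
authors = Counter(
    line[len("author "):] for line in blame.splitlines() if line.startswith("author ")
)
for author, line_count in authors.most_common():
    print(f"{author}: {line_count} lines in the current draft")
```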

I can also picture future screenwriters using services like GitHub as platforms for storing and distributing their screenplays to potential agents, managers, and producers.

Redlining Legal Documents

Having seen thousands of legal agreements go back and forth over the years, I see revision control as a natural tool for tracking the redlining and changes of legal documents as they evolve before they are finally (or even never) executed. I have to imagine that being able to abstract out the appropriate metadata may, in the long run, actually help attorneys, agents, etc. become better negotiators, but something like that is a project for another day.

Academia

In addition to direct research for projects like those undertaken by academics like Neefs, academics should look into using revision control in their own daily work and writing. While writing a book, paper, journal article, essay, monograph, etc. (or, for graduate students, a thesis), one could use one's own Git repository not only to save and back up all of one's work, but also to preserve for future scholars, who would not otherwise have access to it, the "marginalia" one creates while working out written thoughts in digital form.

I can easily picture Git as a very simple “next step” in furthering the concept of the digital humanities as well as in helping to bridge the gap between C.P. Snow’s “two cultures.” (I’d also suggest that revision control is a relatively simple step one could take before learning a particular programming language, which I think should be a mandatory tool in everyone’s daily toolbox regardless of their field(s) of interest.)

Git Logo

Start Using Revision Control

“But how do I get started?” you ask.

Know going in that it may take part of a day to get things set up and running, but once you've started with the basics, things are actually pretty easy, and you can continue to learn the more advanced subtleties as you progress. Once things are working smoothly, the additional overhead you'll be expending won't be much more than the old habit of hitting Ctrl-S to save one of your old Word documents in the time before auto-save became ubiquitous.

First, one should choose one of the myriad revision control systems that exist. For the sake of brevity in this short introductory post, I'll simply suggest that users take a very close look at Git because of its ubiquity and popularity in the computer science world and the tremendous amount of free information and support available for it from a variety of sites on the internet. Git has versions for all major operating systems (Windows, MacOS, and Linux), and it has had a relatively long and robust life within the computer science community, meaning that it's very stable and has many more resources for the uninitiated to draw upon.

Once one has Git installed on one's computer and has begun using it, I'd then recommend linking one's local copy of the repository to a cloud storage solution like either GitHub or BitBucket. While GitHub is certainly one of the most popular Git-related services out there (because it acts, in part, as the hub for a large portion of the open internet and thus promotes sharing), I often recommend using BitBucket, as it allows free, unlimited, private but still shareable repositories, while GitHub requires a small subscription fee for keeping one's work private. Having a repository in the cloud will help tremendously in that your work will be available and downloadable from almost anywhere, and it also serves as a de-facto back-up solution for your work.
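To make that first-time setup concrete, here is a minimal sketch expressed in Python as calls to the underlying git commands (the file name and remote URL are hypothetical placeholders; substitute your own repository's address from GitHub or BitBucket):

```python
import subprocess

def git(*args):
    """Run a git command in the current manuscript folder."""
    subprocess.run(["git", *args], check=True)

git("init")                                    # turn the folder into a repository
git("add", "chapter-01.md")                    # stage a file (name is a placeholder)
git("commit", "-m", "First draft of chapter 1")

# Link the local repository to a cloud-hosted copy and push the work up.
git("remote", "add", "origin", "https://bitbucket.org/yourname/manuscript.git")
git("push", "-u", "origin", "master")
```

From then on, the daily loop is just add, commit, and the occasional push.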

I've recently been playing around with version control to help streamline the writing/editing process for a book I've been writing. Though Git and its variants probably seem more daunting than they should to the everyday user, they really represent a very powerful tool. I've spent less than two days learning the basics of both Git and hosted repositories (GitHub and Bitbucket), and it has been more than well worth the minor effort.

There is a huge wealth of information on revision control in general, and on installing and using Git in particular, available on the internet, including full textbooks. For complete beginners, I'd recommend starting with The Chronicle's "A Gentle Introduction to Version Control." Keep in mind that though some of these resources look highly technical, it's because many try to enumerate every function one could potentially desire, when just the basic core functionality is more than enough to begin with. (I could analogize it to learning to drive a car versus reading the full manual so that you know how to take the engine apart and put it back together from scratch. To start with revision control, you only need to learn to "drive.") Professors might avail themselves of their local institutional libraries, which may host small sessions on learning such tools, or of the help of their colleagues or students in the computer science department. For others, I'd recommend taking a look at Git's primary website. BitBucket also has an excellent step-by-step tutorial (and troubleshooting guide) for setting up the requisite software and using it.

What do you use for revision control?

I’ll welcome any thoughts, experiences, or additional resources one might want to share with others in the comments.


Bucket List: Write A Joke for Robin Williams

In all the sadness of the passing of Robin Williams, I nearly forgot I’d “written” a short joke for him just after I’d first moved to Hollywood.

Killing some time just before I started work at Creative Artists Agency, I finagled my way into a rough-cut screening of Robin Williams' iconoclastic role in PATCH ADAMS on the Universal Lot. Following the screening, I had the pleasure of chatting with [read: bum-rushed like a crazy fan] Tom Shadyac for a few minutes on the way out. I told him that, as a recent grad of Johns Hopkins University who had spent a LOT of time in hospitals, I thought they were missing their obligatory hospital gown joke. But to give it a karate chop (and because I'd graduated relatively recently), they should put it into the graduation at the "end" and close on a high note.

I didn’t see or hear anything about it until many months later when I went to Mann’s Chinese Theater for the premiere and saw the final cut of the ending of the film, which I’ve clipped below. Just for today, I’m wearing the same red foam clown nose that I wore to the premiere that night.

Thanks for the laughs, Robin.

Bucket List:
#1. Write a joke for Robin Williams.


The Teaching Company and The Great Courses versus MOOCs

Robert Greenberg recently wrote a Facebook post relating to a New York Times review article entitled “For This Class, Professors Pass Screen Test“. It’s substantively about The Teaching Company and their series The Great Courses (TGC); for convenience I’ll excerpt his comments in their entirety below:

A most interesting article on The Great Courses (TGC) appeared in the New York Times on Saturday. TGC has been featured in newspaper articles before: scads of articles, in fact, over the last 20-plus years. But those articles (at least the ones I’m aware of and I am aware of most of them) have always focused on the content of TGC offerings: that they are academic courses offered up on audio/video media. This article, written by the Times’ TV critic Neil Genzlinger, is different. It focuses on TGC as a video production company and on TGC courses as slick, professional, high-end television programs.

My goodness, how times have changed.

Long-time readers of this blog will recall my descriptions of TGC in its early days. I would rehash a bit of that if only to highlight the incredible evolution of the company from a startup to the polished gem it is today.

I made my first course back in May of 1993: the first edition of “How to Listen to and Understand Great Music”. We had no “set”; I worked in front of a blue screen (or a “traveling matte”). The halogen lighting created an unbelievable amount of heat and glare. The stage was only about 6 feet deep but about 20 feet wide. With my sheaf of yellow note paper clutched in my left hand, I roamed back-and-forth, in constant motion, teaching exactly the way I did in the classroom. I made no concessions to the medium; to tell the truth, it never occurred to me or my director at the time that we should do anything but reproduce what I did in the classroom. (My constant lateral movement did, however, cause great consternation among the camera people, who were accustomed to filming stationary pundits at CNN and gasbags at C-span. One of our camera-dudes, a bearded stoner who will remain nameless kept telling me “Man . . . I cannot follow you, man. Please, man, please!” He was a good guy though, and offered to “take my edge off” by lighting me up during our breaks. I wisely declined.)

We worked with a studio audience in those days: mostly retirees who were free to attend such recording sessions, many of whom fell asleep in their chairs after lunch or jingled change in their pockets or whose hearing aids started screaming sounds that they could not hear but I most certainly did. Most distracting were the white Styrofoam coffee cups; in the darkened studio their constant (if irregular) up-and-down motion reminded me of the “bouncing ball” from the musical cartoons of the 1930s, ‘40s, and ‘50s.

I could go on (and I will, at some other time), though the point is made: in its earliest days TGC was simply recording more-or-less what you would hear in a classroom or lecture hall. I am reminded of the early days of TV, during which pre-existing modes of entertainment – the variety show, theatrical productions, puppet shows – were simply filmed and broadcast. In its earliest permutation, the video medium did not create a new paradigm so much as record old ones. But this changed soon enough, and the same is true for TGC. Within a few years TGC became a genuine production company, in which style, look, and mode of delivery became as important as the content being delivered. And this is exactly as it should be. Audio and video media demand clarity and precision; the “ahs” and “ums” and garbled pronunciations and mismatched tenses that we tolerate in a live lecture are intolerable in media, because we are aware of the fact that in making media they can (and should) be corrected.

Enough. Read the article. Then buy another TGC course; preferably one of mine. And while watching and/or listening, let us be aware, as best as we can, of the tens-of-thousands of hours that go into making these courses – these productions – the little masterworks that they indeed are.

 

My response to his post with some thoughts of my own follows:

This is an interesting, but very germane, review. As someone who's both worked in the entertainment industry and followed the MOOC (massively open online courseware) revolution over the past decade, I very often consider the physical production value of TGC's offerings and have been generally pleased at their steady improvement over time. Not only do they offer some generally excellent content, but they're entertaining and pleasing to watch. From a multimedia perspective, I'm always amazed at what they offer and that the difference between the video and the audio-only versions generally isn't as drastic as one might otherwise expect. Though there are times I think TGC might include some additional graphics, maps, etc., either in the course itself or in the booklets, I'm impressed that the courses still function exceptionally well without them.

Within the MOOC revolution, Sue Alcott’s Coursera course Archaeology’s Dirty Little Secrets is still by far the best produced multi-media course I’ve come across. It’s going to take a lot of serious effort for other courses to come up to this level of production however. It’s one of the few courses which I think rivals that of The Teaching Company’s offerings thus far. Unfortunately, the increased competition in the MOOC space is going to eventually encroach on the business model of TGC, and I’m curious to see how that will evolve and how it will benefit students. Will TGC be forced to offer online fora for students to interact with each other the way most MOOCs do? Will MOOCs be forced to drastically increase their production quality to the level of TGC? Will certificates or diplomas be offered for courseware? Will the subsequent models be free (like most MOOCs now), paid like TGC, or some mixture of the two?

One area which neither platform seems to be doing very well at present is offering more advanced coursework. Naturally the primary difficulty is in having enough audience to justify the production effort. The audience for a graduate level topology class is simply far smaller than introductory courses in history or music appreciation, but those types of courses will eventually have to exist to make the enterprises sustainable – in addition to the fact that they add real value to society. Another difficulty is that advanced coursework usually requires some significant work outside of the lecture environment – readings, homework, etc. MOOCs seem to have a slight upper hand here while TGC has generally relied on all of the significant material being offered in a lecture with the suggestion of reading their accompanying booklets and possibly offering supplementary bibliographies. When are we going to start seeing course work at the upper-level undergraduate or graduate level?

The nice part is that with evolving technology and capabilities, there are potentially new pedagogic methods that will allow easier teaching of some material that may not have been possible previously. (For some brief examples, see this post I wrote last week on Latin and the digital humanities.) In particular, I'm sure many of us have been astounded and pleased at how Dr. Greenberg managed the supreme gymnastics of offering "Understanding the Fundamentals of Music" without delving into traditional music theory and written notation, but will he be able to actually offer that in new and exciting ways to increase our levels of understanding of music and then spawn off another 618 lectures that take us all further and deeper into his exciting world? Perhaps it comes in the form of a multimedia mobile app? We're all waiting with bated breath, because regardless of how he pulls it off, we know it's going to be educational, entertaining, and truly awe-inspiring.

Following my commentary, Scott Ableman, the Chief Marketing Officer for TGC, responded with the following, which I find very interesting:

Chris, all excellent observations (and I agree re Alcott's course). I hope you'll be pleased to learn that the impact of MOOCs, if any, on The Great Courses has been positive, in that there is a rapidly growing awareness and interest in the notion that lifelong learning is possible via digital media. As for differentiating vs. MOOCs, people who know about The Great Courses generally find the differences to be self-evident:

  1. Curation: TGC scours the globe to find the world’s greatest professors;
  2. Formats: The ability to enjoy a course in your car or at home on your TV or on your smartphone, etc.;
  3. Lack of pressure: Having no set schedule and doing things at your own pace with no homework or exams (to be sure, there are some for whom sitting at a keyboard at a scheduled time and taking tests and getting a certificate is quite valuable, but that’s a different audience).

The Great Courses once were the sole claimant to a fourth differentiator, which is depth. Obviously, the proliferation of fairly narrow MOOCs provides as much depth on many topics, and in some cases addresses your desire for higher level courses. Still TGC offers significant depth when compared to the alternatives on TV or audio books. I must say that I was disappointed that Genzlinger chose to focus on this notion that professors these days “don’t know how to lecture.” He suggests that TGC is in the business of teaching bad lecturers how to look good in front of a camera. This of course couldn’t be further from the truth. Anybody familiar with The Great Courses knows that among its greatest strengths is its academic recruiting team, which finds professors like Robert Greenberg and introduces them to lifelong learners around the world.

 

Speed Reading on Web and Mobile

“Hi, my name is Chris, and I’m a Read-aholic.”

I'll be the first to admit that I'm a reading junkie, but unfortunately there isn't (yet) a 12-step program to help me. I love reading lots of different types of things across an array of platforms (books, newspapers, magazines, computer, web, phone, tablet, apps) and topics (fiction/non-fiction and especially history, biography, economics, popular science, etc.). My biggest problem, and one others surely face, is time.

There are so many things I want to read, and far too little time to do it in.  Over the past several years, I’ve spent an almost unreasonable amount of time thinking about what I consume and (possibly more importantly) how to intelligently consume more of it. I’ve spent so much time delving into it that I’ve befriended a professor and fellow renaissance man (literally and figuratively) who gave me a personal thank you in his opening to a best-selling book entitled “The Thinking Life: How to Thrive in an Age of Distraction.”

Information Consumption

At least twice a year I look at my reading consumption and work on how to improve it, all the while trying to maintain a level of quality and usefulness in what I’m consuming and why I’m consuming it.

  • I continually subscribe to new and interesting sources.
  • I close off subscriptions to old sources that I find uninteresting, repetitive (goodbye echo chamber), and those that are (or become) generally useless.
  • I carefully monitor the huge volumes of junk email that end up in my inbox and trim down on the useless material that I never seem to read, so that I’ll have more time to focus on what is important.
  • I’ve taken up listening to audiobooks to better utilize my time in the car while commuting.
  • I’ve generally quit reading large swaths of social media for their general inability to uncover truly interesting sources.
  • I’ve used some portions of social media to find other interesting people collating and curating areas I find interesting, but which I don’t have the time to read through everything myself.  Why waste my time reading hundreds of articles, when I can rely on a small handful of people to read them and filter out the best of the best for myself? Twitter lists in particular are an awesome thing.
  • I’ve given up on things like “listicles” or stories from internet click-farm sources like BuzzFeed, which can have some truly excellent linkbait-type headlines but always leave me feeling like I’ve completely wasted my time clicking through to them.

A New Solution

About six months ago in the mountain of tech journalism I love reading, I ran across a site launch notice about a tech start-up called Spritz which promised a radically different solution for the other side of the coin relating to my reading problem: speeding the entire process up!  Unfortunately, despite a few intriguing samples at the time (and some great details on the problem and their solution), they weren’t actually delivering a product.

Well, all that seems to have changed in the past few weeks. I've waited somewhat patiently and occasionally checked back on their progress, but following a recent mention on Charlie Rose and some serious digging around on the broader internet, I've found some worthwhile tools that have sprouted out of their efforts. Most importantly, Spritz itself now has a bookmarklet that seems to deliver on their promise of improving my reading speeds for online content. With the bookmarklet installed, one can go to almost any web article, click on the bookmarklet, and then sit back and just read at almost any desired speed. Their technology uses a modified version of the 1970s technique known as Rapid Serial Visual Presentation (RSVP) to speed up your reading ability, but does so in a way that is easier to effectuate with web and mobile technologies. Essentially, they present words serially in the same position on your screen with an optimized center mass so that one's eyes stay still while reading instead of making the typical saccadic eye movements which occur with normal reading – and slow the process down.

 

A photo of how Spritz works for speed reading on the web.
Spritz for speed reading the web.
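For the curious, here is a minimal console sketch of the RSVP idea described above. It is only an approximation under stated assumptions: a crude pivot point roughly a third of the way into each word stands in for Spritz's "optimal recognition point," and the display is plain text rather than their styled reader.

```python
import sys
import time

def rsvp(text, wpm=400, width=30):
    """Show one word at a time at a fixed screen position, pivot letter roughly centered."""
    delay = 60.0 / wpm
    for word in text.split():
        pivot = max((len(word) - 1) // 3, 0)   # crude stand-in for the optimal recognition point
        pad = max(width // 2 - pivot, 0)       # shift the word so the pivot letter stays put
        sys.stdout.write("\r" + " " * pad + word + " " * 10)
        sys.stdout.flush()
        time.sleep(delay)
    print()

rsvp("Essentially they present words serially in the same position on your screen.", wpm=400)
```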

 

As a biomedical engineer, I feel compelled to note the interesting physiologic phenomenon that if one sits in a rotatable chair and spins with one's eyes closed and one's fingers lightly placed on the eyelids, one can feel the eyes' saccades even though one isn't actually seeing anything.

Spritz also allows one to create an account and log in so that the service will remember your previously set reading speed. Their website does such a great job of explaining their concept, I’ll leave it to the reader to take a peek; but you may want to visit their bookmarklet page directly, as their own website didn’t seem to have a link to it initially.

As a sample of how Spritz works on the web, OysterBooks is hosting a Spritz-able version of Stephen R. Covey’s book 7 Habits of Highly Effective People.

Naturally, Spritz’s solution is not a catch-all for everything I’d like to read, but it covers an interesting subcategory that will make things useful and easier.  Though trying to speed read journal articles, textbooks, and other technical literature isn’t the best idea in the world, Spritz will help me plow through more fiction and more leisurely types of magazine and online articles that are of general interest. I generally enjoy and appreciate these types of journalism and work, but just can’t always justify taking the time away from more academic pursuits to delve into them. Some will still require some further thought after-the-fact to really get their full value out of them, but at least I can cover the additional ground without wasting all the additional time to do so. I find I can easily double or triple my usual reading speed without any real loss of comprehension.

In the last week or so since installing my several new speed reading bookmarklets, I've begun using them almost religiously in my daily reading regimen.

I’ll also note in passing that some studies suggest that this type of reading modality has helped those who face difficulties with dyslexia.

A picture of the Spritz RSVP reading interface featuring the word Boffosocko.
How to read Boffosocko faster than you thought you could…

 

Speed Reading Competition

Naturally, since this is a great idea, there’s a bit of competition in the speed reading arena.

There are a small handful of web and app technologies which are built upon the RSVP concept:

  • Clayton Morris has developed an iOS application called ReadQuick, which is based on the same concept as Spritz, but is only available as an app and not on the web.
  • Rich Jones has developed a program called OpenSpritz. His version is open source and has an Android port for mobile.
  • There’s also another similar bookmarklet called Squirt which also incorporates some nice UI tweaks and some of the technology from Readability as well.
  • For those wishing to Spritz .pdf or .txt documents, one can upload them using Readsy which uses Spritz’s open API to allow these types of functionalities.
  • There are also a variety of similar free apps in the Google Play store which follow the RSVP technology model.
  • Those on the Amazon (or Kindle Fire/Android Platform) will appreciate the Balto App which utilizes RSVP and is not only one of the more fully functional apps in the space, but it also has the ability to unpack Kindle formatted books (i.e. deal with Amazon’s DRM) to allow speed reading Kindle books. While there is a free version, the $1.99 paid version is more than well worth the price for the additional perks.

On and off for the past couple of years, I've also used a web service and app called Readfa.st, which is a somewhat useful but generally painful way to improve one's speed reading. It also has a handy bookmarklet, but it just wasn't as useful as I had always hoped it might be. It's interesting, but not as interesting or as useful as Spritz (and other RSVP technology) in my opinion, since it feels more fatiguing to read in this manner.

 

Bookmarklet Junkie Addendum

In addition to the handful of speed reading bookmarklets I've mentioned above, I've got over 50 bookmarklets in a folder on my web browser toolbar, and I easily use about a dozen on a daily basis. Bookmarklets make my internet world much prettier, nicer, and cleaner with a range of simple, clever code. Many are for URL shortening or quickly sharing content to a variety of social networks, but a large number of the ones I use are for reading-related tasks, which I feel compelled to include here: web clippers for Evernote and OneNote, Evernote's Clearly, Readability, Instapaper, Pocket, Mendeley (for reading journal articles), and GoodReads.

Do you have a favorite speed reading application (or bookmarklet)?


How to Sidestep Mathematical Equations in Popular Science Books

In the publishing industry there is a general rule of thumb that every mathematical equation included in a popular science book will cut its audience in half – presumably in a geometric progression. This typically means that including even a handful of equations will give you an effective readership of zero – something no author, and certainly no editor or publisher, wants.
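Taken at face value (and it is only a tongue-in-cheek rule of thumb; the starting readership below is purely illustrative), the progression looks like this:

```python
def remaining_readers(initial_readers, n_equations):
    """Each equation halves the audience: a simple geometric progression."""
    return initial_readers / 2 ** n_equations

print(remaining_readers(100_000, 0))   # 100000.0 readers with no equations
print(remaining_readers(100_000, 5))   # 3125.0   -- "a handful" of equations
print(remaining_readers(100_000, 10))  # ~98      -- effectively zero
```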

I suspect there is a corollary: every picture included in the text will help to increase your readership, though probably not by as proportionally large an amount.

In any case, while reading Melanie Mitchell's text Complexity: A Guided Tour [Oxford University Press, 2009] this weekend, I noticed that, in what appears to be a concerted effort to include an equation without technically writing it into the text, and to simultaneously increase readership by including a picture, she cleverly used a photo of Boltzmann's tombstone in Vienna! Most fans of thermodynamics will immediately recognize Boltzmann's equation for entropy, S = k log W, which appears engraved on the tombstone above his bust.

Page 51 of Melanie Mitchell's book "Complexity: A Guided Tour"
Page 51 of Melanie Mitchell’s book “Complexity: A Guided Tour” featuring Boltzmann’s tombstone in Vienna.

I hope that future mathematicians, scientists, and engineers will keep this in mind and have their tombstones engraved with key formulae to assist future authors in doing the same – hopefully this will help to increase the amount of mathematics that is deemed “acceptable” by the general public.

John C. Malone on Assets in the Entertainment Industry

John C. Malone (1941 – ), American business executive, landowner, and philanthropist
at Sun Valley Conference 2012, quoted in New York Times

 


Academy of Motion Picture Arts & Sciences study on The Digital Dilemma

With a slight nod toward the Academy's announcement of the Oscar nominees this morning, there's something more interesting which they've recently released that hasn't gotten nearly as much press, but which promises to be much more vital in the long run.


As books enter the digital age and we watch the continued convergence of rich media like video and audio into e-book formats, with announcements last week like Apple's foray into digital publishing, the ability to catalog, maintain, and store many types of digital media is becoming an increasing problem. Last week the Academy released part two of their study on strategic issues in archiving and accessing digital motion picture materials in a report entitled The Digital Dilemma 2. Many of you will find it interesting and useful, particularly in light of the Academy's description:

The Digital Dilemma 2 reports on digital preservation issues facing communities that do not have the resources of large corporations or other well-funded institutions: independent filmmakers, documentarians and nonprofit audiovisual archives.

Clicking on the image of the report below provides some additional information as well as the ability (with a simple login) to download a .pdf copy of their entire report.

Cover of The Digital Dilemma 2 report

There is also a recent Variety article which gives a more fully fleshed out overview of many of the issues at hand.

In the meanwhile, if you’re going to make a bet in this year’s Oscar pool, perhaps putting your money on the “Digital Dilemma” might be more useful than on Brad Pitt for Best Actor in “Moneyball”?

Masaru Ibuka on the Purposes of Incorporation of Sony

Masaru Ibuka, co-founder of Sony Corporation
on the first “Purposes of Incorporation” of Sony

 


Barnes & Noble Board Would Face Tough Choices in a Buyout Vote | Dealbook

Barnes & Noble Faces Tough Choices in a Buyout Vote by Steven Davidoff Solomon (DealBook)
If Leonard Riggio, Barnes & Noble's chairman, joins Liberty Media's proposed buyout of his company, the board needs to decide how to handle his 30 percent stake before shareholders vote on the deal.

 
This story from the New York Times’ Dealbook is a good quick read on some of the details and machinations of the Barnes & Noble buyout. Perhaps additional analysis on it from a game theoretical viewpoint would yield new insight?


IPTV primer: an overview of the fusion of TV and the Internet | Ars Technica

IPTV primer: an overview of the fusion of TV and the Internet by Iljitsch Van Beijnum (Ars Technica)

This brief overview of IPTV is about as concise as they get. It's recommended for entertainment executives who need to get caught up on the space as well as for people who are contemplating "cutting the cable cord." There's still a lot of room for improvement in the area…

Profound as it may be, the Internet revolution still pales in comparison to that earlier revolution that first brought screens in millions of homes: the TV revolution. Americans still spend more of their non-sleep, non-work time on watching TV than on any other activity. And now the immovable object (the couch potato) and the irresistible force (the business-model destroying Internet) are colliding.

For decades, the limitations of technology only allowed viewers to watch TV programs as they were broadcast. Although limiting, this way of watching TV has the benefit of simplicity: the viewer only has to turn on the set and select a channel. They then get to see what was deemed broadcast-worthy at that particular time. This is the exact opposite of the Web, where users type a search query or click a link and get their content whenever they want. Unsurprisingly, TV over the Internet, a combination that adds Web-like instant gratification to the TV experience, has seen an enormous growth in popularity since broadband became fast enough to deliver decent quality video. So is the Internet going to wreck TV, or is TV going to wreck the Internet? Arguments can certainly be made either way.

The process of distributing TV over a data network such as the Internet, a process often called IPTV, is a little more complex than just sending files back and forth. Unless, that is, a TV broadcast is recorded and turned into a file. The latter, file-based model is one that Apple has embraced with its iTunes Store, where shows are simply downloaded like any other file. This has the advantage that shows can be watched later, even when there is no longer a network connection available, but the download model doesn’t exactly lend itself to live broadcasts—or instant gratification, for that matter.

Streaming

Most of the new IPTV services, like Netflix and Hulu, and all types of live broadcasts use a streaming model. Here, the program is sent out in real time. The computer—or, usually by way of a set-top box, the TV—decodes the incoming stream of audio and video and then displays it pretty much immediately. This has the advantage that the video starts within seconds. However, it also means that the network must be fast enough to carry the audio/video at the bitrate that it was encoded with. The bitrate can vary a lot depending on the type of program—talking heads compress a lot better than car crashes—but for standard definition (SD) video, think two megabits per second (Mbps).

To get a sense just how significant this 2Mbps number is, it’s worth placing it in the context of the history of the Internet, as it has moved from transmitting text to images to audio and video. A page of text that takes a minute to read is a few kilobytes in size. Images are tens to a few hundred kilobytes. High quality audio starts at about 128 kilobits per second (kbps), or about a megabyte per minute. SD TV can be shoehorned in some two megabits per second (Mbps), or about 15 megabytes per minute. HDTV starts around 5Mbps, 40 megabytes per minute. So someone watching HDTV over the Internet uses about the same bandwidth as half a million early-1990s text-only Web surfers. Even today, watching video uses at least ten times as much bandwidth as non-video use of the network.
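A quick back-of-the-envelope check of those per-minute figures (this is only arithmetic on the bitrates quoted above, not additional data):

```python
def megabytes_per_minute(mbps):
    """Convert a bitrate in megabits per second to megabytes per minute."""
    return mbps * 60 / 8   # 60 seconds of data, 8 bits per byte

print(megabytes_per_minute(0.128))  # 128 kbps audio -> ~1 MB per minute
print(megabytes_per_minute(2))      # 2 Mbps SD video -> 15 MB per minute
print(megabytes_per_minute(5))      # 5 Mbps HD video -> ~40 MB per minute
```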

In addition to raw capacity, streaming video also places other demands on the network. Most applications communicate through TCP, a layer in the network stack that takes care of retransmitting lost data and delivering data to the receiving application in the right order. This is despite the fact that the IP packets that do TCP’s bidding may arrive out of order. And when the network gets congested, TCP’s congestion control algorithms slow down the transmission rate at the sender, so the network remains usable.

However, for real-time audio and video, TCP isn’t such a good match. If a fraction of a second of audio or part of a video frame gets lost, it’s much better to just skip over the lost data and continue with what follows, rather than wait for a retransmission to arrive. So streaming audio and video tended to run on top of UDP rather than TCP. UDP is the thinnest possible layer on top of IP and doesn’t care about lost packets and such. But UDP also means that TCP’s congestion control is out the door, so a video stream may continue at full speed even though the network is overloaded and many packets—also from other users—get lost. However, more advanced streaming solutions are able to switch to lower quality video when network conditions worsen. And Apple has developed a way to stream video using standard HTTP on top of TCP, by splitting the stream into small files that are downloaded individually. Should a file fail to download because of network problems, it can be skipped, continuing playback with the next file.
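As a rough illustration of the segmented-HTTP approach described above (the URL pattern and segment naming are hypothetical; real systems such as Apple's HTTP Live Streaming add playlist files and adaptive bitrates on top of this idea), the player-side logic of "skip a lost chunk and keep going" can be sketched like this:

```python
import urllib.error
import urllib.request

BASE_URL = "http://example.com/stream/segment_{:04d}.ts"  # hypothetical segment URLs

def fetch_segments(count, timeout=2.0):
    """Download numbered segments in order, skipping any that fail instead of stalling."""
    for i in range(count):
        try:
            with urllib.request.urlopen(BASE_URL.format(i), timeout=timeout) as resp:
                yield resp.read()
        except (urllib.error.URLError, OSError):
            continue  # a lost segment is simply skipped; playback continues with the next one

for chunk in fetch_segments(10):
    pass  # hand each chunk to the decoder/player here
```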

Where are the servers? Follow the money

Like any Internet application, streaming of TV content can happen from across town or across the world. However, as the number of users increases, the costs of sending such large amounts of data over large distances become significant. For this reason, content delivery networks (CDNs), of which Akamai is probably the most well-known, try to place servers as close to the end-users as possible, either close to important interconnect locations where lots of Internet traffic comes together, or actually inside the networks of large ISPs.

Interestingly, it appears that CDNs are actually paying large ISPs for this privilege. This makes the IPTV business a lot like the cable TV business. On the Internet, the assumption is that both ends (the consumer and the provider of over-the-Internet services) pay their own ISPs for the traffic costs, and the ISPs just transport the bits and aren’t involved otherwise. In the cable TV world, this is very different. An ISP provides access to the entire Internet; a cable TV provider doesn’t provide access to all possible TV channels. Often, the cable companies pay for access to content.

A recent dispute between Level3 and Comcast can be interpreted as evidence of a power struggle between the CDNs and the ISPs in the IPTV arena.

Walled gardens

For services like Netflix or Hulu, where everyone is watching their own movie or their own show, streaming makes a lot of sense. Not so much with live broadcasts.

So far, we’ve only been looking at IPTV over the public Internet. However, many ISPs around the world already provide cable-like service on top of ADSL or Fiber-To-The-Home (FTTH). With such complete solutions, the ISPs can control the whole service, from streaming servers to the set-top box that decodes the IPTV data and delivers it to a TV. This “walled garden” type of IPTV typically provides a better and more TV-like experience—changing channels is faster, image quality is better, and the service is more reliable.

Such an IPTV Internet access service is a lot like what cable networks provide, but there is a crucial difference: with cable, the bandwidth of the analog cable signal is split into channels, which can be used for analog or digital TV broadcasts or for data. TV and data don't get in each other's way. With IPTV, on the other hand, TV and Internet data are communicating vessels: what is used by one is unavailable to the other. And to ensure a good experience, IPTV packets are given higher priority than other packets. When bandwidth is plentiful, this isn't an issue, but when a network fills up to the point that Internet packets regularly have to take a backseat to IPTV packets, this could easily become a network neutrality headache.

Multicast to the rescue

Speaking of networks that fill up: for services like Netflix or Hulu, where everyone is watching their own movie or their own show, streaming makes a lot of sense. Not so much with live broadcasts. If 30 million people were to tune into Dancing with the Stars using streaming, that means 30 million copies of each IPTV packet must flow down the tubes. That’s not very efficient, especially given that routers and switches have the capability to take one packet and deliver a copy to anyone who’s interested. This ability to make multiple copies of a packet is called multicast, and it occupies territory between broadcasts, which go to everyone, and regular communications (called unicast), which go to only one recipient. Multicast packets are addressed to a special group address. Only systems listening for the right group address get a copy of the packet.

Multicast is already used in some private IPTV networks, but it has never gained traction on the public Internet. Partially, this is a chicken/egg situation, where there is no demand because there is no supply and vice versa. But multicast is also hard to make work as the network gets larger and the number of multicast groups increases. However, multicast is very well suited to broadcast type network infrastructures, such as cable networks and satellite transmission. Launching multiple satellites that just send thousands of copies of the same packets to thousands of individual users would be a waste of perfectly good rockets.
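For readers who want to see what "listening for the right group address" looks like in code, here is a minimal receiver sketch using standard IP multicast sockets (the group address and port are arbitrary examples; this illustrates the mechanism, not any particular IPTV service):

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # example group address from the administratively scoped range
MCAST_PORT = 5004           # arbitrary example port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Tell the local stack (and, via IGMP, the routers) that we want this group's packets.
membership = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    data, sender = sock.recvfrom(2048)
    # Every interested receiver gets a copy of the same packet without the source sending it twice.
    print(f"received {len(data)} bytes from {sender}")
```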

Peer-to-peer and downloading

Converging to a single IP network that can carry the Web, other data services, telephony, and TV seems like a no-brainer.

Multicast works well for a relatively limited number of streams that are each watched by a reasonably sized group of people—but having very many multicast groups takes up too much memory in routers and switches. For less popular content, there’s another delivery method that requires no or few streaming servers: peer-to-peer streaming. This was the technology used by the Joost service in 2007 and 2008. With peer-to-peer streaming, all the systems interested in a given stream get blocks of audio/video data from upstream peers, and then send those on to downstream peers. This approach has two downsides: the bandwidth of the stream has to be limited to fit within the upload capacity of most peers, and changing channels is a very slow process because a whole new set of peers must be contacted.

For less time-critical content, downloading can work very well. Especially in a form like podcasts, where an RSS feed allows a computer to download new episodes of shows without user intervention. It’s possible to imagine a system where regular network TV shows are made available for download one or two days before they air—but in encrypted form. Then, “airing” the show would just entail distributing the decryption keys to viewers. This could leverage unused network capacity at night. Downloads might also happen using IP packets with a lower priority, so they don’t get in the way of interactive network use.
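The podcast-style download model is simple enough to sketch in a few lines: parse an RSS feed and fetch any enclosures it lists. The feed URL below is a placeholder, and real podcast clients add scheduling, de-duplication, and bandwidth shaping on top of this.

```python
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/show/feed.xml"  # hypothetical RSS feed

with urllib.request.urlopen(FEED_URL) as response:
    feed = ET.parse(response)

for item in feed.iter("item"):
    enclosure = item.find("enclosure")
    if enclosure is None:
        continue
    url = enclosure.get("url")
    filename = url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, filename)  # grab the new episode without user intervention
    print("downloaded", filename)
```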

IP addresses and home networks

A possible issue with IPTV could be the extra IP addresses required. There are basically two approaches to handling this issue: the one where the user is in full control, and the one where an IPTV service provider (usually the ISP) has some control. In the former case, streaming and downloading happens through the user’s home network and no extra addresses are required. However, wireless home networks may not be able to provide bandwidth with enough consistency to make streaming work well, so pulling Ethernet cabling may be required.

When the IPTV provider provides a set-top box, it’s often necessary to address packets toward that set-top box, so the box must be addressable in some way. This can eat up a lot of addresses, which is a problem in these IPv4-starved times. For really large ISPs, the private address ranges in IPv4 may not even be sufficient to provide a unique address to every customer. Issues in this area are why Comcast has been working on adopting IPv6 in the non-public part of its network for many years. When an IPTV provider provides a home gateway, this gateway is often outfitted with special quality-of-service mechanisms that make (wireless) streaming work better than run-of-the-mill home gateways that treat all packets the same.
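The remark about private address ranges is easy to verify with a little arithmetic (the subscriber figure is a made-up round number for illustration only):

```python
# The three RFC 1918 private IPv4 blocks and their sizes.
private_blocks = {
    "10.0.0.0/8":     2 ** 24,
    "172.16.0.0/12":  2 ** 20,
    "192.168.0.0/16": 2 ** 16,
}
total_private = sum(private_blocks.values())
print(f"{total_private:,} private IPv4 addresses in total")  # 17,891,328

# An ISP with, say, 20 million homes and a couple of addressable devices per home
# cannot give every set-top box and gateway a unique private address.
print(20_000_000 * 2 > total_private)  # True
```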

Predicting the future

Converging to a single IP network that can carry the Web, other data services, telephony, and TV seems like a no-brainer. The phone companies have been working on this for years because that will allow them to buy cheap off-the-shelf routers and switches, rather than the specialty equipment they use now. So it seems highly likely that in the future, we’ll be watching our TV shows over the Internet—or at least over an IP network of some sort. The extra bandwidth required is going to be significant, but so far, the Internet has been able to meet all challenges thrown at it in this area. Looking at the technologies, it would make sense to combine nightly pushed downloads for popular non-live content, multicast for popular live content, and regular streaming or peer-to-peer streaming for back catalog shows and obscure live content.

However, the channel flipping model of TV consumption has proven to be quite popular over the past half century, and many consumers may want to stick with it—for at least part of their TV viewing time. If nothing else, this provides an easy way to discover new shows. The networks are also unlikely to move away from this model voluntarily, because there is no way they’ll be able to sell 16 minutes of commercials per hour using most of the other delivery methods. However, we may see some innovations. For instance, if you stumble upon a show in progress, wouldn’t it be nice to be able to go back to the beginning? In the end, TV isn’t going anywhere, and neither is the Internet, so they’ll have to find a way to live together.

Correction: The original article incorrectly stated that cable providers get paid by TV networks. For broadcast networks, cable operators are required by the law’s “must carry” provisions to carry all of the TV stations broadcast in a market. Ars regrets the error.

Confessions of David Seidler, a 73-year-old Oscars virgin

Confessions of David Seidler, a 73-year-old Oscars virgin by David Seidler (LA Times)
My first realization I was hooked on Oscar was when I seriously began pondering one of mankind's most profound dilemmas: whether to rent or buy a tux. That first step, as with any descent down a...

This is a great (and hilarious) story by and about the writer of THE KING’S SPEECH.


Failings and Opportunities of the Publishing Industry in the Digital Age


On Sunday, the Los Angeles Times printed a story about the future of reading entitled “Book publishers see their role as gatekeepers shrink.” 

The article covers most of the story fairly well, but leaves out some fundamental pieces of the business picture.  It discusses a few particular cases of some very well known authors in the publishing world including the likes of Stephen King, Seth Godin, Paulo Coehlo, Greg Bear, and Neal Stephenson and how new digital publishing platforms are slowly changing the publishing business.

Indeed, many authors are bypassing traditional publishing routes and self-publishing their works directly online, and many are taking a much larger slice of the financial rewards in doing so.

The article, however, completely fails to mention or address how new online methods will be handling editorial and publicity functions differently than they’re handled now, and the future of the publishing business both now and in the future relies on both significantly.

It is interesting, and more than a little ironic, to note that this particular article appears in the newspaper business, which has arguably changed even more drastically than the book publishing business. If reading the article online, one is forced to click through four different pages, on each of which a minimum of five different (and, in my opinion, terrifically intrusive) ads appear. Without getting into the details of the subject of advertising, it is even more interesting that many of these ads are served up by Google Ads based on keywords, so three just on the first page were specifically publishing related.

Two of the ads were soliciting people to self-publish their own work. One touts how easy it is to publish, while the other glosses over the publicity portion with a glib statement offering an additional “555 Book Promotion Tips”! (I’m personally wondering if there can possibly be so many book promotion tips?)


Following the link in the third ad on the first page to its advertised site, one discovers it states:

Learning how to publish a children’s book is no child’s play.

From manuscript editing to book illustration, distribution and marketing – a host of critical decisions can make or break your publishing venture.

Fortunately, you can skip the baby steps and focus on what authors like you do best-crafting the best children’s book possible for young inquisitive minds. Leave the rest to us.

Count on the collective publishing and book marketing expertise of a children book publisher with over thirteen years’ experience. We have helped over 20,000 independent authors fulfill their dream of publication.

Take advantage of our extensive network of over 25,000 online bookstores and retailers that include such names Amazon, Barnes & Noble and Borders, among thousands of others.

Tell us about your Children’s book project and we will send you a Free Children’s Book Publishing Guide to start you off on your publishing adventure!

 

Although I find the portion about "baby steps" particularly entertaining, the first thing I'll note is that the typical person is likely more readily equipped to distribute and market a children's book than to craft one. Sadly, however, there are very few who are capable of any of these tasks at a particularly high level, which is why there are relatively few new children's books on the market each year and the majority of sales are older, tried-and-true titles.

I hope the average reader sees the above come-on as the twenty-first century equivalent of the snake oil salesman tempting the typical wanna-be author to call about their so-called "Free" Children's Book Publishing Guide. I'm sure recipients of the guide end up paying the publisher to get their book out the door, and more likely than not it doesn't end up in mainstream brick-and-mortar establishments like Barnes & Noble or Borders, but only sells a handful of copies in easy-to-reach online venues like Amazon. I might suggest that the majority of sales will come directly from the author and his or her friends and family. I would further argue that neither now nor in the immediate (or even distant) future will many aspiring authors be self-publishing much of anything and managing to make even a modest living by doing so.

Now, of course, all of the above raises the question of why exactly people need or want a traditional publisher. What role or function do publishers actually perform for the business, and why might they be around in the coming future?

The typical publishing houses perform three primary functions: filtering/editing material, distributing material, and promoting material. The current significant threat to the publishing business from online retailers like Amazon.com, Barnes & Noble, Borders, and even the recently launched Google Books is the distribution platforms themselves.  It certainly doesn’t take much to strike low cost deals with online retailers to distribute books, and even less so when they’re distributing them as e-books which cuts out the most significant cost in the business — that of the paper to print them on. This leaves traditional publishing houses with two remaining functions: filtering/editing material and the promotion/publicity function.

The Los Angeles Times article certainly doesn’t state it, but everyone you meet on the street could tell you that writers like Stephen King don’t really need any more publicity than they’ve already got. Their fan followings are so large that they need only tell two people online that they’ve got a new book and they’ll sell thousands of copies of anything they release. In fact, I might wager that Stephen King could release ten horrific (don’t mistake this for horror) novels before their low quality began to significantly erode his sales numbers. If he’s releasing them on Amazon.com and keeping 70% of the income, compared to the 6–18% most writers receive, he’s in phenomenally good shape. (Given his status and track record in the publishing business, I’m sure he’s receiving a much larger portion of his book sales from his publisher than 18%; I’d also be willing to bet that if he approached Amazon directly, he could get a better distribution deal than the currently offered 70/30 split.)
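To put rough, purely hypothetical numbers on that gap: at an assumed $9.99 e-book price, a 70% share works out to roughly $7.00 per copy, while a 10% royalty on an assumed $25.00 hardcover yields only $2.50 per copy. Even at well under half the cover price, the self-published e-book would pay the author nearly three times as much per copy sold; the exact figures obviously depend on the actual prices and royalty rates negotiated.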

What will eventually sway the majority of the industry is when completely unknown new writers can publish into these electronic platforms and receive the marketing push they need to become the next Stephen King or Neal Stephenson. At the moment, none of the major e-book publishing platforms are giving much, if any, of this type of publicity to any of their new authors, and many aren’t even giving it to the major writers. Thus, currently, even the major writers are relying primarily on their traditional publishers for publicity to push their sales.

I will admit that when 80% of all readers are online, consuming their reading material in e-book format, and utilizing the full support of social media to cross-collateralize the best portion of their word-of-mouth, authors perhaps won’t need as much PR help. But until that day, the platforms will need to ramp up their publicity efforts significantly. Financially, one wonders what a platform like Amazon.com would charge for a front-and-center advertisement to push sales of a new best-seller. Will they be looking for a 50/50 split on those sales? Exclusivity in their channel? This is where the business becomes even dicier. Suddenly, authors who think they’re shedding the chains of their current publishers will be shackling themselves with newer and more significant manacles and leg irons.

The last piece of the business that needs to be subsumed is the editorial portion of the manufacturing process. Agents and editors play a significant role in that they filter out thousands and thousands of terrifically unreadable books. In fact, one might argue that even now they’re letting far too many marginal books through the system and into the market.

If we consider the millions of books housed in the Library of Congress and their general circulation, we might realize that one-tenth of one percent or fewer of those books receive all the attention. Certainly the classics of William Shakespeare and Charles Dickens are more widely read than the works of the millions of nearly unknown writers who take up just as much shelf space in that esteemed library.

Most houses publish on the order of ten to a hundred titles per year, but they rely heavily on only one or two of them becoming major hits, not only to cover the cost of the total failures but also to provide the company with some semblance of profit. (This model is not unlike the way the feature film business works in Hollywood: throw enough spaghetti at the wall and something is bound to stick.)

The question then becomes: “How does the e-publishing business accomplish this editing and publicity in a better and less expensive way?” This question needs to be looked at from both a pre-publication and a post-publication perspective.

From the pre-publication viewpoint, the Los Angeles Times article interestingly mentions that many authors appreciate having a “conversation” with their readers and allowing it to inform their work. However, creators of the stature of Stephen King cannot possibly take in and digest criticism from their thousands of fans in any reasonable way, to say nothing of the detriment to their output if they were forced to read and respond to all of that criticism and feedback. Even smaller-stature authors often find it overwhelming to take in criticism from their agents, editors, and even a small handful of close friends, family, and colleagues. A quick look at the acknowledgement sections of a few dozen books generally reveals fewer than ten people being thanked, much less hundreds of names from the general reading public – people the author neither knows well nor trusts implicitly.

From the post-publication perspective, both print-on-demand and e-book formats excise one of the largest costs in the supply-chain portion of the publishing world, but staff costs and salaries follow close behind. One might argue that social media is the answer here: that we can rely on services like LibraryThing, GoodReads, and others to supply this editorial/publicity function, and that eventually broad sampling and positive and negative reviews will win the day, carrying good but unknown writers into the popular consciousness. This may sound reasonable on the surface, but take a look at similar large recommendation services in the social media space, like Yelp. These services already have hundreds of thousands of users, but they’re not nearly as useful as they need to be from a recommendation perspective, and they’re not terrifically reliable in that they’re very often easily gamed. (Consider the number of positive reviews on Yelp that are most likely written by the proprietors of the establishments themselves.) This editorial outlet certainly has the potential to improve in the coming years, but it will still be quite some time before it can possibly oust the current editorial and filtering regime.

From a mathematical and game-theoretic perspective, one must also consider how many people are going to subject themselves (willingly and for free) to some really bad reading material and then bother to write a good or bad review of the experience. This is particularly true when the vast majority of readers are more than content to ride the coattails of the “suckers” who do the majority of the review work.

There are certainly a number of other factors at play in the publishing business as it changes form, but those discussed above are significant in its continuing evolution. Given the state of technology and the speed at which it moves, if people feel that the traditional publishing world will collapse, then we should take its evolution to the nth degree: even platforms like Amazon and Google Books will eventually need to narrow their financial split with authors down to infinitesimal margins, since authors should be able to control every portion of their work without any interlopers taking a cut of the proceeds. We’ll leave the discussion of whether all of this might fit into the concept of the tragedy of the commons for a future date.

Is the Los Angeles Times Simply Publishing Press Releases for Companies Like Barnes & Noble?

The Los Angeles Times published an online article entitled “Barnes & Noble says e-books outsell physical books online.” While I understand that this is a quiet holiday week, the Times should be doing better work than simply republishing press releases from corporations trying to garner post-holiday sales.  Some of the thoughts they might have included:

“Customers bought or downloaded 1 million e-books on Christmas day alone”?

There is certainly no debating the continuous growth of the electronic book industry; even Amazon.com has said it is selling more electronic books than physical books. The key word in the quoted sentence above is “or.” I seriously doubt a significant portion of the 1 million e-books were actually purchased on Christmas Day. The real investigative journalism here would have uncovered the percentage of free (primarily public domain) e-books that were downloaded versus those that were purchased.

Given that analysts estimate 2 million Nooks have been sold (the majority within the last six months, and likely the preponderance of them since Thanksgiving), this would mean that at most half of all Nook owners downloaded a book on Christmas Day, and only that many if each downloaded exactly one. Perhaps this isn’t surprising for those who received a Nook as a holiday present and downloaded one or more e-books to test out its functionality. The real question remains: how many of these 2 million users will actually be purchasing books in e-book format six months from now?

I’d also be curious to know whether the analyst estimate is 2 million units sold to consumers or 2 million units shipped to retail. I would bet that it is units shipped, not sold.

I hope the Times will be doing something besides transcription (or worse: cut and paste) after the holidays.

 


New Measures of Scholarly Impact | Inside Higher Ed

New Measures of Scholarly Impact (insidehighered.com)
Data analytics are changing the ways to judge the influence of papers and journals.

This article from earlier in the month could have some potentially profound effects on the research and scientific communities. Some of the work and research being done here will also have a significant effect on social media communities in the future.

The basic question is: are citations the best indicator of impact, or are there better emerging methods of measuring the influence of scholarly work?