👓 The Scientific Paper Is Obsolete | The Atlantic

Read The Scientific Paper Is Obsolete by James Somers (The Atlantic)
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

Not quite the cutting-edge stuff I would have liked, but generally an interesting overview of relatively new technology and UI setups like Mathematica and Jupyter.


Organizing my research-related reading

There’s so much great material out there to read and not nearly enough time. The question becomes: “How to best organize it all, so you can read even more?”

I just came across a tweet from Michael Nielsen about the topic, which is far deeper than even a few tweets could do justice to, so I thought I’d sketch out a few basic ideas about how I’ve been approaching it over the last decade or so. Ideally I’d like to circle back around to this and better document more of the individual aspects or maybe even make a short video, but for now this will hopefully suffice to add to the conversation Michael has started.

Keep in mind that this is an evolving system which I still haven’t completely perfected (and may never), but to a great extent it works relatively well, and I can still easily modify and improve it.

Overall Structure

The first piece of the overarching puzzle is to have a general structure for finding, collecting, triaging, and then processing all of the data. I’ve essentially built a simple funnel system for collecting all the basic data in the quickest manner possible. With the basics down, I can later skim through various portions to pick out the things I think are the most valuable and move them along to the next step. Ultimately I end up reading the best pieces on which I make copious notes and highlights. I’m still slowly trying to perfect the system for best keeping all this additional data as well.

Since I’ve seen so many apps and websites come and go over the years and lost lots of data to them, I far prefer to use my own personal website for doing a lot of the basic collection, particularly for online material. Toward this end, I use a variety of web services, RSS feeds, and bookmarklets to quickly accumulate the important pieces into my personal website which I use like a modern day commonplace book.

Collecting

In general, I’ve been using the Inoreader feed reader to track a large variety of RSS feeds from various clearinghouse sources (including things like ProQuest custom searches) down to individual researchers’ blogs as a means of quickly pulling in large amounts of research material. It’s one of the more flexible readers out there, with a huge number of useful features, including the ability to subscribe to OPML files, which many readers don’t support.
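
For anyone unfamiliar with OPML, it’s just a small XML file listing feed URLs, which is what makes it so handy for moving a large set of subscriptions between readers. As a minimal sketch (independent of any particular reader, with a hypothetical filename), here’s how one might list the feeds inside an exported OPML file:

```python
# Minimal sketch: list the RSS/Atom feed URLs in an exported OPML file.
# "subscriptions.opml" is a hypothetical export from a feed reader.
import xml.etree.ElementTree as ET

tree = ET.parse("subscriptions.opml")
# OPML nests <outline> elements inside <body>; feed outlines carry an xmlUrl attribute.
for outline in tree.getroot().iter("outline"):
    feed_url = outline.get("xmlUrl")
    if feed_url:
        print(outline.get("title") or outline.get("text"), "->", feed_url)
```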

As a simple example, arXiv.org has an RSS feed for the topic of “information theory” at http://arxiv.org/rss/math.IT which I subscribe to. I can quickly browse through the feed and, based on titles and/or abstracts, “star” the items I find most interesting within the reader. I have a custom recipe set up with the IFTTT.com service that pulls in all of these starred articles and creates new posts for them on my WordPress blog. To these posts I can add a variety of metadata, including top-level categories and lower-level tags, in addition to any other metadata I’m interested in.
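
In my case the starring and cross-posting is handled by Inoreader and IFTTT, but the same funnel can be approximated with a short script for those who prefer to roll their own. The sketch below uses the feedparser library and WordPress’s REST API; the site URL, application-password credentials, and keyword filter are all placeholders standing in for whatever your own setup would use.

```python
# Rough DIY equivalent of the Inoreader + IFTTT funnel described above:
# pull an arXiv RSS feed and file interesting items as WordPress drafts.
# The site URL, credentials, and keyword filter below are placeholders.
import feedparser
import requests

FEED = "http://arxiv.org/rss/math.IT"
WP_API = "https://example.com/wp-json/wp/v2/posts"  # hypothetical WordPress site
AUTH = ("youruser", "application-password")          # WP application password

feed = feedparser.parse(FEED)
for entry in feed.entries:
    # Stand-in for manually "starring" items: a crude keyword filter.
    if "entropy" not in entry.title.lower():
        continue
    payload = {
        "title": entry.title,
        "content": f'<a href="{entry.link}">{entry.link}</a>\n\n{entry.summary}',
        "status": "draft",  # leave as a draft so it can be triaged later
    }
    requests.post(WP_API, json=payload, auth=AUTH, timeout=30)
```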

I also have incoming funnel entry points via many other web services. On platforms like Twitter, for example, I have workflows that use services like IFTTT.com or Zapier to push URLs easily to my website. I can quickly “like” a tweet and a background process will suck that tweet and any URLs within it into my system for future processing. This type of workflow extends to a variety of sites where I might consume potential material I want to read and process. (Think academic social services like Mendeley, Academia.edu, Diigo, or even less academic ones like Twitter, LinkedIn, etc.) Many of these services have storage ability and simple browser bookmarklets that allow me to add material to them. So with a quick click, it’s saved to the service and then automatically ported into my website almost without friction.

My WordPress-based site uses the Post Kinds Plugin, which takes incoming website URLs and does a very solid job of parsing those pages to extract much of the primary metadata I’d like to have without requiring a lot of work. For well structured web pages, it’ll pull in the page title, authors, date published, date updated, a synopsis of the page, categories and tags, and other bits of data automatically. All these fields are also editable and searchable. Further, the plugin allows me to configure simple browser bookmarklets so that with a single click on a web page, I can pull its URL and associated metadata into my website almost instantaneously. I can then add a note or two about what made me interested in the piece and save it for later.
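
The plugin itself is PHP and has its own parser, but to give a flavor of the kind of extraction it performs, here’s an illustrative Python sketch (not the plugin’s actual logic) that pulls a title, author, date, and synopsis out of a well structured page via its standard meta and OpenGraph tags:

```python
# Illustrative sketch (not the Post Kinds Plugin's actual code): pull basic
# metadata out of a well structured web page via its meta/OpenGraph tags.
import requests
from bs4 import BeautifulSoup

def page_metadata(url: str) -> dict:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    def meta(*names):
        # Check both <meta property="..."> and <meta name="..."> variants.
        for name in names:
            tag = soup.find("meta", attrs={"property": name}) or \
                  soup.find("meta", attrs={"name": name})
            if tag and tag.get("content"):
                return tag["content"]
        return None

    return {
        "url": url,
        "title": meta("og:title") or (soup.title.string if soup.title else None),
        "summary": meta("og:description", "description"),
        "author": meta("author", "article:author"),
        "published": meta("article:published_time"),
    }

print(page_metadata("https://example.com/some-article"))  # hypothetical URL
```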

Note here that I’m usually more interested in saving material for later as quickly as I possibly can. In this part of the process, I’m rarely interested in reading anything immediately. I’m most interested in finding it, collecting it for later, and moving on to the next thing. This is also highly useful for things I find during a busy day that I simply can’t make time for in the moment.

As an example, here’s a book I’ve bookmarked to read simply by clicking “like” on a tweet I came across late last year. You’ll notice at the bottom of the post that I’ve optionally syndicated copies of the post to other platforms to “spread the wealth” as it were. Perhaps others following me via other means may see it and find it useful as well?

Triaging

At regular intervals during the week I’ll sit down for an hour or two to triage all the papers and material I’ve been sucking into my website. This typically involves reading through lots of abstracts in a bit more detail to figure out what I want to read now and what I’d like to read at a later date. I can delete the irrelevant material if I choose, or I can add follow-up dates to custom fields for later reminders.

Slowly but surely I’m funneling down a tremendous amount of potential material into a smaller, more manageable amount that I’m truly interested in reading on a more in-depth basis.

Document storage

Calibre with GoodReads sync

Even for things I’ve winnowed down, there is still a relatively large amount of material, much of which I’ll want to save and personally archive. For a lot of this function I rely on the free multi-platform desktop application Calibre. It has an essentially iTunes-like interface, but is built specifically for e-books and other documents.

Within it I maintain a small handful of libraries: one for personal e-books, one for research-related textbooks/e-books, and another for journal articles. It has a very solid interface and is extremely flexible in terms of configuration and customization. You can create a large number of custom libraries and create your own searchable and sortable fields with a huge variety of metadata. It often does a reasonable job of importing e-books, .pdf files, and other digital media and parsing out their metadata, which saves one from needing to do much of that work manually. With some well maintained metadata, one can very quickly search and sort a huge number of documents as well as quickly prioritize them for action. Additionally, the system does a pretty solid job of converting files from one format to another, so that things like converting an .epub file into a .mobi format for Kindle are nearly automatic.
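
Conversion is a click or two inside the GUI, but Calibre also ships command-line utilities, so the same step can be scripted. A minimal sketch, assuming Calibre’s ebook-convert tool is on your PATH and with placeholder filenames:

```python
# Minimal sketch: script Calibre's format conversion via its ebook-convert
# command-line tool (assumes Calibre's CLI utilities are on the PATH).
import subprocess

subprocess.run(["ebook-convert", "paper.epub", "paper.mobi"], check=True)
```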

Calibre stores the actual document files either in local computer storage or, even better, in the cloud using any of a variety of services (Dropbox, OneDrive, etc.), so that one can keep one’s documents in the cloud and view them from a variety of locations (home, work, travel, tablet, etc.).

I’ve been a very heavy user of GoodReads.com for years to bookmark and organize my physical and e-book libraries and anti-libraries. Calibre has an exceptional plugin for GoodReads that syncs data between the two. This plugin (and a few others) is exceptionally good at pulling in missing metadata to minimize the amount that must be entered by hand, which can be tedious.

Within Calibre I can manage my physical books, e-books, journal articles, and a huge variety of other document-related forms and formats. I can also use it to further triage the things I intend to read and order them to the nth degree. My current Calibre libraries have over 10,000 documents in them, including over 2,500 textbooks, as well as records of most of my 1,000+ physical books. Calibre can also be used to add records for documents one would ultimately like to acquire but doesn’t currently have access to.

BibTeX and reference management

In addition to everything else, Calibre has some well customized pieces for dovetailing all of its metadata with reference management. It allows one to export data in a variety of formats for document publishing and reference management, including BibTeX, among many others.
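
As a rough sketch of what that export can look like outside the GUI: Calibre’s calibredb utility can write a catalog whose format is chosen by the output file’s extension, and recent versions include a BibTeX (.bib) catalog among the options. Treat the following as an assumption-laden example rather than a recipe:

```python
# Hedged sketch: export a Calibre library's metadata as a BibTeX catalog.
# Assumes Calibre's calibredb CLI is installed and that your version includes
# the BibTeX (.bib) catalog output (the format is chosen by the file extension).
import subprocess

subprocess.run(["calibredb", "catalog", "references.bib"], check=True)
```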

Reading, Annotations, Highlights

Once I’ve winnowed down the material I’m interested in, it’s time to start actually reading. I’ll often use Calibre to send documents directly to my Kindle or other e-reading device, but one can also read them on the desktop with a variety of readers, or even from within Calibre itself. With a click or two, I can automatically email documents to my Kindle, and Calibre will also auto-convert them to an appropriate format before doing so.

Typically I’ll send them to my Kindle which allows me a variety of easy methods for adding highlights and marginalia. Sometimes I’ll read .pdf files via desktop and use Adobe to add highlights and marginalia as well. When I’m done with a .pdf file, I’ll just resave it (with all the additions) back into my Calibre library.

Exporting highlights/marginalia to my website

For Kindle-related documents, once I’m finished, I’ll use direct text-file export or tools like clippings.io to export my highlights and marginalia for a particular text into simple HTML and import it into my website along with all my other data. I’ve briefly written about some of this before, though I ought to document it better. All of this then becomes very easily searchable and sortable for future use as well.
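
For anyone curious about the direct text-file route: Kindles keep highlights in a “My Clippings.txt” file with entries separated by a line of ten equals signs. A rough sketch of turning that file into simple HTML for import follows; the exact entry layout varies a bit by device firmware and language, so treat the parsing as an approximation.

```python
# Rough sketch: convert a Kindle "My Clippings.txt" file into simple HTML
# blockquotes for import elsewhere. Entry layout varies slightly by device
# firmware and language, so treat the parsing below as an approximation.
import html

with open("My Clippings.txt", encoding="utf-8-sig") as f:
    entries = f.read().split("==========")

chunks = []
for entry in entries:
    lines = [line.strip() for line in entry.strip().splitlines() if line.strip()]
    if len(lines) < 3:
        continue  # skip empty entries and bare bookmarks (title + location only)
    title, meta, text = lines[0], lines[1], " ".join(lines[2:])
    chunks.append(
        f"<blockquote>{html.escape(text)}<br>"
        f"<cite>{html.escape(title)} ({html.escape(meta)})</cite></blockquote>"
    )

with open("highlights.html", "w", encoding="utf-8") as out:
    out.write("\n".join(chunks))
```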

Here’s an example of some public notes, highlights, and other marginalia I’ve posted in the past.

Synthesis

Over time, I’ve built up a huge amount of research-related data in my personal online commonplace book that is highly searchable and sortable! I also have the option to make these posts and pages public, private, or even password protected. I can create accounts on my site for collaborators to use and view private material that isn’t publicly available. I can also share posts via social media and use standards like Webmention and tools like brid.gy so that comments and interactions with these pieces on platforms like Facebook, Twitter, Google+, and others are imported back to the relevant portions of my site as comments. (I’m doing it with this post, so feel free to try it out yourself by commenting on one of the syndicated copies.)

Now when I’m ready to begin writing something about what I’ve read, I’ve got all the relevant pieces, notes, and metadata in one centralized location on my website. Synthesis becomes much easier. I can even have open drafts of things as I’m reading and begin laying things out there directly if I choose. Because it’s all stored online, it’s eminently available from almost anywhere I can connect to the web. As an example, I used a few portions of this workflow to write this very post.

Continued work

Naturally, not all of this is static, and it continues to improve and evolve over time. In particular, I’m doing continued work on my personal website so that I’m able to own as much of the workflow and data there as possible. Ideally I’d love to have all of the Calibre-related pieces on my website as well.

Earlier this week I even had conversations about creating new post types on my website for things that I want to read, to better display and document them explicitly. When I can, I try to document some of these pieces either here on my own website or in various places on the IndieWeb wiki. In fact, the IndieWeb for Education page might be a good place to start browsing for those interested.

One of the added benefits of having a lot of this data on my own website is that it not only serves as my research/data platform, but it also has the traditional ability to serve as a publishing and distribution platform!

Currently, I’m doing most of my research-related work in private or draft form on the back end of my website, so it’s not always publicly available, though I often think I should make more of it public both for the value of the aggregation itself and for the benefit it might provide to improving scientific communication. Just think: if you were interested in some of the obscure topics I am, you could have a pre-curated RSS feed of all the things I’ve filtered through piped into your own system. Now multiply this across hundreds of thousands of other scientists. Michael Nielsen posts some useful things to his Twitter feed and his website, but what I wouldn’t give to see far more of who and what he’s following, bookmarking, and actually reading. While many might find these minutiae tedious, I guarantee that people in his associated fields would find some serious value in them.

I’ve tried hundreds of other apps and tools over the years, but more often than not, they only cover a small fraction of the moving pieces within the much larger apparatus that a working researcher and writer requires. This means that one often ends up using dozens of specialized tools with a huge duplication of data and effort across them. It also presumes these tools will be around for more than a few years and will allow easy import/export of the hard-fought data and time one has invested in using them.

If you’re aware of something interesting in this space that might be useful, I’m happy to take a look at it. Even if I might not use the service itself, perhaps it’s got a piece of functionality that I can recreate into my own site and workflow somehow?

If you’d like help in building and fleshing out a system similar to the one I’ve outlined above, I’m happy to help do that too.


👓 A 2017 Nobel laureate says he left science because he ran out of money and was fed up with academia | QZ

Read A 2017 Nobel laureate left science because he ran out of money (Quartz)
Jeffrey Hall, a retired professor at Brandeis University, shared the 2017 Nobel Prize in medicine for discoveries elucidating how our internal body clock works. He was honored along with Michael Young and his close collaborator Michael Rosbash. Hall said in an interview from his home in rural Maine that he collaborated with Rosbash because they shared...

This is an all-too-often heard story. The difference is that now a Nobel Prize winner is telling it about himself!


📺 These 3D animations could help you finally understand molecular science | PBS NewsHour

Watched These 3D animations could help you finally understand molecular science from PBS NewsHour
Art and science have in some ways always overlapped, with early scientists using illustrations to depict what they saw under the microscope. Janet Iwasa of the University of Utah is trying to re-establish this link to make thorny scientific data and models approachable to the common eye. Iwasa offers her brief but spectacular take on how 3D animation can make molecular science more accessible.

Visualizations can be tremendously valuable. This story reminds me of an Intersession course that Mary Spiro did at Johns Hopkins to help researchers communicate what their research is about as well as some of the work she did with the Johns Hopkins Institute for NanoBioTechnology.


👓 Link: The futility of science communication conferences by John Hawks

Read Link: The futility of science communication conferences by John Hawks (johnhawks.net)
Rich Borschelt is the communication director for science at the Department of Energy, and recently attended a science communication workshop. He describes at some length his frustration at the failed model of science communication, in which every meeting hashes over the same futile set of assumptions: “Communication, Literacy, Policy: Thoughts on SciComm in a Democracy.” After several other issues, he turns to the conferences’ attitude about scientists...

John’s note reminds me that I’ve been watching a growing and nasty trend against science, to say nothing of science communication, over the past several years. We’re going to need a lot more help than we’re getting lately to turn the tide for the better. Perhaps more scientists having their own websites and expanding on the practice of samizdat would help things out a bit?

I recently came across Science Sites, a non-profit web company, courtesy of mathematician Steven Strogatz, who has a site built by them. In some sense, I see some of what they’re doing as enabling scientists to become part of the IndieWeb. It would be great to see them support standards like Webmention or functionality like Micropub as well. (It looks like they’re doing a lot of building on Squarespace, so by proxy it would be great if Squarespace supported these open standards.) I love that it seems to have been created by a group of science journalists to help out the cause.

As I watch some of the Domain of One’s Own community in higher education, it feels to me that it’s primarily full of humanities professors and researchers and doesn’t seem to be doing enough outreach to their science, engineering, math, or other colleagues, who desperately need these tools as well as help with basic communication.


📺 Scientific Studies: Last Week Tonight with John Oliver (HBO)

Watched Scientific Studies: Last Week Tonight from HBO
John Oliver discusses how and why media outlets so often report untrue or incomplete information as science.

This episode reminds me a bit of a short snippet I wrote in 2015 about the Evolution of a Scientific Journal Article Title (from Nature to TMZ).


Reply to Something the NIH can learn from NASA

Replied to Something the NIH can learn from NASA by Lior Pachter (& Comments by Donald Forsdyke) (Bits of DNA)
Pubmed Commons provides a forum, independent of a journal, where comments on articles in that journal can be posted. Why not air your displeasure there? The article is easily found (see PMID: 27467019) and, so far, there are no comments.

I’m hoping that one day (in the very near future) scientific journals and other science communications on the web will support the W3C’s Webmention candidate specification, so that when commentators [like Lior, in this case, above] post something about an article on their own site, the full comment is sent to the original article to appear there automatically. This means that one needn’t go to the site directly to comment (and if the comment isn’t approved, then at least it still lives somewhere searchable on the web).
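
For anyone wondering what supporting this would actually require of a publisher’s platform, the protocol itself is tiny: the sender fetches the target page, discovers the advertised Webmention endpoint, and POSTs two form-encoded URLs to it. Here’s a minimal sketch of the sending side, with placeholder URLs and with edge cases (relative endpoint URLs, redirects, verification) glossed over:

```python
# Minimal sketch of sending a Webmention per the W3C spec: discover the
# target's advertised endpoint, then POST the source and target URLs to it.
# The two URLs are placeholders; relative-URL resolution and other edge
# cases are glossed over here.
import requests
from bs4 import BeautifulSoup

source = "https://commentator.example/posts/my-response"  # page containing the comment
target = "https://journal.example/articles/12345"         # article being commented on

resp = requests.get(target, timeout=30)

# 1. Check the HTTP Link header for rel="webmention".
endpoint = resp.links.get("webmention", {}).get("url")

# 2. Fall back to <link>/<a> elements with rel="webmention" in the HTML.
if not endpoint:
    soup = BeautifulSoup(resp.text, "html.parser")
    tag = soup.select_one('link[rel~="webmention"], a[rel~="webmention"]')
    if tag and tag.get("href"):
        endpoint = tag["href"]

if endpoint:
    requests.post(endpoint, data={"source": source, "target": target}, timeout=30)
```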

Some journals already count tweets and blog mentions (generally for PR reasons), but typically don’t provide any way to find them on the web to see whether they indicate positive or negative sentiment, or to further the scientific conversation.

I’ve also run into cases in which scientific journals that are “moderating” comments won’t approve reasoned thought, but will simultaneously allow (pre-approved?) accounts to flame every comment that is approved [example on Sciencemag.org: http://boffosocko.com/2016/04/29/some-thoughts-on-academic-publishing/ — see also the comments there], so having the original comment live elsewhere may be useful and/or necessary depending on whether the publisher is a good or bad actor, or potentially just lazy.

I’ve also seen people use commenting layers like hypothes.is or genius.com to add commentary directly on journal articles, but these layers are often hidden from most readers. The community certainly needs a more robust commenting interface. I would hope that a decentralized approach using web standards like Webmention might be a worthwhile and durable solution.


Ten Simple Rules for Taking Advantage of Git and GitHub

Bookmarked Ten Simple Rules for Taking Advantage of Git and GitHub (journals.plos.org)
Bioinformatics is a broad discipline in which one common denominator is the need to produce and/or use software that can be applied to biological data in different contexts. To enable and ensure the replicability and traceability of scientific claims, it is essential that the scientific publication, the corresponding datasets, and the data analysis are made publicly available [1,2]. All software used for the analysis should be either carefully documented (e.g., for commercial software) or, better yet, openly shared and directly accessible to others [3,4]. The rise of openly available software and source code alongside concomitant collaborative development is facilitated by the existence of several code repository services such as SourceForge, Bitbucket, GitLab, and GitHub, among others. These resources are also essential for collaborative software projects because they enable the organization and sharing of programming tasks between different remote contributors. Here, we introduce the main features of GitHub, a popular web-based platform that offers a free and integrated environment for hosting the source code, documentation, and project-related web content for open-source projects. GitHub also offers paid plans for private repositories (see Box 1) for individuals and businesses as well as free plans including private repositories for research and educational use.

Some Thoughts on Academic Publishing and “Who’s downloading pirated papers? Everyone” from Science | AAAS

Bookmarked Who's downloading pirated papers? Everyone by John Bohannon (Science | AAAS)
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.

Sci-Hub has been in the news quite a bit over the past half-year, and the bookmarked article here gives some interesting statistics. I’ll preface some of the following editorial critique with the fact that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.

From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci-Hub. Neither did it link out (or provide a full quote) to Alicia Wise’s Twitter post(s), nor link to her rebuttal list of 20 ways to access their content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.

Sadly, Elsevier’s list of 20 ways of free/inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups of people using Sci-Hub, unless they’re going to fraudulently claim they’re part of a class they’re not; and is that morally any better than the original theft method? It’s almost assuredly never used by patients, who seem to be covered under one of the options, as the option to do so is painfully undiscoverable past their typical $30/paper paywalls. Their patchwork hodgepodge of free access is difficult enough to discern on its own, and one must keep in mind that this is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).

Consider this experiment, which could be a good follow-up to the article: is it easier to find and download a paper by title/author/DOI via Sci-Hub (a minute) versus through any of the other publishers’ platforms with a university subscription (several minutes) or without a subscription (an hour or more, up to days)? Just consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci-Hub versus the days and/or weeks it would take to jump through the multiple hoops to first discover, read about, then gain access to, and finally download them from the over 14 providers (and this presumes the others provide some type of “access” like Elsevier does).

Those who lived through the Napster revolution in music will realize that the dead simplicity of that system is primarily what helped kill the old music business compared to the ecosystem that exists now, with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to get completely killed, they’re going to need to create the iTunes of academia. I suspect they’ll have internal bean-counters watching the percentage of the total (now apparently 5%) and will probably only do something once it passes a much larger threshold, though I imagine they’re really hoping the number stays stable, which would signal that they needn’t really be concerned. They’re far more likely to continue to maintain their status quo practices.

Some of this ease-of-access argument is truly borne out by the statistics on open access papers which are downloaded via Sci-Hub; it’s simply easier to both find and download them that way compared to traditional methods, since there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?

“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone

Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci-Hub. God forbid some enterprising hacker were to create a LibX community version for Sci-Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX which make their content easy to access? If we consider the analogy of academic papers to the introduction of machine guns in World War I, why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?

My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown, in the article:

She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers to which one would have to subscribe to have access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor their competitors are making their material as easy to find and access as the New York Times does. Nor do they discount access in an attempt to find the subscription price point that their users find financially acceptable. Case in point: while I often read the New York Times, I rarely go over their monthly limit of articles enough to need any type of paid subscription. Solely because they made me an interesting offer to subscribe for 8 weeks for 99 cents, I took them up on it and renewed that deal for another subsequent 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no fewer than 5 different offers at ever-decreasing price points (including the 99 cents for 8 weeks which I had been getting!) to try to keep my subscription. Neither Elsevier nor any of their competitors has ever tried (much less so hard) to earn my business. (I’ll further posit that it’s because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to the textbook business pressuring professors on textbook adoption rather than trying to sell directly to the end consumer, the student, which I’ve written about before.)

(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writing, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and the inability to repeat experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?

Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data for the survey will be used. There’s always the possibility that logged-in users will be indicating they’re circumventing copyright and opening themselves up to litigation.

I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting similar meta-analyses to guide the purchase of potential intellectual property for patent trolling as well.

Finally, for those who haven’t done it (ever or recently), I’ll recommend that it’s certainly well worth their time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of their academic terms to train library users on research tools and methods. You’ll save yourself a huge amount of time.


Global Language Networks

Recent research on global language networks has interesting relations to big history, complexity economics, and current politics.

Yesterday I ran across this nice little video explaining some recent research on global language networks. It’s not only interesting in its own right, but is a fantastic example of science communication as well.

I’m interested in some of the information theoretic aspects of this as well as its relation to the area of corpus linguistics. I’m also curious whether one could build worthwhile datasets like this for the ancient world (cross-reference some of the sources I touch on in relation to the Dickinson College Commentaries within Latin Pedagogy and the Digital Humanities) to see what influences different language cultures have had on each other. Perhaps the historical record could help to validate some of the predictions made about the future?

The paper “Global distribution and drivers of language extinction risk” indicates that of all the variables tested, economic growth was most strongly linked to language loss.

This research also has some interesting relation to the concept of “collective learning” within a Big History framework via David Christian, Fred Spier, et al. I’m curious to revisit my hypothesis that collective learning has potentially been growing at the expense of a shrinking body of diverse languages, an idea informed in part by the work of Jared Diamond.

Some of the discussion in the video is reminiscent of some of the work Stuart Kauffman lays out in At Home in the Universe: The Search for the Laws of Self-Organization and Complexity (Oxford, 1995), particularly chapter 3, in which Kauffman discusses the networks of life. The analogy between those networks and the networks of language here suggests to me that some of Cesar Hidalgo’s recent work in Why Information Grows: The Evolution of Order, From Atoms to Economies (MIT Press, 2015) is even more interesting in helping to show the true value of links between people and firms (information sources which he measures as personbytes and firmbytes) within economies.

Finally, I can also only think about how this research may help to temper some of the xenophobic discussion that occurs in American political life with respect to fears relating to Mexican immigration issues as well as the position of China in the world economy.

Those intrigued by the video may find the website set up by the researchers very interesting. It contains links to the full paper as well as visualizations and links to the data used.

Abstract

Languages vary enormously in global importance because of historical, demographic, political, and technological forces. However, beyond simple measures of population and economic power, there has been no rigorous quantitative way to define the global influence of languages. Here we use the structure of the networks connecting multilingual speakers and translated texts, as expressed in book translations, multiple language editions of Wikipedia, and Twitter, to provide a concept of language importance that goes beyond simple economic or demographic measures. We find that the structure of these three global language networks (GLNs) is centered on English as a global hub and around a handful of intermediate hub languages, which include Spanish, German, French, Russian, Portuguese, and Chinese. We validate the measure of a language’s centrality in the three GLNs by showing that it exhibits a strong correlation with two independent measures of the number of famous people born in the countries associated with that language. These results suggest that the position of a language in the GLN contributes to the visibility of its speakers and the global popularity of the cultural content they produce.

Citation: Ronen S, Gonçalves B, Hu KZ, Vespignani A, Pinker S, Hidalgo CA. “Links that speak: The global language network and its association with global fame.” Proceedings of the National Academy of Sciences (PNAS), 2014. doi:10.1073/pnas.1410931111


“A language like Dutch — spoken by 27 million people — can be a disproportionately large conduit, compared with a language like Arabic, which has a whopping 530 million native and second-language speakers,” Science reports. “This is because the Dutch are very multilingual and very online.”


Uri Alon: Why Truly Innovative Science Demands a Leap into the Unknown

I recently ran across this TED talk and felt compelled to share it. It really highlights some of my own personal thoughts on how science should be taught and done in the modern world. It also overlaps with much of the reading I’ve been doing lately on innovation and creativity. If these points don’t get you to watch, then perhaps mentioning that Alon manages to apply comedy and improvisation techniques to science will.

Uri Alon was already one of my scientific heroes, but this adds a lovely garnish.