Maybe I should have used Claude Shannon instead?
I’m hoping that one day (in the very near future) scientific journals and other science communications on the web will support the W3C’s Webmention candidate specification, so that when commentators [like Lior, in this case, above] post something about an article on their own site, the full comment is sent to the original article to appear there automatically. This means one needn’t go to the site directly to comment (and if the comment isn’t approved, at least it still lives somewhere searchable on the web).
Some journals already count tweets and blog mentions (generally for PR reasons), but typically don’t provide a way to find them on the web to see whether they indicate positive or negative sentiment, or to further the scientific conversation.
I’ve also run into cases in which scientific journals that are “moderating” comments won’t approve reasoned thought, but will simultaneously allow (pre-approved?) accounts to flame every comment that is approved [example on Sciencemag.org: http://boffosocko.com/2016/04/29/some-thoughts-on-academic-publishing/ — see also comments there]. Having the original comment live elsewhere may therefore be useful and/or necessary, depending on whether the publisher is a good or bad actor, or potentially just lazy.
I’ve also seen people use annotation layers like Hypothes.is or Genius.com to add commentary directly on journal articles, but these layers are often invisible to most readers. The community certainly needs a more robust commenting interface, and I would hope that a decentralized version using web standards like Webmention might be a worthwhile and durable solution.
Does blogging need to be different than it was?
I agree with John that blogs seemingly occupy a different space in online life today than they did a decade ago, but I won’t concede that, for me at least, most of it has moved to the social media silos.
I think the role of the blog is different than it was even just a couple of years ago. It’s not the sole outpost of an online life, although it can be an anchor, holding it in place. — John Scalzi
Why? About two years ago I began delving into the evolving movement known as IndieWeb, which has re-empowered me to take back my web presence and use my own blog/website as my primary online hub and identity. The tools I’ve found there allow me to not only post everything to my own site first and then syndicate it out to the social circles and sites I feel it might resonate with, but best of all, the majority of the activity (comments, likes, shares, etc.) on those sites boomerangs back to the comments on my own site! This gives me a better grasp on where others are interacting with my content, and I can interact along with them on the platforms that they choose to use.
Some of the benefit is certainly a question of data ownership: who is left holding the bag if a major site like Twitter or Facebook is bought out or shut down? This has happened to me dozens of times over the past decade: I’ve put lots of content and thought into a site only to see it shuttered and have all of my data and community disappear with it.
Other benefits include: cutting down on notification clutter, more enriching interactions, and less time wasted scrolling through social sites.
Reply from my own site
Now I’m able to use my own site to write a comment on John’s post (where the comments are currently technically closed), and keep it for myself, even if his blog should go down one day. I can alternately ping his presence on other social media (say, by means of Twitter) so he’ll be aware of the continued conversational ripples he’s caused.
Social media has become ubiquitous in large part because those corporate sites are dead simple for Harry and Mary Beercan to use. Even my own mother’s primary online presence begins with http://facebook.com/. But not so for me. I’ve taken the reins of my online life back.
My Own Hub
My blog remains my primary online hub, and some very simple IndieWeb tools enable it by bringing all the conversation back to me. I joined Facebook over a decade ago, and you’ll notice by the date on the photo that it didn’t take me long to complain about the growing and overwhelming social media problem I had.
I’m glad I can finally be at the center of my own social graph, and it was everything I thought it could be.
This morning I ran across a tweet from colleague Andrew Eckford:
— Andrew Eckford (@andreweckford) April 12, 2016
His response was probably innocuous enough, but I thought the article should be taken to task a bit more.
“35 million academics, independent scholars and graduate students as users, who collectively have uploaded some eight million texts”
35 million users is an okay number, but their engagement must be spectacularly bad if only 8 million texts are available. How many researchers do you know who’ve published only a quarter of an article anywhere, much less gotten tenure?
“the platform essentially bans access for academics who, for whatever reason, don’t have an Academia.edu account. It also shuts out non-academics.”
They must have changed this, as pretty much anyone with an email address (including non-academics) can create a free account and use the system. I’m fairly certain that the platform was always open to the public from the start, but the article doesn’t seem to question the statement at all. If we want to argue about shutting out non-academics or even academics in poorer countries, let’s instead take a look at “big publishing” and their $30+/paper paywalls and publishing models, shall we?
“I don’t trust academia.edu”
Given his following discussion, I can only imagine what he thinks of big publishers in academia and that debate.
“McGill’s Dr. Sterne calls it ‘the gamification of research’”
Most research is too expensive to really gamify in such a simple manner. Many researchers are publishing to either get or keep their jobs and don’t have much time, information, or knowledge to try to game their reach in these ways. If anything, the institutionalization of “publish or perish” has already accomplished far more “gamification”; Academia.edu is just helping to increase the reach of the publication. Given that research shows most published papers aren’t even read, much less cited, how bad can Academia.edu really be? [Cross reference: Reframing What Academic Freedom Means in the Digital Age]
If we look at Twitter and the blogging world as an analogy with Academia.edu and researchers: Twitter had a huge ramp-up starting in 2008 and helped bloggers obtain eyeballs/readers, but where is it now? Twitter, even with a reasonable business plan, is stagnant, with growing grumblings that it may be failing. I suspect that without significant changes, Academia.edu (which serves a much smaller niche audience than Twitter) will also eventually fall by the wayside.
The article rails against not knowing what the business model is or what’s happening with the data. I suspect that the platform itself doesn’t have a very solid business plan and they don’t know what to do with the data themselves except tout the numbers. I’d suspect they’re trying to build “critical mass” so that they can cash out by selling to one of the big publishers like Elsevier, who might actually be able to use such data. But this presupposes that they’re generating enough data; my guess is that they’re not. And on that subject, from a journalistic viewpoint, where’s the comparison to the rest of the competition including ResearchGate.net or Mendeley.com, which in fact was purchased by Elsevier? As it stands, this simply looks like a “hit piece” on Academia.edu, and sadly not a very well researched or reasoned one.
In sum, the article sounds to me like a bunch of Luddites running around yelling “fire”, particularly when I’d imagine that most of those referred to in the piece feed into the more corporate side of publishing in major journals rather than publishing their work themselves on their own websites. I’d further suspect they’re probably not even practicing academic samizdat. It feels to me like the author and some of those quoted aren’t actively participating in the social media space enough to comment on it intelligently. If the paper wants to pick at the academy in this manner, why not write an exposé on the fact that most academics still have websites that look like they’re from 1995 (if, in fact, they have anything beyond their university’s mandated business-card placeholder) when there is a wealth of free and simple tools they could use? Let’s at least build a cart before we start whipping the horse.
For academics who really want to spend some time and thought on a potential solution to all of this, I’ll suggest that they start out by owning their own domain and own their own data and work. The #IndieWeb movement certainly has an interesting philosophy that’s a great start in fixing the problem; it can be found at http://www.indiewebcamp.com.
There aren’t a lot of available online lectures on the subject of information theory, but here are the ones I’m currently aware of:
- Brit Cruise (Khan Academy) Information Theory
- Seth Lloyd (Complexity Explorer/YouTube) Introduction to Information Theory
- Thomas Cover (Stanford | YouTube) Information Theory
- Raymond Yeung (Chinese University of Hong Kong | Coursera) Information Theory (May require account to see 3 or more archived versions)
- David MacKay (University of Cambridge) Information Theory, Inference, and Learning Algorithms
- Andrew Eckford (York University | YouTube) Coding and Information Theory
- S.N. Merchant (IIT Bombay | NPTEL :: Electronics & Communication Engineering) Introduction to Information Theory and Coding
Fortunately, most are pretty reasonable, though they vary in their coverage of topics. The introductory lectures don’t require as much mathematics and can probably be understood by those at the high school level with just a small amount of basic probability theory and an understanding of the logarithm.
The top three in the advanced section (they generally presume a prior undergraduate-level class in probability theory and some amount of mathematical sophistication) are from professors who’ve written some of the most commonly used college textbooks on the subject. If I recall correctly, a first edition of the Yeung text was available for download through his course interface. MacKay’s text is available for free download from his site as well.
Feel free to post other video lectures or resources you may be aware of in the comments below.
Editor’s Update: With sadness, I’ll note that David MacKay died just days after this was originally posted.
There are potential solutions to the recent News Genius-gate incident, and simple notifications can go a long way toward helping prevent online bullying behavior.
There has been a recent brouhaha on the Internet (see related stories below) because of bad actors using News Genius (and potentially other web-based annotation tools like Hypothes.is) to comment on websites without their owner’s knowledge, consent, or permission. It’s essentially the internet version of talking behind someone’s back, but doing it while standing on their head and shouting with your fingers in their ears. Because of platform and network effects, such rude and potentially inappropriate commentary can have much greater reach than even the initial website could give it. Naturally in polite society, such bullying behavior should be curtailed.
This type of behavior is also not too different from more subtle concepts like subtweets or the broader issues platforms like Twitter are facing in which they don’t have proper tools to prevent abuse and bullying online.
A creator receives no notification if someone has annotated their content. –Ella Dawson
Towards a Solution: Basic Awareness
I think a major part of addressing abuse and providing consent is building in notifications, so that website owners are at least aware that their site is being marked up, highlighted, annotated, and commented on in other locations or by other platforms. The site owner then knows what’s happening and can be provided with information and tools to allow or disallow such interactions, particularly the ability to block individual bad actors while still supporting positive additions, thought, and communication. Ideally this blocking wouldn’t occur site-wide, which many may be tempted to do now as a knee-jerk reaction to recent events, but would be fine-grained enough to filter out the worst offenders.
Toward the end of notifying site owners, it would be great if any annotating activity triggered trackbacks, pingbacks, or the relatively newer and better Webmention protocol of the W3C, which comes out of the IndieWeb movement. Then site owners would at least have notifications about what is happening on their site that might otherwise be invisible to them. (And for the record, how awesome would it be if social media silos like Facebook, Twitter, Instagram, Google+, Medium, Tumblr, et al. would support webmentions too!?!)
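For the curious, a Webmention is deliberately simple: the sender discovers the endpoint advertised by the target page and POSTs two form-encoded URLs (source and target) to it. Here is a minimal, hypothetical Python sketch of those two pieces; the function names are my own, and a spec-compliant client would also check HTTP `Link` headers and use a real HTML parser rather than a regex:

```python
import re
from urllib.parse import urlencode, urljoin

def discover_webmention_endpoint(html, page_url):
    """Find an endpoint advertised via <link rel="webmention"> in page HTML.

    A regex-based sketch only; the W3C Webmention spec also allows the
    endpoint in an HTTP Link header or an <a> element.
    """
    match = re.search(
        r'<link[^>]+rel=["\']?webmention["\']?[^>]*href=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if match:
        # Relative endpoint URLs are resolved against the page URL.
        return urljoin(page_url, match.group(1))
    return None

def build_webmention_payload(source, target):
    """The notification itself is just a form-encoded POST body."""
    return urlencode({"source": source, "target": target})
```

The receiving endpoint then fetches the source URL, verifies it really links to the target, and (in the setup described above) can surface the comment on the original post.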
Perhaps there’s a way to further implement filters or tools (a la Akismet on platforms like WordPress) that allow site users to mark materials as spam, abusive, or “other” so that they are then potentially moved from “public” facing to “private” so that the original highlighter can still see their notes, but that the platform isn’t allowing the person’s own website to act as a platform to give safe harbor (or reach) to bad actors.
Further some site owners might appreciate gradable filters (G, PG, PG-13, R, X) so that either they or their users (or even parents of younger children) can filter what they’re willing to show on their site (or that their users can choose to see).
Consider also annotations on narrative forms that might be posted as spoilers: how can these be guarded against? What happens when even a well-meaning actor posts an annotation on page two that foreshadows that the butler did it, thereby ruining the surprise on the last page? Certainly there’s some value in having such a comment from an academic/literary perspective, but that doesn’t mean future readers will appreciate the spoiler. (Some CSS and a spoiler tag might easily and unobtrusively remedy the situation here?)
Certainly options can be built into the annotating platform itself as well as allowing server-side options for personal websites attempting to deal with flagrant violators and truly hard-to-eradicate cases.
Do you have a solution for helping to harden the Internet against bullies? Share it in the comments below.
- Genius Wants To Let Readers Annotate Any News Article. What Could Possibly Go Wrong? by Jessica Goldstein, ThinkProgress 2016-03-30
- Genius responds to Congresswoman Katherine Clark’s letter on preventing abuse by Noah Kulwin, Re/code 2016-03-29
- Misguided Genius by Chelsea Hassler, Slate 2016-03-28
- The Genius Problem by Chuq Von Rospach 2016-03-28
- Genius Web Annotator vs. One Young Woman With a Blog by Brady Dale, The Observer 2016-03-28
I agree wholeheartedly with Adam, though I don’t think I’d really seen any small issues except perhaps an odd CSS issue in formatting an <h2> tag somewhere. (Note: This comment applies to v1.2.3 of Academica; on 4/2/15 the theme publisher made a DRASTIC change to the theme, so take caution in upgrading!)
I have created a child-theme with one or two small customizations (slightly larger headings in side widgets and some color/text size changes), but otherwise have v1.2.3 working as perfectly as it was intended to. This includes the slideshow functionality on the homepage. See BoffoSocko as an example.
For those, perhaps including Adam, wanting to get the slider to work properly:
- Go to your WP Dashboard, hover over the menu tab “Appearance”, and click on “Customize”.
- On the “Featured Content” tab, enter a tag you want to use to feature content on the homepage of your site. (In my case, I chose “featured” and also clicked “Hide tag from displaying in post meta and tag clouds”.)
- Go to one or more posts (I think it works on up to 10 featurable posts) and tag them with the word you just used in the featured content setting (in my case, “featured”).
- Next, be sure to actually set a “Featured photo” for the post; 930×300 pixels is the optimal photo size, if I recall.
- Now when you visit your home page, the slider should work properly and include arrows to scroll through them (these aren’t as obvious on featured photos with white backgrounds).
- Note that on individual pages, you’ll still have static header image(s) which are also customizable in the “customize” section of the WP dashboard, which was mentioned in step 1.
I hope this helps.
@DuttonBooks What?! No appearances in his own back yard in Los Angeles? Let’s fix this…
Any intention of acquiring the new text Bibliotheca Fictiva by Freedman as well? http://www.quaritch.com/wp-content/uploads/sites/23/2014/09/Bibliotheca-Fictiva.pdf
I’m not seeing it available on Amazon yet…
[My comments posted to the original Facebook post follow below.]
I’m coming to this post a bit late as I’m playing a bit of catch up, but agree with it wholeheartedly.
In particular, applications to molecular biology and medicine have really begun to come to a heavy boil in just the past five years. This year appears to mark the biggest renaissance for the application of information theory to biology since Hubert Yockey, Henry Quastler, and Robert L. Platzman’s “Symposium on Information Theory in Biology” at Gatlinburg, Tennessee in 1956.
Upcoming/recent conferences/workshops on information theory in biology include:
- BIRS Workshop: Biological and Bio-Inspired Information Theory
- Entropy and Information in Biological Systems at NIMBios
- CECAM Workshop: Entropy in Biomolecular Systems
- ALife breakout session on Information Theoretic Incentives for Artificial Life (which will also spawn off a special issue of the journal Entropy)
At the beginning of September, Christoph Adami posted an awesome and very sound paper on arXiv entitled “Information-theoretic considerations concerning the origin of life” which truly portends to turn the science of the origin of life on its head.
I’ll note in passing, for those interested, that Claude Shannon’s famous master’s thesis at MIT (in which he applied Boolean algebra to electric circuits, enabling the digital revolution) and his subsequent “A Mathematical Theory of Communication” were so revolutionary that nearly everyone forgets his MIT Ph.D. thesis, “An Algebra for Theoretical Genetics”, which presaged cybernetics and the current applications of information theory to microbiology, and is probably as seminal as Sir R.A. Fisher’s applications of statistics to science in general and biology in particular.
For those commenting on the post who were interested in a layman’s introduction to information theory, I recommend John Robinson Pierce’s An Introduction to Information Theory: Symbols, Signals and Noise (Dover has a very inexpensive edition.) After this, one should take a look at Claude Shannon’s original paper. (The MIT Press printing includes some excellent overview by Warren Weaver along with the paper itself.) The mathematics in the paper really aren’t too technical, and most of it should be comprehensible by most advanced high school students.
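In fact, Shannon’s central quantity is simple enough to compute directly. Here is a minimal, hypothetical Python sketch of the entropy formula H = −Σ p·log₂(p) from his paper; the function name and example distributions are my own:

```python
import math

def shannon_entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)).

    Zero-probability outcomes are skipped, since p*log2(p) -> 0 as p -> 0.
    """
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly one bit of uncertainty per toss,
# while a heavily biased coin is less surprising and carries less information.
fair_coin = shannon_entropy([0.5, 0.5])    # 1.0 bit
biased_coin = shannon_entropy([0.9, 0.1])  # ~0.469 bits
```

This single formula, applied to sources and channels, is the backbone of everything in the lectures and texts listed above.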
For those who don’t understand the concept of entropy, I HIGHLY recommend Arieh Ben-Naim’s book Entropy Demystified: The Second Law Reduced to Plain Common Sense with Seven Simulated Games. He really does tear the concept down into its most basic form in a way I haven’t seen others come remotely close to, and which even my mother can comprehend (with no mathematics at all). (I recommend this presentation even to those with Ph.D.’s in physics because it is so truly fundamental.)
For the more advanced mathematicians, physicists, and engineers, Arieh Ben-Naim does a truly spectacular job of extending E.T. Jaynes’ work on information theory and statistical mechanics, arriving at a more coherent mathematical theory that conjoins the entropy of physics/statistical mechanics with that of Shannon’s information theory, in A Farewell to Entropy: Statistical Thermodynamics Based on Information.
For the advanced readers/researchers interested in more at the intersection of information theory and biology, I’ll also mention that I maintain a list of references, books, and journal articles in a Mendeley group entitled “ITBio: Information Theory, Microbiology, Evolution, and Complexity.”
Adeline, Path might be a reasonable tool for accomplishing what you’d like, but its original design is as a very small and incredibly personal social networking tool, so it’s not the best thing for your particular use case here. Toward that end, its ability to limit who sees what is highly unlikely to change, as they limit your “friends” to fewer than your Dunbar number in the first place. Their presupposition is that you’re only sharing things with your VERY closest friends.
For more functionality in the vein you’re looking at, you might consider some of the Google tools, which will allow you much more granularity in terms of sharing, tracking, and geotagging. First I’d recommend Google Latitude, which will use your cell phone’s GPS to constantly track your location, if you wish, with the ability to turn it on and off at will. This will allow you to go back and see exactly where you were on any given day you were sending it data. (It’s also been useful a few times when I’ve lost/left my phone while out of the house or in others’ cars and I can log in online to see exactly where my phone is right now.) Latitude will also allow you to share your physical location with others you designate, as well as to export portions of data sets for later use/sharing.
Unbeknownst to many, most cell phones and increasingly many cameras use GPS chips or wifi to geolocate your photos and include the location in the EXIF data embedded in the “digital fingerprint” of each photo (along with the resolution, date, time, what type of camera took the photo, etc.). For this reason, many privacy experts suggest you remove or edit your EXIF data prior to posting photos to public-facing social media sites, as it can reveal the location of your home, office, etc., which you may not mean to share with the world. There are a number of tools you can find online for viewing or editing your EXIF data.
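As a concrete illustration of the geolocation data in question: EXIF stores latitude and longitude as three degree/minute/second rational numbers plus a hemisphere reference tag (N/S/E/W). Here is a minimal, hypothetical Python sketch of the conversion to the signed decimal degrees most mapping tools expect; the function name is my own, and actually reading the tags out of an image file would require an imaging library, which isn’t shown:

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF GPS degree/minute/second rationals to signed decimal degrees.

    EXIF encodes GPSLatitude/GPSLongitude as three rationals, with the
    hemisphere in GPSLatitudeRef/GPSLongitudeRef ('N', 'S', 'E', 'W').
    """
    decimal = float(degrees) + float(minutes) / 60 + float(seconds) / 3600
    # Southern and western hemispheres are negative in decimal notation.
    return -decimal if ref in ("S", "W") else decimal

# 34° 3' 8.0" N (roughly downtown Los Angeles), as EXIF-style rationals:
lat = dms_to_decimal(Fraction(34), Fraction(3), Fraction(80, 10), "N")
```

This is exactly the kind of precision that makes un-stripped EXIF a privacy concern: a single photo can pin a location to within meters.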
You can then upload those photos to Google+, which will allow you to limit sharing of posts to whichever groups of people you’d prefer, with a high degree of granularity, including using email addresses for people who aren’t already on the service. (They actually have a clever backup option that, if selected, will allow your phone to automatically upload all your photos to G+ in the background, keeping them private to you for sharing at a later date if you choose.) I’m sure that with very little work you can find online tools (perhaps even Google Maps) that will let you upload photos and have them appear on mapping software. (Think of the recent Craigslist upgrade that takes posting data and maps it onto the OpenStreetMap.org platform.)
Finally, as part of Google’s Data Liberation initiative, you can go in and export all of your data for nearly all of their services, including Latitude, and from Picasa for photos. I think that playing around with these interlocking Google tools will give you exactly the type of functionality (and perhaps a little more than) you’re looking for.
Their user interface may not be quite as beautiful and slick as Path and may take half an hour of playing with to explore and configure your workflow exactly the way you want to use it, but I think it will give you a better data set with a higher degree of sharing granularity. (Alternately, you could always develop your own “app” for doing this as there are enough open API’s for many of these functions from a variety of service providers, but that’s another story for another time.)
To take it a step further, is there an easy way to integrate it into other social tools like Instapaper, ReadItLater, et al., or to have the full journal-article results emailed to my Kindle’s address so that the papers all show up for instant reading on my Kindle, tablet, or e-reader?!
Thanks for the tip Ellen!
I hope that someone discusses LibX in some of these presentations. It’s my favorite new research tool!
This is a great short article on bioengineering and synthetic biology written for the layperson. It’s also one of the best crash courses I’ve read on genetics in a while.