Bioinformatics is a broad discipline in which one common denominator is the need to produce and/or use software that can be applied to biological data in different contexts. To enable and ensure the replicability and traceability of scientific claims, it is essential that the scientific publication, the corresponding datasets, and the data analysis are made publicly available [1,2]. All software used for the analysis should be either carefully documented (e.g., for commercial software) or, better yet, openly shared and directly accessible to others [3,4]. The rise of openly available software and source code alongside concomitant collaborative development is facilitated by the existence of several code repository services such as SourceForge, Bitbucket, GitLab, and GitHub, among others. These resources are also essential for collaborative software projects because they enable the organization and sharing of programming tasks between different remote contributors. Here, we introduce the main features of GitHub, a popular web-based platform that offers a free and integrated environment for hosting the source code, documentation, and project-related web content for open-source projects. GitHub also offers paid plans for private repositories (see Box 1) for individuals and businesses as well as free plans including private repositories for research and educational use.
Tonight was the beginning of a new group of indiewebbers meeting up on the East side of the Los Angeles Area, in what we hope to be an ongoing in-person effort, particularly as we get nearer to IndieWeb Camp Los Angeles in November.
We met at Starbucks, 575 South Lake Avenue, Pasadena, CA.
Quiet Writing Hour
The quiet writing hour started off pretty well with three people, which quickly grew to six by the official start of the meeting, including what may be the youngest participants ever (at 6 months and 5 1/2 years old).
— ChrisAldrich (@ChrisAldrich) July 28, 2016
Introductions and Quick Demonstrations
- Chris Aldrich
- Angelo Gladding
- Bryan Cole, a retired photographer
- Jervey Tervalon, a writer, and his 6 month old daughter
- Evie (5 years old, private site)
Following introductions, I did a quick demo of the simple workflow I’ve been slowly perfecting for liking/retweeting posts from Twitter via mobile so that they post on my own site while simultaneously POSSEing to Twitter. Angelo showed a bit of his code and set-up for his custom-built site, based on a Python framework and inspired by Aaron Swartz’s early efforts. (He also has an interesting script that scrapes others’ sites for microformats data with an mf2 parser, which I’d personally like to see more of and hope he’ll open source. It found a few issues with some redundant/malformed rel=”me” links in the header of my own site that I’ll need to sort out shortly.)
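For the curious, scanning a page for rel=”me” links (the sort of redundant or malformed markup Angelo’s script caught on my site) can be sketched with Python’s standard library alone. This is an illustrative toy, not his actual script; a real consumer would use a full microformats2 parser such as mf2py, and the URLs here are made-up examples:

```python
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    """Collect href values from <a>/<link> elements whose rel includes 'me'."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attrs = dict(attrs)
        # rel is a space-separated list of values; "me" may appear among others
        rels = (attrs.get("rel") or "").split()
        if "me" in rels and attrs.get("href"):
            self.links.append(attrs["href"])

page = """
<head><link rel="me" href="https://twitter.com/example"></head>
<body><a rel="me" href="https://instagram.com/example">IG</a></body>
"""
parser = RelMeParser()
parser.feed(page)
print(parser.links)
```

Running something like this against your own homepage is a quick way to spot duplicate rel=”me” entries before an identity-consuming service trips over them.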
Bryan showed some recent work he’s done on his photography blog, which he’s slowly but surely been managing to cobble together from a self-hosted version of WordPress with help from friends and the local WordPress Meetup. (Big kudos to him for his sheer tenacity in building his site up!) Jervey described some of what he’d like to build as it relates to a WordPress based site he’s putting together for a literary journal, while his daughter slept peacefully until someone mentioned a silo named Facebook. 5 year old Evie showed off some coding work she’d done during the quiet writing hour on the Scratch Platform on iOS that she hopes to post to her own blog shortly, so she can share with her grandparents.
At the break, we managed to squeeze everyone in for a group selfie.
Peer-to-Peer Building and Help
Since many in the group were building with WordPress, we did a demo build on Evie’s (private) site by installing the IndieWeb Plugin and activating and configuring a few of the basic sub-plugins. We then built a small social links menu to demonstrate the ease of adding rel-me to an Instagram link as an example. We also showed a quick example of IndieAuth, followed by a quick build for doing PESOS from Instagram with proper microformats2 markup. Bryan had a few questions about his site from the first half of the meeting, so we wrapped up by working our way through a portion of those so he can proceed with some additional work before our next meeting.
Summary & Next Meeting
In all, not a bad showing for what I expected to be a group five people smaller than what we ultimately got! I can’t wait until the next meetup on either 8/10 or 8/24 (at the very worst), pending some scheduling. I hope to meet every two weeks, but we’ll definitely commit to at least once a month going forward.
Including Time Out, Time Further Out, Time Changes, Countdown: Time in Outer Space, and Time In, this series of albums commonly known as the Time Series from Dave Brubeck and the Dave Brubeck Quartet is a masterclass in how important time is in music as well as how it can evolve.
Here you’ll find Brubeck experimenting with time signatures including recordings of “Take Five” in 5/4 time, “Pick Up Sticks” in 6/4, “Unsquare Dance” in 7/4, “World’s Fair” in 13/4, and “Blue Rondo à la Turk” in 9/8.
This is a great way to spend the day/night when you have some active listening time.
Should I be adding major media outlets to my Facebook feed as family members? Changes by Facebook, highlighted in the article quoted below, may mean this is coming: The Atlantic can be my twin brother, and Foreign Affairs could be my other sister.
“News content posted by publishers will show up less prominently, resulting in less traffic to companies that have come to rely on Facebook audiences.” — Facebook to Change News Feed to Focus on Friends and Family in New York Times
After reading this article, I can only think that Facebook wrongly thinks that my family is so interesting (and believe me, I don’t think I’m any better, most of my posts–much like my face–are ones which only a mother could “like”/”love” and my feed will bear that out! BTW I love you mom.) The majority of posts I see there are rehashes of so-called “news” sites I really don’t care about or invitations to participate in games like Candy Crush Saga.
While I love keeping up with friends and family on Facebook, I’ve had to very heavily modify how I organize my Facebook feed to get what I want out of it because the algorithms don’t always do a very good job. Sadly, I’m probably in the top 0.0001% of people who take advantage of any of these features.
It really kills me that although publishers see quite a lot of traffic from social media silos (and particularly Facebook), they’re still losing sight of the power of owning your own website and posting there directly. Apparently a past littered with examples like Zynga and social reader tools hasn’t taught them the lesson to continue iterating on their own platforms. One day the rug will be completely pulled out from underneath them and real trouble will result. They’ll wish they’d put all their work and effort into improving their own product rather than allowing Facebook, Twitter, et al. to siphon off so many of their resources. If there’s one lesson we’ve learned from media over the years, it’s that owning your own means of distribution is a major key to success. Sharecropping one’s content out to social platforms while under pressure to change for the future is probably not a good idea.
Psst… With all this in mind, if you’re a family member or close friend who wants to
- have your own website;
- own your own personal data (which you can automatically syndicate to most of the common social media sites); and
- be in better control of your online identity,
I’ll offer to build you a simple one and host it at cost.
Advances in computing power, natural language processing, and digitization of text now make it possible to study a culture's evolution through its texts using a "big data" lens. Our ability to communicate relies in part upon a shared emotional experience, with stories often following distinct emotional trajectories, forming patterns that are meaningful to us. Here, by classifying the emotional arcs for a filtered subset of 1,737 stories from Project Gutenberg's fiction collection, we find a set of six core trajectories which form the building blocks of complex narratives. We strengthen our findings by separately applying optimization, linear decomposition, supervised learning, and unsupervised learning. For each of these six core emotional arcs, we examine the closest characteristic stories in publication today and find that particular emotional arcs enjoy greater success, as measured by downloads.
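The core of the paper’s pipeline can be gestured at in a few lines: slide a window across a text and average per-word sentiment scores to trace its emotional arc. The tiny lexicon and its values below are purely illustrative assumptions, not the hedonometer scores the authors actually use:

```python
# Toy sentiment lexicon (illustrative values only, not the paper's real scores).
LEXICON = {"happy": 2.0, "love": 2.0, "joy": 1.5,
           "sad": -2.0, "death": -2.5, "fear": -1.5}

def emotional_arc(words, window):
    """Mean lexicon score over a sliding window: one arc point per position."""
    scores = [LEXICON.get(w.lower(), 0.0) for w in words]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

# A toy "story": starts happy, dips into tragedy, recovers (rags-riches-rags-riches).
text = "joy love happy sad fear death sad happy love joy".split()
arc = emotional_arc(text, window=3)
print(arc)
```

At book length, the windows span thousands of words; the paper then decomposes many such arcs (via SVD and clustering) to recover the six core shapes.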
Instagram filter used: Normal
Photo taken at: Johns Hopkins University
Even in 2016, publishers and authors are still struggling when it comes to re-releasing decades-old books, but Penguin had a unique problem when it set out to publish a 30th anniversary edition of Richard Dawkins’s The Blind Watchmaker.

The Bookseller reports that Penguin decided to revive four programs Dawkins wrote in 1986. Written in Pascal for the Mac, The Watchmaker Suite was an experiment in algorithmic evolution. Users could run the programs and create a biomorph, and then watch it evolve across the generations.

And now you can do the same in your web browser.

A website, MountImprobable.com, was built by the publisher’s in-house Creative Technology team—comprising community manager Claudia Toia, creative developer Mathieu Triay and cover designer Matthew Young—who resuscitated and redeployed code Dawkins wrote in the 1980s and ’90s to enable users to create unique, “evolutionary” imprints. The images will be used as cover imagery on Dawkins’ trio to grant users an entirely individual, personalised print copy.
As a researcher, I fully appreciate the pro-commonplace book conceptualization of the first post, and the second takes things amazingly further with a plugin that allows one to easily display one’s hypothes.is annotations on one’s own WordPress-based site in a dead-simple fashion.
This functionality is a great first step, though honestly, in keeping with IndieWeb principles of owning one’s own data, I think it would be easier/better if Hypothes.is both accepted and sent webmentions. This would potentially allow me to physically own the data on my own site while still participating in the larger annotation community, as well as give me notifications when someone either comments on or augments one of my annotations, or even annotates one of my own pages (bits of which I’ve written about before).
Either way, kudos to Kris Shaffer for moving the ball forward!
My Hypothes.is Notebook
My IndieWeb annotations
I can also easily embed my recent annotations about the IndieWeb below:
[hypothesis user='chrisaldrich' tags='indieweb']
Part of my plans to (remotely) devote the weekend to the IndieWeb Summit in Portland were hijacked by the passing of Muhammad Ali. Wait… What?! How does that happen?
A year ago, I started a publishing company, and we came out with our first book, Amerikan Krazy, in late February. The author has a small back catalogue that’s out of print, so in conjunction with his book launch, we’ve been slowly releasing ebook versions of his old titles. Coincidentally one of them was a fantastic little book about Ali entitled Muhammad Ali Retrospective, so I dropped everything I was doing to get it finished up and out as a quick way of honoring his passing.
But while I was working on some of the minutiae, I’ve been thinking in the back of my mind about the ideas of marginalia, commonplace books, and Amazon’s siloed community of highlights and notes. Is there a decentralized web-based way of creating a construct similar to webmention that will allow all readers worldwide to highlight, mark up and comment across electronic versions of texts so that they can share them in an open manner while still owning all of their own data? And possibly a way to aggregate them at the top for big data studies in the vein of corpus linguistics?
I think there is…
It’ll take some effort, but effort that could have a worthwhile impact.
I have a few potential architectures in mind, but also want to keep online versions of books in the loop as well as potentially efforts like hypothes.is or even the academic portions of Genius.com which do web-based annotation.
If anyone in the IndieWeb, books, or online marginalia worlds has thought about this as well, I’d love to chat.
An exclusive look at data from the controversial web site Sci-Hub reveals that the whole world, both poor and rich, is reading pirated research papers.
Sci-Hub has been in the news quite a bit over the past half year, and the bookmarked article here gives some interesting statistics. I’ll preface the following editorial critique with the fact that I love John Bohannon’s work; I’m glad he’s spent the time to do the research he has. Most of the rest of the critique is aimed at the publishing industry itself.
From a journalistic standpoint, I find it disingenuous that the article didn’t actually hyperlink to Sci-Hub. Neither did it link out to (or fully quote) Alicia Wise’s Twitter post(s), nor link to her rebuttal list of 20 ways to access their content freely or inexpensively. Of course both of these are editorial decisions, and perhaps the rebuttal was so flimsy as to be unworthy of a link from such an esteemed publication anyway.
Sadly, Elsevier’s list of 20 ways of free/inexpensive access doesn’t really provide any simple coverage for graduate students or researchers in poorer countries, who are the likeliest groups using Sci-Hub, unless they’re going to fraudulently claim membership in a class they’re not part of; and is this morally any better than the original theft? It’s almost assuredly never used by patients, who seem to be covered under one of the options, as the option to do so is painfully undiscoverable past their typical $30/paper paywalls. Their patchwork hodgepodge of free access is difficult enough to discern on its own, and one must keep in mind that this is just one of dozens of publishers a researcher must navigate to find the one thing they’re looking for right now (not to mention the thousands of times they need to do this throughout a year, much less a career).
Consider this experiment, which could be a good follow-up to the article: is it easier to find and download a paper by title/author/DOI via Sci-Hub (a minute) versus through any of the other publishers’ platforms with a university subscription (several minutes) or without a subscription (an hour or more, up to days)? Just consider the time it would take to dig up every one of the 30 references in an average journal article: maybe half an hour via Sci-Hub versus the days and/or weeks it would take to jump through the multiple hoops to first discover, read about, and then gain access to and download them from the more than 14 providers (and this presumes the others provide some type of “access” like Elsevier).
Those who lived through the Napster revolution in music will recall that the dead simplicity of its system is primarily what nearly killed the music business, compared to the ecosystem that exists now with easy access through multiple streaming sites (Spotify, Pandora, etc.) or inexpensive paid options like iTunes. If the publishing business doesn’t want to get completely killed, they’re going to need to create the iTunes of academia. I suspect they’ll have internal bean-counters watching the percentage of the total (now apparently 5%) and will probably only do something once it passes a much larger threshold, though I imagine they’re really hoping the number stays stable, a signal that they needn’t really be concerned. They’re far more likely to continue to maintain their status quo practices.
Some of this ease-of-access argument is truly borne out by the statistics of open access papers which are downloaded by Sci Hub–it’s simply easier to both find and download them that way compared to traditional methods; there’s one simple pathway for both discovery and download. Surely the publishers, without colluding, could come up with a standardized method or protocol for finding and accessing their material cheaply and easily?
“Hart-Davidson obtained more than 100 years of biology papers the hard way—legally with the help of the publishers. ‘It took an entire year just to get permission,’ says Thomas Padilla, the MSU librarian who did the negotiating.” John Bohannon in Who’s downloading pirated papers? Everyone
Personally, I use relatively advanced tools like LibX, which happens to be offered by my institution and which I feel isn’t very well known, and it still takes me longer to find and download a paper than it would via Sci-Hub. God forbid some enterprising hacker were to create a LibX community version for Sci-Hub. Come to think of it, why haven’t any of the dozens of publishers built and supported simple tools like LibX which make their content easy to access? To borrow an analogy from the introduction of machine guns in World War I: why should modern researchers still be using single-load rifles against an enemy that has access to nuclear weaponry?
My last thought here comes on the heels of the two tweets from Alicia Wise mentioned, but not shown in the article:
— Alicia Wise (@wisealic) March 14, 2016
— Alicia Wise (@wisealic) March 14, 2016
She mentions that the New York Times charges more than Elsevier does for a full subscription. This is tremendously disingenuous, as Elsevier is but one of dozens of publishers to which one would have to subscribe to have access to the full panoply of material researchers are typically looking for. Further, neither Elsevier nor their competitors are making their material as easy to find and access as the New York Times does. Nor do they discount access in an attempt to find the subscription price their users find financially acceptable. Case in point: while I often read the New York Times, I rarely go over their monthly limit of articles to need any type of paid subscription. Solely because they made me an interesting offer to subscribe for 8 weeks for 99 cents, I took them up on it and renewed that deal for another subsequent 8 weeks. Not finding it worth the full $35/month price point, I attempted to cancel. I had to cancel the subscription via phone, but why? The NYT customer rep made me no less than 5 different offers at ever-decreasing price points–including the 99 cents for 8 weeks which I had been getting!!–to try to keep my subscription. Neither Elsevier nor any of their competitors has ever tried (much less so hard) to earn my business. (I’ll further posit that it’s because it’s easier to fleece at the institutional level with bulk negotiation, a model not too dissimilar to the textbook business pressuring professors on textbook adoption rather than trying to sell directly to the end consumer–the student–which I’ve written about before.)
(Trigger alert: apophasis to come.) And none of this is to mention the quality control that is (or isn’t) put into the journals or papers themselves. Fortunately one needn’t even go further than Bohannon’s other writings, like Who’s Afraid of Peer Review? Then there are the hordes of articles on poor research design, misuse of statistical analysis, and inability to repeat experiments. Not to give them any ideas, but lately it seems like Elsevier buying the Enquirer and charging $30 per article might not be a bad business decision. Maybe they just don’t want to play second banana to TMZ?
Interestingly, there’s a survey at the end of the article which indicates some additional sources of academic copyright infringement. I do have to wonder how the data from the survey will be used. There’s always the possibility that logged-in users indicating they’re circumventing copyright are opening themselves up to litigation.
I also found the concept of using the massive data store as a means of applied corpus linguistics for science an entertaining proposition. This type of research could mean great things for science communication in general. I have heard of people attempting to do such meta-analysis to guide the purchase of potential intellectual property for patent trolling as well.
Finally, for those who haven’t done it (ever or recently), I’ll recommend that it’s certainly well worth their time and energy to attend one or more of the many 30-60 minute sessions most academic libraries offer at the beginning of their academic terms to train library users on research tools and methods. You’ll save yourself a huge amount of time.
Two years ago today, I officially began to (try to) own all of my own web data and host it on my own server.
It began when I moved from WordPress.com to my own domain at BoffoSocko.com. At the time, I wasn’t aware of the IndieWeb movement, but shortly thereafter I ran across IndieWebCamp.org and began using their principles and philosophy, which seemed to me to be how the Web and the Internet should have worked from the start.
Though I still use corporate-owned social media sites (primarily for increased distribution), I no longer rely on them for being the sole source of my internet presence or identity.
Now, through the boffosocko.com domain and a variety of tools, I post all of my content here on my own site first and then syndicate it out to Facebook, Twitter, Google+, LinkedIn, Tumblr, and any other useful sites. [Sadly, because of API restrictions I do still natively post to Instagram, but using OwnYourGram, I’m able to programmatically post the same photo on my site simultaneously.] This means that if any of these silos were to disappear, I would still own all of my own content (including comments I make on other sites, which sometimes could be blogposts/articles in and of themselves, or worse, through administrative interfaces could actually not be approved/published, and therefore completely lost as if I hadn’t written them to begin with.)
Also slowly, but surely, I’ve been able to have all of the resulting interactions that take place on my content on many of these silos (Facebook, Twitter, Google+) appear back on my site in the comments section on the original post. This way, if you’re commenting and interacting on this post on Facebook (for example) and you comment there, the comment is ported over to the comment section on my own site where it exists for everyone to see and interact with.
If you’re interested in joining the movement you can see if there’s a meeting in your neighborhood (or even create your own.) For those living in the Los Angeles area, there’s a meeting this week on Wednesday, April 27th! Click here for more details. Later this year, there’s also a bigger Indie Web Camp here in Los Angeles too!
If you think the mission and philosophy of the Indie Web are interesting and would like some help setting something like this up for yourself, I’m happy to help! Just post a comment below or reply to this post (depending on what platform you’re reading this.)
I also want to say a BIG THANK YOU to all those in the indieweb community who’ve helped me come much farther and faster than I would have done by myself!
I’m copying some useful introductory material from IndieWebCamp.org below for those interested:
What is the IndieWeb?
The IndieWeb is a people-focused alternative to the ‘corporate web’.
Join the IndieWeb
- Interested? Get Started Now!
- View current discussions and recent changes to this site to see what we’ve been working on lately
- Check out projects we’re building and join the discussion
Beyond Blogging and Decentralization
The IndieWeb effort is different from previous efforts/communities:
- Principles over project-centrism. Others assume a monoculture of one project for all. We are developing a plurality of projects.
- Selfdogfood instead of email. Show before tell. Prioritize by scratching your own itches, creating, iterating on your own site.
- Design first, protocols & formats second. Focus on good UX & selfdogfood prototypes to create minimum necessary formats & protocols.
Perhaps most importantly, we are people-focused instead of project-focused, and have regular meetups where everyone is welcome:
Homebrew Website Club
Homebrew Website Club is a (bi)weekly meetup of creatives passionate about designing, improving, building, and actively using their own websites, sharing their successes and challenges with a like-minded and supportive community. We have adopted a similar structure as the classic Homebrew Computer Club meetings. 
We typically meet every other Wednesday* right after work, 18:30-19:30, across cities and online. Some locations also have a 17:30-18:30 Quiet Writing Hour beforehand. Edinburgh is meeting every week, and some cities meet on Tuesdays!
This morning I ran across a tweet from colleague Andrew Eckford:
— Andrew Eckford (@andreweckford) April 12, 2016
His response was probably innocuous enough, but I thought the article should be put to task a bit more.
“35 million academics, independent scholars and graduate students as users, who collectively have uploaded some eight million texts”
35 million users is an okay number, but their engagement must be spectacularly bad if only 8 million texts are available. How many researchers do you know who’ve published only a quarter of an article anywhere, much less gotten tenure?
“the platform essentially bans access for academics who, for whatever reason, don’t have an Academia.edu account. It also shuts out non-academics.”
They must have changed this, as pretty much anyone with an email address (including non-academics) can create a free account and use the system. I’m fairly certain that the platform was always open to the public from the start, but the article doesn’t seem to question the statement at all. If we want to argue about shutting out non-academics or even academics in poorer countries, let’s instead take a look at “big publishing” and their $30+/paper paywalls and publishing models, shall we?
“I don’t trust academia.edu”
Given his following discussion, I can only imagine what he thinks of big publishers in academia and that debate.
“McGill’s Dr. Sterne calls it “the gamification of research,”
Most research is too expensive to really gamify in such a simple manner. Many researchers are publishing to either get or keep their jobs and don’t have much time, information, or knowledge to try to game their reach in these ways. If anything, the institutionalization of “publish or perish” has already accomplished far more “gamification”; Academia.edu is just helping to increase the reach of the publication. Given that research shows most published research isn’t even read, much less cited, how bad can Academia.edu really be? [Cross reference: Reframing What Academic Freedom Means in the Digital Age]
If we look at Twitter and the blogging world as an analogy with Academia.edu and researchers, Twitter had a huge ramp-up starting in 2008 and helped bloggers obtain eyeballs/readers, but where is it now? Twitter, even with a reasonable business plan, is stagnant, with growing grumblings that it may be failing. I suspect that without significant changes, Academia.edu (which serves a much smaller niche audience than Twitter) will also eventually fall by the wayside.
The article rails against not knowing what the business model is or what’s happening with the data. I suspect that the platform itself doesn’t have a very solid business plan and they don’t know what to do with the data themselves except tout the numbers. I’d suspect they’re trying to build “critical mass” so that they can cash out by selling to one of the big publishers like Elsevier, who might actually be able to use such data. But this presupposes that they’re generating enough data; my guess is that they’re not. And on that subject, from a journalistic viewpoint, where’s the comparison to the rest of the competition including ResearchGate.net or Mendeley.com, which in fact was purchased by Elsevier? As it stands, this simply looks like a “hit piece” on Academia.edu, and sadly not a very well researched or reasoned one.
In sum, the article sounds to me like a bunch of Luddites running around yelling “fire”, particularly when I’d imagine that most of those referred to in the piece feed into the more corporate side of publishing in major journals rather than publishing on their own websites. I’d further suspect they’re probably not even practicing academic samizdat. It feels to me like the author and some of those quoted aren’t actively participating in the social media space enough to comment on it intelligently. If the paper wants to pick at the academy in this manner, why not write an exposé on the fact that most academics still have websites that look like they’re from 1995 (if, in fact, they have anything beyond their university’s mandated business-card placeholder) when there is a wealth of free and simple tools they could use? Let’s at least build a cart before we start whipping the horse.
For academics who really want to spend some time and thought on a potential solution to all of this, I’ll suggest that they start out by owning their own domain and own their own data and work. The #IndieWeb movement certainly has an interesting philosophy that’s a great start in fixing the problem; it can be found at http://www.indiewebcamp.com.
There are potential solutions to the recent News Genius-gate incident, and simple notifications can go a long way toward helping prevent online bullying behavior.
There has been a recent brouhaha on the Internet (see related stories below) because of bad actors using News Genius (and potentially other web-based annotation tools like Hypothes.is) to comment on websites without their owner’s knowledge, consent, or permission. It’s essentially the internet version of talking behind someone’s back, but doing it while standing on their head and shouting with your fingers in their ears. Because of platform and network effects, such rude and potentially inappropriate commentary can have much greater reach than even the initial website could give it. Naturally in polite society, such bullying behavior should be curtailed.
This type of behavior is also not too different from more subtle concepts like subtweets or the broader issues platforms like Twitter are facing in which they don’t have proper tools to prevent abuse and bullying online.
A creator receives no notification if someone has annotated their content.–Ella Dawson
Towards a Solution: Basic Awareness
I think that a major part of improving the issue of abuse and providing consent is building in notifications so that website owners will at least be aware that their site is being marked up, highlighted, annotated, and commented on in other locations or by other platforms. Then the site owner at least has the knowledge of what’s happening and can then be potentially provided with information and tools to allow/disallow such interactions, particularly if they can block individual bad actors, but still support positive additions, thought, and communication. Ideally this blocking wouldn’t occur site-wide, which many may be tempted to do now as a knee-jerk reaction to recent events, but would be fine grained enough to filter out the worst offenders.
To enable such notifications to site owners, it would be great if any annotating activity triggered trackbacks, pingbacks, or the relatively newer and better Webmention protocol of the W3C, which comes out of the IndieWeb movement. Then site owners would at least have notifications about what is happening with their site that might otherwise be invisible to them. (And for the record, how awesome would it be if social media silos like Facebook, Twitter, Instagram, Google+, Medium, Tumblr, et al. would support webmentions too!?!)
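For a sense of how lightweight Webmention is, here’s a minimal sketch in Python’s standard library: discover the target page’s webmention endpoint from its HTML, then POST `source` and `target` as form-encoded data. A real client would also check the HTTP `Link` header first and resolve relative URLs; the URLs below are placeholders:

```python
from html.parser import HTMLParser
from urllib.parse import urlencode
from urllib.request import Request, urlopen

class EndpointFinder(HTMLParser):
    """Find the first <link>/<a> whose rel includes 'webmention'."""
    def __init__(self):
        super().__init__()
        self.endpoint = None

    def handle_starttag(self, tag, attrs):
        if self.endpoint or tag not in ("link", "a"):
            return
        attrs = dict(attrs)
        if "webmention" in (attrs.get("rel") or "").split():
            self.endpoint = attrs.get("href")

def discover_endpoint(html):
    finder = EndpointFinder()
    finder.feed(html)
    return finder.endpoint

def send_webmention(endpoint, source, target):
    """POST source/target form-encoded, per the W3C Webmention spec."""
    data = urlencode({"source": source, "target": target}).encode()
    return urlopen(Request(endpoint, data=data))

page = '<link rel="webmention" href="https://example.com/webmention">'
print(discover_endpoint(page))
# A sender would then call (placeholder URLs, so not executed here):
# send_webmention(discover_endpoint(page),
#                 "https://my.site/annotation-of-their-page",
#                 "https://example.com/annotated-page")
```

An annotation platform doing exactly this on each new annotation would give site owners the awareness the Ella Dawson quote above says is missing.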
Perhaps there’s a way to further implement filters or tools (a la Akismet on platforms like WordPress) that allow site users to mark materials as spam, abusive, or “other” so that they are then potentially moved from “public” facing to “private” so that the original highlighter can still see their notes, but that the platform isn’t allowing the person’s own website to act as a platform to give safe harbor (or reach) to bad actors.
Further some site owners might appreciate gradable filters (G, PG, PG-13, R, X) so that either they or their users (or even parents of younger children) can filter what they’re willing to show on their site (or that their users can choose to see).
Consider also annotations on narrative forms that might be posted as spoilers–how can these be guarded against? What happens when even a well-meaning actor posts an annotation on page two that foreshadows that the butler did it, thereby ruining the surprise on the last page? Certainly there’s some value in having such a comment from an academic/literary perspective, but it doesn’t mean that future readers will necessarily appreciate the spoiler. (Some CSS and a spoiler tag might easily and unobtrusively remedy the situation here?)
Certainly options can be built into the annotating platform itself as well as allowing server-side options for personal websites attempting to deal with flagrant violators and truly hard-to-eradicate cases.
Do you have a solution for helping to harden the Internet against bullies? Share it in the comments below.
- Genius Wants To Let Readers Annotate Any News Article. What Could Possibly Go Wrong? by Jessica Goldstein, ThinkProgress 2016-03-30
- Genius responds to Congresswoman Katherine Clark’s letter on preventing abuse by Noah Kulwin, Re/code 2016-03-29
- Misguided Genius by Chelsea Hassler, Slate 2016-03-28
- The Genius Problem by Chuq Von Rospach 2016-03-28
- Genius Web Annotator vs. One Young Woman With a Blog by Brady Dale, The Observer 2016-03-28