👓 The Future of Publishing | LitFest Pasadena

RSVPed Unable to Attend: The Future of Publishing
May 19 @ 3:00 pm - 4:00 pm

Six small presses with a wide range of specialties—fiction, children’s books, literature in translation, poetry, cookbooks—talk about the challenges and opportunities in book publishing in the near future, and how they’re looking to innovate and look beyond the corporate Big Five publishing model.

Featured Guests: Neela Banerjee, Kaya Press; Ariana Stein, Lil Libros; Ross Ufberg, New Vessel; Tobi Harper, Red Hen Press; Julia Callahan, Rare Bird Books; Colleen Dunn Bates, Prospect Park – Moderator

Wishing I hadn’t already committed to going to Knott’s Berry Farm on Saturday so that I could attend this in the afternoon.

👓 After 5 years and $3M, here’s everything we’ve learned from building Ghost | Ghost

Read After 5 years and $3M, here's everything we've learned from building Ghost by John O'Nolan, Hannah Wolfe (Ghost)
It's always fun to use these milestones to take a step back and reflect on the journey so far. On previous birthdays I've talked about revenue milestones and product updates, but this year I'm going to focus more on all the things we've learned since we started.

In reading this, I took a look at downloading and self-hosting a copy of Ghost for myself, but the barrier and work involved were beyond my patience to bother with. For an open source project that prides itself on user experience, this seemed at odds. Perhaps this plays out better for the paid monthly customers? Even then, it doesn’t support many of the pieces of infrastructure I find de rigueur now: Webmention and microformats, which I understand they have no plans to add anytime soon.

Looking at their project pages and site, though, they’ve got a reasonable layout and sales pitch for a CMS project, though it’s probably a bit heavy on the selling when it could be simpler. Perhaps it might be a model for creating a stronger community-facing page for the WithKnown open source project, presuming the education-focused corporate side continues with the status quo?

They did seem relatively straightforward in positioning themselves against WordPress and in laying out what they can and can’t do. I’m curious what specifically they’re doing to attract journalists; I couldn’t find anything that would set it apart from the rest of the market other than their promise of ease of use.

There were some interesting insights for those working within the IndieWeb community as well as businesses which might build themselves upon it.

Highlights:

Decentralised platforms fundamentally cannot compete on ease of setup. Nothing beats the UX of signing up for a centralised application.

We spent a very long time trying to compete on convenience and simplicity. This was our biggest mistake and the hardest lesson to learn.

👓 This Is How a Newspaper Dies | Politico

Read This Is How a Newspaper Dies by Jack Shafer (POLITICO Magazine)
It’s with a spasm of profits.

This article outlines an intriguing method for plundering the carcass of a dying business to reap as much profit from it as one can while it dies. I suppose that if one is sure a segment is on its way out, one may as well exploit its customers to turn a profit.

I wonder how long it will take for traditional television and cable related businesses to begin using this model as more and more people cut the cord.

I just submitted a workshop/presentation proposal to WordCamp for Publishers: Chicago (Aug 8-10) on the topic of applying IndieWeb principles and new W3C recommended open web standards to publishing. I’m particularly excited because their theme is “Taking Back The Open Web”!

Fingers crossed!

Call for Speakers

👓 Save Barnes & Noble! | New York Times

Read Opinion | Save Barnes & Noble! by David Leonhardt (nytimes.com)
It’s in trouble. And Washington’s flawed antitrust policy is a big reason.

There are some squirrelly things that Amazon is managing to get away with, and they’re not all necessarily good.

❤️ aschweig tweet about WordCamp for Publishers

Liked a tweet by Adam Schweigert (Twitter)

On the topic of RSS audio feeds for The Gillmor Gang

Some suggestions for extracting audio only podcast-friendly feeds for one of my favorite shows.

I’ll start off with the fact that I’m a big fan of The Gillmor Gang and recommend it to anyone who is interested in the very bleeding edge of the overlap of technology and media. I’ve been listening almost since the beginning, and feel that digging back into their archives is a fantastic learning experience even for the well-informed. Most older episodes stand up well to the test of time.

The Problem

In the Doc Soup episode of The Gillmor Gang on 5/13/17, right at the very end, Steve Gillmor reiterated, “This isn’t a podcast. This was a podcast. It will always be a podcast, but streaming is where it’s at, and that’s what we’re doing right now.” As such, apparently TechCrunch (or Steve, for that matter) doesn’t think it’s worthwhile to have any sort of subscribe-able feed for those who prefer to listen to a time-shifted version of the show. (Ironically, in nearly every other episode they talk about the brilliance of the Apple TV, which is, guess what, a highly dedicated time-shifting viewing/listening device.) I suppose that their use of an old, but modified, TV test pattern hiding in the og:image metadata on their webpages is all too apropos.

It’s been several years (around the time of the Leo Incident?) since The Gillmor Gang has reliably published an audio version, a fact I find painful and frustrating, as I’m sure many others do as well. At least once or twice a year, I spend an hour or so searching around to find one, generally to no avail. While watching it live and participating in the live chat may be nice, I typically can’t manage the time slot, so I’m stuck trying to find time to watch the video versions on TechCrunch. Sadly, looking at four or more old, wrinkly, white men (Steve himself has cautioned, “cover your eyes, it’ll be okay…” without admitting it could certainly use some diversity) for an hour or more isn’t my bailiwick. Having video as the primary modality for this show is rarely useful. To me, it’s the ideas within the discussion which are worthwhile, so I only need a much lower bandwidth .mp3 audio file to be able to listen. And so sadly, the one thing this over-technologized show (thanks again TriCaster!) actually needs from a production perspective is a simple .mp3 (RSS, Atom, JSON feed, or h-feed) podcast feed!

Solutions

In recent batches of searching, I have come across a few useful resources for those who want simple, sweet audio out of the show, so I’m going to document them here.

First, some benevolent soul has been archiving audio copies of the show to The Internet Archive for a while. They can be found here (sorted by upload date): https://archive.org/search.php?query=subject%3A%22Gillmor+Gang%22&sort=-publicdate

In addition to this, one might also use other search methods, but this should give one most of the needed weekly content. Sadly IA doesn’t provide a useful feed out…
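
Since the Archive doesn’t offer a feed for that search, one workaround is to roll your own. Below is a minimal, hypothetical sketch (not an official tool) that uses the Archive’s public advancedsearch and metadata APIs to build a bare-bones RSS file with audio enclosures; the feed title and output filename are placeholders, and a strict podcatcher may prefer RFC 822 dates to the Archive’s timestamps.

```python
# A minimal sketch (not an official tool): build a bare-bones podcast-style RSS
# feed from Internet Archive items tagged "Gillmor Gang", using IA's public
# advancedsearch and metadata APIs. The feed title and output filename are
# placeholders; a strict podcatcher may prefer RFC 822 pubDates to IA's timestamps.
import requests
from xml.sax.saxutils import escape

SEARCH_URL = "https://archive.org/advancedsearch.php"
params = {
    "q": 'subject:"Gillmor Gang"',
    "fl[]": ["identifier", "title", "publicdate"],
    "sort[]": "publicdate desc",
    "rows": 25,
    "output": "json",
}
docs = requests.get(SEARCH_URL, params=params, timeout=30).json()["response"]["docs"]

items = []
for doc in docs:
    # The metadata endpoint lists each item's files; keep the first MP3 as the enclosure.
    meta = requests.get(f"https://archive.org/metadata/{doc['identifier']}", timeout=30).json()
    for f in meta.get("files", []):
        if f.get("name", "").lower().endswith(".mp3"):
            audio_url = f"https://archive.org/download/{doc['identifier']}/{f['name']}"
            items.append(
                "<item>"
                f"<title>{escape(doc.get('title', doc['identifier']))}</title>"
                f"<link>https://archive.org/details/{doc['identifier']}</link>"
                f'<enclosure url="{escape(audio_url)}" type="audio/mpeg"/>'
                f"<pubDate>{escape(doc.get('publicdate', ''))}</pubDate>"
                "</item>"
            )
            break

feed = (
    '<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"><channel>'
    "<title>Gillmor Gang (unofficial archive feed)</title>"
    "<link>https://archive.org/search.php?query=subject%3A%22Gillmor+Gang%22</link>"
    "<description>Audio items archived on archive.org</description>"
    + "".join(items)
    + "</channel></rss>"
)
with open("gillmor-gang.xml", "w", encoding="utf-8") as fh:
    fh.write(feed)
```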

To create a feed quickly, one can create a free Huffduffer account. (This is one of my favorite tools in the world by the way.) They’ve got a useful bookmarklet tool that allows you to visit pages and save audio files and metadata about them to your account. Further, they provide multiple immediate means of subscribing to your saves as feeds! Thus you can pick and choose which Gillmor Gang episodes (or any other audio files on the web for that matter) you’d like to put into your feed. Then subscribe in your favorite podcatcher and go.

For those who’d like to skip a step, Huffduffer also provides iTunes and a variety of other podcatcher specific feeds for content aggregated in other people’s accounts or even via tags on the service. (You can subscribe to what your friends are listening to!) Thus you can search for Gillmor Gang and BOOM! There are quick and easy links right there in the sidebar for you to subscribe to your heart’s content! (Caveat: you might have to filter out a few duplicates or some unrelated content, but this is the small price you’ll pay for huge convenience.)

My last potential suggestion might be useful to some, but is (currently) so time-delayed that it’s likely less practical. For a while, I’ve been making “Listen” posts to my website of things I listen to around the web. I’ve discovered that the way I do it, which involves transcluding the original audio files so the original host sees and gets the traffic, provides a subscribe-able faux-cast of content. You can use this RSS feed to capture the episodes I’ve been listening to lately. Note that I’m way behind right now and don’t always listen to episodes in chronological order, so it’s not as reliable a method for the more avid fan. Of course, now that I’ve got some reasonable solutions… I’ll likely catch up quickly and we’re off to the races again.

Naturally none of this chicanery would be necessary if the group of producers and editors of the show would take five minutes to create and host their own version. Apparently they have the freedom and flexibility not to worry about clicks and advertising (which I completely appreciate, by the way), and thus no pressing need to capture the other half of the audience they’re surely missing by not offering an easy-to-find audio feed. But I’m dead certain they’ve got the time, ability, and resources to easily do this, which makes it painful to see that they don’t. Perhaps one day they will, but I wouldn’t bet the house on it.

I’ve made requests and been holding my breath for years, but the best I’ve done so far is to turn blue and fall off my chair.

👓 The Scientific Paper Is Obsolete | The Atlantic

Read The Scientific Paper Is Obsolete by James Somers (The Atlantic)
The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

Not quite the cutting-edge stuff I would have liked, but generally an interesting overview of relatively new technology and UI setups like Mathematica and Jupyter.

🔖 PaperBadger

Bookmarked PaperBadger by Mozilla Science (GitHub)
Issuing badges to credit authors for their work on academic papers https://badges.mozillascience.org/

Exploring the use of digital badges for crediting contributors to scholarly papers for their work

As the research environment becomes more digital, we want to test how we can use this medium to help bring transparency and credit for individuals in the publication process.

This work is a collaboration with publishers BioMed Central (BMC), Ubiquity Press (UP) and the Public Library of Science (PLoS); the biomedical research foundation, The Wellcome Trust; the software and technology firm Digital Science; the registry of unique researcher identifiers, ORCID; and the Mozilla Science Lab.

h/t to Greg McVerry via https://chat.indieweb.org/dev/2018-04-04#t1522869725219200

👓 Introducing Subscribe with Google | Google

Read Introducing Subscribe with Google by Jim Albrecht (www.blog.google)
Making digital subscriptions simple by making it easier to subscribe and enjoy premium content

Interesting to see this roll out as Facebook is having some serious data collection problems. This looks a bit like a means for Google to directly link users with the content they’re consuming online and then leverage it in much the same way that Facebook was with apps and companies like Cambridge Analytica.

Highlights, Quotes, & Marginalia

Paying for a subscription is a clear indication that you value and trust your subscribed publication as a source. So we’ll also highlight those sources across Google surfaces


So Subscribe with Google will also allow you to link subscriptions purchased directly from publishers to your Google account—with the same benefits of easier and more persistent access.


you can then use “Sign In with Google” to access the publisher’s products, but Google does the billing, keeps your payment method secure, and makes it easy for you to manage your subscriptions all in one place.

I immediately wonder who owns my related subscription data. Is the publisher only seeing me as a lumped Google proxy, or do they get my name, email address, credit card information, and other details?

How will publishers be able (or not) to contact me? What effect will this have on potential customer retention?

Exactly five years ago to the day I was excited about the possibilities of Digg Reader:

Now they’ve announced they’re shutting down. It seems to me that, from a UI perspective, they put in only a bare minimum of effort to build out their reader and ceased iterating on it the day it opened.

This is the second reader to shut down recently, but I’m more excited about the idea of Microsub and what it may mean for the future of feed readers.

👓 Open web annotation of audio and video | Jon Udell

Read Open web annotation of audio and video by Jon Udell (Jon Udell)
Text, as the Hypothesis annotation client understands it, is HTML, or PDF transformed to HTML. In either case, it’s what you read in a browser, and what you select when you make an annotation. What’s the equivalent for audio and video? It’s complicated because although browsers enable us to select passages of text, the standard media players built into browsers don’t enable us to select segments of audio and video. It’s trivial to isolate a quote in a written document. Click to set your cursor to the beginning, then sweep to the end. Now annotation can happen. The browser fires a selection event; the annotation client springs into action; the user attaches stuff to the selection; the annotation server saves that stuff; the annotation client later recalls it and anchors it to the selection. But selection in audio and video isn’t like selection in text. Nor is it like selection in images, which we easily and naturally crop. Selection of audio and video happens in the temporal domain. If you’ve ever edited audio or video you’ll appreciate what that means. Setting a cursor and sweeping a selection isn’t enough. You can’t know that you got the right intro and outro by looking at the selection. You have to play the selection to make sure it captures what you intended. And since it probably isn’t exactly right, you’ll need to make adjustments that you’ll then want to check, ideally without replaying the whole clip.

Jon Udell has been playing around with media fragments to create some new functionality in Hypothes.is. The nice part is that he’s created an awesome little web service for quickly and easily editing media fragments online for audio and video (including YouTube videos) which he’s also open sourced on GitHub.

I suspect that media fragments experimenters like Aaron Parecki, Marty McGuire, Kevin Marks, and Tantek Çelik will appreciate what he’s doing and will want to play as well as possibly extend it. I’ve already added some of the outline to the IndieWeb wiki page for media fragments (and a link to fragmentions) which has some of their prior work.

I too look forward to a day when web browsers have some of this standardized and built in as core functionality.

Highlights, Quotes, & Marginalia

Open web annotation of audio and video

This selection tool has nothing intrinsically to do with annotation. Its job is to make your job easier when you are constructing a link to an audio or video segment.

I’m reminded of a JavaScript tool written by Aaron Parecki that automatically adds a start fragment to the URL of his page when the audio on the page is paused. He’s documented it here: https://indieweb.org/media_fragment


(If I were Virginia Eubanks I might want to capture the pull quote myself, and display it on my book page for visitors who aren’t seeing it through the Hypothesis lens.)

Of course, how would she know that the annotation exists? Here’s another example of where adding webmentions to Hypothesis for notifications could be useful, particularly when they’re more widely supported. I’ve outlined some of the details here in the past: http://boffosocko.com/2016/04/07/webmentions-for-improving-annotation-and-preventing-bullying-on-the-web/

Organizing my research related reading

There’s so much great material out there to read and not nearly enough time. The question becomes: “How to best organize it all, so you can read even more?”

I just came across a tweet from Michael Nielsen about the topic, which is far deeper than even a few tweets could do justice to, so I thought I’d sketch out a few basic ideas about how I’ve been approaching it over the last decade or so. Ideally I’d like to circle back around to this and better document more of the individual aspects or maybe even make a short video, but for now this will hopefully suffice to add to the conversation Michael has started.

Keep in mind that this is an evolving system which I still haven’t completely perfected (and may never), but to a great extent it works relatively well and I still easily have the ability to modify and improve it.

Overall Structure

The first piece of the overarching puzzle is to have a general structure for finding, collecting, triaging, and then processing all of the data. I’ve essentially built a simple funnel system for collecting all the basic data in the quickest manner possible. With the basics down, I can later skim through various portions to pick out the things I think are the most valuable and move them along to the next step. Ultimately I end up reading the best pieces on which I make copious notes and highlights. I’m still slowly trying to perfect the system for best keeping all this additional data as well.

Since I’ve seen so many apps and websites come and go over the years and lost lots of data to them, I far prefer to use my own personal website for doing a lot of the basic collection, particularly for online material. Toward this end, I use a variety of web services, RSS feeds, and bookmarklets to quickly accumulate the important pieces into my personal website which I use like a modern day commonplace book.

Collecting

In general, I’ve been using the Inoreader feed reader to track a large variety of RSS feeds, from various clearinghouse sources (including things like ProQuest custom searches) down to individual researchers’ blogs, as a means of quickly pulling in large amounts of research material. It’s one of the more flexible readers out there, with a huge number of useful features including the ability to subscribe to OPML files, which many readers don’t support.
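
For those who haven’t run into OPML before, a subscription list is just a small XML file of feed URLs that a reader like Inoreader can import wholesale. A minimal illustrative example (the second feed URL is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>Research feeds (illustrative example)</title></head>
  <body>
    <outline text="arXiv math.IT" type="rss" xmlUrl="http://arxiv.org/rss/math.IT"/>
    <outline text="A researcher's blog (placeholder)" type="rss" xmlUrl="https://example.com/feed/"/>
  </body>
</opml>
```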

As a simple example, arXiv.org has an RSS feed for the topic of “information theory” at http://arxiv.org/rss/math.IT which I subscribe to. I can quickly browse through the feed and, based on titles and/or abstracts, “star” the items I find most interesting within the reader. I have a custom recipe set up for the IFTTT.com service that pulls in all these starred articles and creates new posts for them on my WordPress blog. To these posts I can add a variety of metadata, including top-level categories and lower-level tags, as well as any other metadata I’m interested in.
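
The IFTTT recipe itself isn’t reproduced here, but a rough Python equivalent of that starred-item funnel might look like the sketch below, assuming a WordPress site with the REST API enabled and an application password; the site URL, credentials, and keyword filter (standing in for manually starring items) are all hypothetical.

```python
# A rough, hypothetical stand-in for the IFTTT recipe described above: pull the
# arXiv math.IT RSS feed and create draft posts on a WordPress site via its REST
# API. The site URL, credentials, and keyword filter (standing in for manually
# "starring" items in the reader) are all placeholders.
import feedparser
import requests

FEED = "http://arxiv.org/rss/math.IT"
WP_API = "https://example.com/wp-json/wp/v2/posts"  # hypothetical site
AUTH = ("my-user", "my-application-password")       # placeholder credentials
KEYWORDS = ("entropy", "channel capacity")          # crude stand-in for starring

for entry in feedparser.parse(FEED).entries:
    text = (entry.title + " " + entry.get("summary", "")).lower()
    if not any(keyword in text for keyword in KEYWORDS):
        continue
    post = {
        "title": entry.title,
        "content": f'<a href="{entry.link}">{entry.link}</a>\n\n{entry.get("summary", "")}',
        "status": "draft",  # leave as a draft so it can be triaged later
    }
    response = requests.post(WP_API, json=post, auth=AUTH, timeout=30)
    response.raise_for_status()
    print("Created draft", response.json()["id"], "for:", entry.title)
```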

I have similar incoming funnel entry points via many other web services as well. On platforms like Twitter, for example, I have workflows that use services like IFTTT.com or Zapier to push URLs easily to my website. I can quickly “like” a tweet and a background process will suck that tweet and any URLs within it into my system for future processing. This type of workflow extends to a variety of sites where I might consume potential material I want to read and process. (Think academic social services like Mendeley, Academia.edu, Diigo, or even less academic ones like Twitter, LinkedIn, etc.) Many of these services have storage abilities of their own and also offer simple browser bookmarklets that allow me to add material to them. So with a quick click, it’s saved to the service and then automatically ported into my website almost without friction.

My WordPress-based site uses the Post Kinds Plugin which takes incoming website URLs and does a very solid job of parsing those pages to extract much of the primary metadata I’d like to have without requiring a lot of work. For well structured web pages, it’ll pull in the page title, authors, date published, date updated, synopsis of the page, categories and tags, and other bits of data automatically. All these fields are also editable and searchable. Further, the plugin allows me to configure simple browser bookmarklets so that with a simple click on a web page, I can pull its URL and associated metadata into my website almost instantaneously. I can then add a note or two about what made me interested in the piece and save it for later.
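
To illustrate the kind of parsing involved (this isn’t the Post Kinds Plugin’s actual code), a small sketch that pulls common Open Graph and standard meta fields from a bookmarked URL might look something like this:

```python
# Illustrative only: not the Post Kinds Plugin's actual code, just the kind of
# metadata extraction such a parser performs on a bookmarked page.
import requests
from bs4 import BeautifulSoup

def extract_metadata(url: str) -> dict:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")

    def og(prop):
        # Open Graph tags look like <meta property="og:title" content="...">
        tag = soup.find("meta", property=f"og:{prop}")
        return tag["content"] if tag and tag.has_attr("content") else None

    published = soup.find("meta", property="article:published_time")
    return {
        "url": url,
        "title": og("title") or (soup.title.string.strip() if soup.title and soup.title.string else None),
        "summary": og("description"),
        "site_name": og("site_name"),
        "published": published["content"] if published and published.has_attr("content") else None,
    }

if __name__ == "__main__":
    print(extract_metadata("https://example.com/some-article"))  # placeholder URL
```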

Note here that I’m usually more interested in saving material for later as quickly as I possibly can. In this part of the process, I’m rarely ever interested in reading anything immediately. I’m most interested in finding it, collecting it for later, and moving on to the next thing. This is also highly useful for things I find during my busy day that I can’t find time for at the moment.

As an example, here’s a book I’ve bookmarked to read simply by clicking “like” on a tweet I came across late last year. You’ll notice at the bottom of the post, I’ve optionally syndicated copies of the post to other platforms to “spread the wealth” as it were. Perhaps others following me via other means may see it and find it useful as well?

Triaging

At regular intervals during the week I’ll sit down for an hour or two to triage all the papers and material I’ve been sucking into my website. This typically involves reading through lots of abstracts in a bit more detail to better figure out what I want to read now and what I’d like to read at a later date. I can delete the irrelevant material if I choose, or I can add follow-up dates to custom fields for later reminders.

Slowly but surely I’m funneling down a tremendous amount of potential material into a smaller, more manageable amount that I’m truly interested in reading on a more in-depth basis.

Document storage

Calibre with GoodReads sync

Even for things I’ve winnowed down, there is still a relatively large amount of material, much of which I’ll want to save and personally archive. For a lot of this function I rely on the free multi-platform desktop application Calibre. It’s essentially an iTunes-like interface, but it’s built specifically for e-books and other documents.

Within it I maintain a small handful of libraries: one for personal e-books, one for research related textbooks/e-books, and another for journal articles. It has a very solid interface and is extremely flexible in terms of configuration and customization. You can create a large number of custom libraries and create your own searchable and sort-able fields with a huge variety of metadata. It often does a reasonable job of importing e-books, .pdf files, and other digital media and parsing out their metadata, which prevents one from needing to do some of that work manually. With some well-maintained metadata, one can very quickly search and sort a huge number of documents as well as quickly prioritize them for action. Additionally, the system does a pretty solid job of converting files from one format to another, so that things like converting an .epub file into a .mobi format for Kindle are automatic.
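
Calibre also ships command-line tools alongside the GUI, so pieces of this can be scripted if desired; a tiny sketch using its bundled ebook-convert and calibredb tools (with placeholder filenames):

```python
# A tiny automation sketch using Calibre's bundled command-line tools,
# ebook-convert and calibredb; the filenames here are placeholders.
import subprocess

# Convert an EPUB to MOBI for a Kindle, as the GUI's conversion feature does.
subprocess.run(["ebook-convert", "some-paper.epub", "some-paper.mobi"], check=True)

# Add a new document to the default Calibre library; metadata is parsed on import.
subprocess.run(["calibredb", "add", "another-article.pdf"], check=True)
```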

Calibre stores the physical documents either in local computer storage, or even better, in the cloud using any of a variety of services including Dropbox, OneDrive, etc. so that one can keep one’s documents in the cloud and view them from a variety of locations (home, work, travel, tablet, etc.)

I’ve been a very heavy user of GoodReads.com for years to bookmark and organize my physical and e-book library and anti-libraries. Calibre has an exceptional plugin for GoodReads that syncs data across the two. This plugin (and a few others) is exceptionally good at pulling in missing metadata to minimize the amount that must be done by hand, which can be tedious.

Within Calibre I can manage my physical books, e-books, journal articles, and a huge variety of other document-related forms and formats. I can also use it to further triage the things I intend to read and order them to the nth degree. My current Calibre libraries have over 10,000 documents in them, including over 2,500 textbooks as well as records of most of my 1,000+ physical books. Calibre can also be used to add records for documents one would ultimately like to acquire but doesn’t currently have access to.

BibTeX and reference management

In addition to everything else, Calibre also has some well-customized pieces for dovetailing all its metadata into a reference management system. It’ll allow one to export data in a variety of formats for document publishing and reference management, including BibTeX, amongst many others.
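
For reference, a typical BibTeX entry of the sort such an export produces looks like the following; the fields shown are illustrative rather than Calibre’s exact output.

```bibtex
@article{shannon1948mathematical,
  author  = {Shannon, Claude E.},
  title   = {A Mathematical Theory of Communication},
  journal = {Bell System Technical Journal},
  year    = {1948},
  volume  = {27},
  pages   = {379--423}
}
```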

Reading, Annotations, Highlights

Once I’ve winnowed down the material I’m interested in, it’s time to start actually reading. I’ll often use Calibre to directly send my documents to my Kindle or other e-reading device, but one can also read them on one’s desktop with a variety of readers, or even from within Calibre itself. With a click or two, I can automatically email documents to my Kindle, and Calibre will also auto-format them appropriately before doing so.

Typically I’ll send them to my Kindle which allows me a variety of easy methods for adding highlights and marginalia. Sometimes I’ll read .pdf files via desktop and use Adobe to add highlights and marginalia as well. When I’m done with a .pdf file, I’ll just resave it (with all the additions) back into my Calibre library.

Exporting highlights/marginalia to my website

For Kindle related documents, once I’m finished, I’ll use direct text file export or tools like clippings.io to export my highlights and marginalia for a particular text into simple HTML and import it into my website system along with all my other data. I’ve briefly written about some of this before, though I ought to better document it. All of this then becomes very easily searchable and sort-able for future potential use as well.
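
As a rough illustration of the plain-text export route (this isn’t how clippings.io works internally), a small sketch that parses a Kindle’s My Clippings.txt file and emits simple HTML for import might look like:

```python
# A minimal sketch (not clippings.io itself): parse a Kindle "My Clippings.txt"
# file and emit simple HTML, one blockquote per highlight, for importing elsewhere.
from html import escape

def parse_clippings(path="My Clippings.txt"):
    # Kindle separates entries with a line of ten equals signs; each entry has a
    # title/author line, a location/date line, and then the highlighted text.
    with open(path, encoding="utf-8-sig") as fh:
        raw = fh.read()
    for block in raw.split("=========="):
        lines = [line.strip() for line in block.strip().splitlines() if line.strip()]
        if len(lines) >= 3:
            yield {"title": lines[0], "meta": lines[1], "text": " ".join(lines[2:])}

def to_html(clippings):
    parts = []
    for clip in clippings:
        parts.append(
            f"<blockquote>{escape(clip['text'])}</blockquote>"
            f"<p><cite>{escape(clip['title'])}</cite>, {escape(clip['meta'])}</p>"
        )
    return "\n".join(parts)

if __name__ == "__main__":
    print(to_html(parse_clippings()))
```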

Here’s an example of some public notes, highlights, and other marginalia I’ve posted in the past.

Synthesis

Eventually, over time, I’ve built up a huge amount of research related data in my personal online commonplace book that is highly searchable and sortable! I also have the option to make these posts and pages public, private, or even password protected. I can create accounts on my site for collaborators to use and view private material that isn’t publicly available. I can also share posts via social media and use standards like webmention and tools like brid.gy so that comments and interactions with these pieces on platforms like Facebook, Twitter, Google+, and others are imported back to the relevant portions of my site as comments. (I’m doing it with this post, so feel free to try it out yourself by commenting on one of the syndicated copies.)
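
Brid.gy does the heavy lifting of watching the syndicated copies, but the notification it sends back is just a webmention, which is a tiny protocol in its own right. A minimal, hypothetical sender sketch (endpoint discovery simplified to HTML rel="webmention" links, with placeholder URLs):

```python
# A minimal Webmention sender sketch per the W3C spec: discover the target's
# endpoint and POST form-encoded source/target. Discovery here is simplified to
# HTML rel="webmention" links (a full client also checks the HTTP Link header);
# the URLs in the example call are placeholders.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def discover_endpoint(target):
    resp = requests.get(target, timeout=30)
    soup = BeautifulSoup(resp.text, "html.parser")
    for tag in soup.find_all(["link", "a"], href=True):
        if "webmention" in (tag.get("rel") or []):
            return urljoin(resp.url, tag["href"])
    return None

def send_webmention(source, target):
    endpoint = discover_endpoint(target)
    if not endpoint:
        raise ValueError("no webmention endpoint found for target")
    resp = requests.post(endpoint, data={"source": source, "target": target}, timeout=30)
    return resp.status_code  # 201 or 202 generally means the mention was accepted

if __name__ == "__main__":
    print(send_webmention("https://example.com/my-reply", "https://example.org/their-post"))
```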

Now when I’m ready to begin writing something about what I’ve read, I’ve got all the relevant pieces, notes, and metadata in one centralized location on my website. Synthesis becomes much easier. I can even have open drafts of things as I’m reading and begin laying things out there directly if I choose. Because it’s all stored online, it’s eminently available from almost anywhere I can connect to the web. As an example, I used a few portions of this workflow to actually write this post.

Continued work

Naturally, not all of this is static, and it continues to improve and evolve over time. In particular, I’m doing continued work on my personal website so that I’m able to own as much of the workflow and data there as possible. Ideally I’d love to have all of the Calibre-related pieces on my website as well.

Earlier this week I even had conversations about creating new post types on my website for things that I want to read, to potentially better display and document them explicitly. When I can, I try to document some of these pieces either here on my own website or in various places on the IndieWeb wiki. In fact, the IndieWeb for Education page might be a good place to start browsing for those interested.

One of the added benefits of having a lot of this data on my own website is that it not only serves as my research/data platform, but it also has the traditional ability to serve as a publishing and distribution platform!

Currently, I’m doing most of my research related work in private or draft form on the back end of my website, so it’s not always publicly available, though I often think I should make more of it public both for the value of its aggregated nature and for the benefit it might provide to improving scientific communication. Just think: if you were interested in some of the obscure topics I am, you could have a pre-curated RSS feed of all the things I’ve filtered through piped into your own system. Now multiply this across hundreds of thousands of other scientists. Michael Nielsen posts some useful things to his Twitter feed and his website, but what I wouldn’t give to see far more of who and what he’s following, bookmarking, and actually reading. While many might find these minutiae tedious, I guarantee that people in his associated fields would find some serious value in it.

I’ve tried hundreds of other apps and tools over the years, but more often than not, they only cover a small fraction of the necessary moving pieces within the much larger apparatus that a working researcher and writer requires. This often means using dozens of specialized tools, with a huge duplication of data effort across them. It also presumes these tools will be around for more than a few years and will allow easy import/export of one’s hard-fought data and the time invested in using them.

If you’re aware of something interesting in this space that might be useful, I’m happy to take a look at it. Even if I might not use the service itself, perhaps it’s got a piece of functionality that I can recreate within my own site and workflow somehow?

If you’d like help in building and fleshing out a system similar to the one I’ve outlined above, I’m happy to help do that too.

👓 Project Gutenberg blocks German users after court rules in favor of Holtzbrinck subsidiary | TeleRead

Read Project Gutenberg blocks German users after court rules in favor of Holtzbrinck subsidiary by Chris Meadows (TeleRead)
The global Internet and highly territorial real world have had a number of collisions, especially where ebook rights are concerned. The most recent such dispute involves Project Gutenberg, a well-respected public domain ebook provider—in fact, the oldest. It concerns 18 German-language books by three German authors. As a result of a German lawsuit, Project Gutenberg has blocked Germany from viewing the Gutenberg web site. The books in question are out of copyright in the United States, because at the time they passed into the public domain US copyrights were based on the period after publication rather than the author’s life. The three authors involved are Heinrich Mann (died in 1950), Thomas Mann (1955) and Alfred Döblin (1957).

Some interesting thoughts on cross border intellectual property and copyright. Even if a site blocks the content, there are easy enough means of getting around it that local jurisdictions would need to enforce things locally anyway. Why bother with the intermediate step?

👓 Book clinic: why do publishers still issue hardbacks? | The Guardian

Read Book clinic: why do publishers still issue hardbacks? by Philip Jones (the Guardian)
The editor of the Bookseller explains why the hardback format will be with us for a while yet

An interesting example of “signaling” value in the publishing industry. I’m curious how this might play out over a longer study of the evolution of books and written material.
