👓 The Internet is going the wrong way | Scripting News

Read The Internet is going the wrong way by Dave Winer (Scripting News)

Click a link in a web browser, it should open a web page, not try to open an app which you may not have installed. This is what Apple does with podcasts and now news.

Facebook is taking the place of blogs, but doesn't permit linking, styles. Posts can't have titles or include podcasts. As a result these essential features are falling into disuse. We're returning to AOL. Linking, especially is essential.

Google is forcing websites to change to support HTTPS. Sounds innocuous until you realize how many millions of historic domains won't make the switch. It's as if a library decided to burn all books written before 2000, say. The web has been used as an archival medium, it isn't up to a company to decide to change that, after the fact.

Medium, a blogging site, is gradually closing itself off to the world. People used it for years as the place-of-record. I objected when I saw them do this, because it was easy to foresee Medium pivoting, and they will pivot again. The final pivot will be when they go off the air entirely, as commercial blogging systems eventually do.

A frequently raised warning, and one that’s possibly not taken seriously enough.


🎧 <A> | Adactio

Listened to by Jeremy Keith from adactio.com

The opening keynote from the inaugural HTML Special held before CSS Day 2016 in Amsterdam.

The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else.
— Umberto Eco, Foucault’s Pendulum

I wasn’t able to attend the original presentation, but I think it’s even more valuable to listen to it all alone rather than in what was assuredly a much larger crowd. There is a wonderful presence in this brief history of the internet, made all the more intriguing by Jeremy’s delivery of it almost as if it were poetry about technology. I find he’s even managed to give it an interesting structured format which, in many senses, mirrors the web itself.

I hope that if you’re starting your adventure on the web, you manage to find this as one of the first links that starts you off on your journey. It’s a great place to start.


👓 Google Condemns the Archival Web | Doc Searls

Read Google Condemns the Archival Web by Doc Searls (doc.blog)
The archival Web—the one you see through the protocol HTTP—will soon be condemned, cordoned off behind Google's police tape, labeled "insecure" on every current Chrome browser. For some perspective on this, imagine if suddenly all the national parks in the world became forbidden zones because nature created them before they could only be seen through crypto eyeglasses. Every legacy website, nearly all of which were created with no malice, commit no fraud and distribute no malware, will become haunted houses: still there, but too scary for most people to visit. It's easy to imagine, and Google wants you to imagine it.

👓 Federal Judge Says Embedding a Tweet Can Be Copyright Infringement | EFF

Read Federal Judge Says Embedding a Tweet Can Be Copyright Infringement (Electronic Frontier Foundation)
Rejecting years of settled precedent, a federal court in New York has ruled [PDF] that you could infringe copyright simply by embedding a tweet in a web page. Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and...

This is an insane bit of news and could have some chilling effects on all areas of the web.


Reply to Laying the Standards for a Blogging Renaissance by Aaron Davis

Replied to Laying the Standards for a Blogging Renaissance by Aaron Davis (Read Write Respond)
With the potential demise of social media, does this offer a possible rebirth of blogging communities and the standards they are built upon?

Aaron, some excellent thoughts and pointers.

A lot of your post also reminds me of Bryan Alexander’s relatively recent post I defy the world and go back to RSS.

I completely get the concept of what you’re getting at with harkening back to the halcyon days of RSS. I certainly love, use, and rely on it heavily both for consumption as well as production. Of course there’s also the competing standard of Atom still powering large parts of the web (including GNU Social networks like Mastodon). But almost no one looks back fondly on the feed format wars…

I think that while many are looking back on the “good old days” of the web, we should not forget the difficult and fraught history that has gotten us to where we are. We should learn from the mistakes made during the feed format wars and try to simplify things to not only move back, but to move forward at the same time.

Today, an easier, pared-down standard that is both better and simpler than either of these old and difficult specs is simply adding Microformats classes to HTML (aka P.O.S.H.) to create feeds. Unless one is relying on pre-existing infrastructure like WordPress, building and maintaining RSS feed infrastructure can be difficult at best, and updates almost never occur, particularly for specifications that support new social media related feeds including replies, likes, favorites, reposts, etc. The nice part is that if one knows how to write basic HTML, then one can create a simple feed by hand without having to learn the markup or specifics of RSS. Most modern feed readers (except perhaps Feedly) support these new h-feeds, as they’re known. Interestingly, some CMSes like WordPress support Microformats as part of their core functionality, though in WordPress’ case they only support a subset of Microformats v1 instead of the more modern v2.
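To make this concrete, here’s a minimal sketch (my own toy example, with invented content and URLs, not any particular reader’s code) of an h-feed: ordinary HTML carrying Microformats classes, plus a few lines of standard-library Python showing how little machinery a consumer needs to pull the entry titles back out. Real readers use full Microformats2 parsers rather than this simplified one.

```python
from html.parser import HTMLParser

# A hand-written feed: ordinary HTML plus h-feed/h-entry/p-name classes.
H_FEED = """
<div class="h-feed">
  <article class="h-entry">
    <h2 class="p-name">First post</h2>
    <a class="u-url" href="https://example.com/1">permalink</a>
  </article>
  <article class="h-entry">
    <h2 class="p-name">Second post</h2>
    <a class="u-url" href="https://example.com/2">permalink</a>
  </article>
</div>
"""

class HFeedTitles(HTMLParser):
    """Collect the text of every element carrying class="p-name"."""
    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_name = False

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "p-name" in classes:
            self._in_name = True

    def handle_endtag(self, tag):
        self._in_name = False

    def handle_data(self, data):
        if self._in_name and data.strip():
            self.titles.append(data.strip())

parser = HFeedTitles()
parser.feed(H_FEED)
print(parser.titles)  # ['First post', 'Second post']
```

The point is that the feed is the page itself: there’s no separate XML file to generate or keep in sync.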

For those like you who are looking both backward and simultaneously forward, there’s a nice chart of “Lost Infrastructure” on the IndieWeb wiki which was created following a post by Anil Dash entitled The Lost Infrastructure of Social Media. Hopefully we can take back a lot of the ground the web has lost to social media and refashion it for a better and more flexible future. I’m not looking for just a “hipster-web”, but a new and demonstrably better web.

The Lost Infrastructure of the Web from the IndieWeb Wiki (CC0)

Some of the desire to go back to RSS is built into the problems we’re looking at with respect to algorithmic filtering of our streams (we’re looking at you, Facebook). While algorithms might help to filter out some of the cruft we’re not looking for, we’ve been ceding too much control to third parties like Facebook who have different motivations in presenting us material to read. I’d rather my feeds were closer to the model of fine dining rather than the junk food that the-McDonald’s-of-the-internet Facebook is providing. As I’m reading Cathy O’Neil’s book Weapons of Math Destruction, I’m also reminded that the black box that is Facebook’s algorithm is causing scale and visibility/transparency problems like the Russian ad buys which could have potentially heavily influenced the 2016 election in the United States. The fact that we can’t see or influence the algorithm is both painful and potentially destructive. If I could have access to tweaking a third-party transparent algorithm, I think it would provide me a lot more value.
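As an illustration of the transparent, user-tweakable algorithm I’m wishing for, here’s a hypothetical sketch; the posts, signal fields, and weights are all invented for the example. What matters is that every term in the score is visible and under the reader’s control, the opposite of a black box.

```python
# Each post carries simple, inspectable signals.
posts = [
    {"title": "Deep dive on RSS", "from_friend": True,  "ad": False, "age_hours": 20},
    {"title": "Sponsored gadget", "from_friend": False, "ad": True,  "age_hours": 1},
    {"title": "Family photos",    "from_friend": True,  "ad": False, "age_hours": 3},
]

# The reader, not the platform, owns these knobs.
weights = {"from_friend": 5.0, "ad": -10.0, "recency": 2.0}

def score(post, w):
    """Higher is better; every term is visible to the reader."""
    s = 0.0
    if post["from_friend"]:
        s += w["from_friend"]
    if post["ad"]:
        s += w["ad"]
    s += w["recency"] / (1 + post["age_hours"])  # fresher posts score higher
    return s

feed = sorted(posts, key=lambda p: score(p, weights), reverse=True)
print([p["title"] for p in feed])
# ['Family photos', 'Deep dive on RSS', 'Sponsored gadget']
```

Crank the ad weight up or down and the feed reorders accordingly; there’s nothing to intuit and nothing hidden.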

As for OPML, it’s amazing what kind of power it has to help one find and subscribe to all sorts of content, particularly when it’s been hand curated and is continually self-dogfooded. Some of my favorite tools are readers that allow one to subscribe to the OPML feeds of others; that way if a person adds new feeds to an interesting collection, the changes propagate to everyone following that feed. With this kind of simple technology, those who are interested in curating things for particular topics (like the newsletter crowd) or even creating master feeds for class material in a planet-like fashion can easily do so. I can also see some worthwhile uses for this in journalism for newspapers and magazines. As an example, imagine if one could subscribe not only to 100 people writing about #edtech, but to only their bookmarked articles that have the tag edtech (thus filtering out their personal posts, or things not having to do with edtech). I don’t believe that Feedly supports subscribing to OPML (though it does support importing OPML files, which is subtly different), but other readers like Inoreader do.
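Part of OPML’s power is its simplicity: it’s just XML, so a few standard-library lines can list every feed in a subscription file. This is a hand-rolled sketch with an invented reading list and URLs, not any particular reader’s import code.

```python
import xml.etree.ElementTree as ET

OPML = """<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>edtech reading list</title></head>
  <body>
    <outline text="#edtech">
      <outline text="Example blog" type="rss"
               xmlUrl="https://example.com/feed.xml"/>
      <outline text="Another blog" type="rss"
               xmlUrl="https://example.org/rss"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(OPML)
# Every subscribable feed is an <outline> element carrying an xmlUrl attribute.
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
print(feeds)  # ['https://example.com/feed.xml', 'https://example.org/rss']
```

A reader that re-fetches the curator’s OPML file periodically gets the propagation behavior described above for free: new entries in the file become new subscriptions for every follower.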

I’m hoping to finish up some work on my own available OPML feeds to make subscribing to interesting curated content a bit easier within WordPress (over the built-in, but now deprecated, link manager functionality). Since you mentioned it, I tried checking out the OPML file on your blog hoping for something interesting in the #edtech space. Alas… 😉 Perhaps something in the future?


👓 Why We Terminated Daily Stormer | Cloudflare

Read Why We Terminated Daily Stormer by Matthew Prince (Cloudflare)
Earlier today, Cloudflare terminated the account of the Daily Stormer. We've stopped proxying their traffic and stopped answering DNS requests for their sites. We've taken measures to ensure that they cannot sign up for Cloudflare's services ever again. Our terms of service reserve the right for us to terminate users of our network at our sole discretion. The tipping point for us making this decision was that the team behind Daily Stormer made the claim that we were secretly supporters of their ideology. Our team has been thorough and have had thoughtful discussions for years about what the right policy was on censoring. Like a lot of people, we’ve felt angry at these hateful people for a long time but we have followed the law and remained content neutral as a network. We could not remain neutral after these claims of secret support by Cloudflare. Now, having made that decision, let me explain why it's so dangerous.

Some interesting implications for how the internet works as a result of this piece.


👓 How to See What the Internet Knows About You (And How to Stop It) | New York Times

Read How to See What the Internet Knows About You (And How to Stop It) (New York Times)
Welcome to the second edition of the Smarter Living newsletter.

I’m apparently the king of the microformat rel=”me”

More important however is the reason why I hold the title!

Today, at the IndieWeb Summit 2017, Ryan Barrett, while giving a presentation on some data research he’s been doing on the larger IndieWeb community, called me out for a ridiculous number of rel-me’s on a single page. His example cited me as having 177 of them! I tracked it down and it was actually an archive page that included the following post: How many social media related accounts can one person have on the web?!

What is a rel=”me”?

Rel=”me” is a microformat tag put on hyperlinks to indicate that the page linked to is another representation of the person who controls the site/page you’re currently looking at. Thus on my home page, the Facebook bug has a link to my Facebook account, which is another representation of me on the web, so it carries a rel=”me” tag.
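For the curious, here’s a toy sketch (emphatically not Ryan’s crawler) of how a tool can discover rel=”me” identity links on a page using only the Python standard library; the page fragment and account URLs are made up for the example.

```python
from html.parser import HTMLParser

PAGE = """
<a href="https://twitter.com/example" rel="me">Twitter</a>
<a href="https://facebook.com/example" rel="me">Facebook</a>
<a href="https://example.com/about">About</a>
"""

class RelMeFinder(HTMLParser):
    """Collect the href of every <a> whose rel attribute contains 'me'."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # rel is a space-separated token list, so split before checking.
        if tag == "a" and "me" in a.get("rel", "").split():
            self.links.append(a["href"])

finder = RelMeFinder()
finder.feed(PAGE)
print(finder.links)
# ['https://twitter.com/example', 'https://facebook.com/example']
```

Run against a page like my archive page, a counter like this is how one ends up with numbers like 177.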

His data is a bit old, as I now maintain a page entitled Social Media Accounts and Links with some (but far from all) of my disparate and diverse social media accounts. That page currently has 190 rel=”me”s on it! While there was one other example that had rel-mes pointing to every other internal page on the site (at 221, if I recall), I’m proud to say, without gaming the system in such a quirky way, that each and every one of my rel=”me” URLs is indeed a fully legitimate use of the tag.

I’m proud to be at the far end of the Zipf tail for this. And even more proud to be tagged as such during the week in which Microformats celebrates its 12th birthday. But for those doing research or who need edge cases of rel-me use, I’m also happy to serve as a unique test case. (If I’m not mistaken, I think my Google+ page broke one of Ryan’s web crawlers/tools in the past for a similar use-case a year or two ago).

The Moral of the Story

The takeaway from this seemingly crazy and obviously laughable example is simply how fragmented one’s online identity can become by using social silos. Even more interesting for some is the number of sites on that page which either no longer have links or which are crossed out, indicating that they no longer resolve. This means those sites and thousands more are now gone from the internet, and along with them all of the data they contained, not only for me but for thousands or even millions of other users.

This is one of the primary reasons that I’m a member of the Indieweb, have my own domain, and try to own all of my own data.

While it seemed embarrassing for a moment (yes, I could hear the laughter even in the live stream folks!), I’m glad Ryan drew attention to my rel-me edge case in part because it highlights some of the best reasons for being in the Indieweb.

(And by the way Ryan, thanks for a great presentation! I hope everyone watches the full video and checks out the new site/tool!)


Live Q&A: ownCloud contributors create Nextcloud

Watched Live Q&A: ownCloud contributors create Nextcloud from YouTube
Ask questions in a live Nextcloud Q&A Hangout with Frank Karlitschek and Jos Poortvliet, moderated by Bryan Lunduke at 18:00 PM Berlin/Amsterdam/Paris time, 10:00 AM Pacific time on June 2nd, 2016.


I invented the web. Here are three things we need to change to save it | Tim Berners-Lee | Technology | The Guardian

Read I invented the web. Here are three things we need to change to save it (the Guardian)
It has taken all of us to build the web we have, and now it is up to all of us to build the web we want – for everyone



A brief analogy of food culture and the internet

food:McDonalds:obesity :: internet:Facebook:intellectual laziness

tantek [10:07 AM]
I made a minor cassis.js auto_link bug fix that is unlikely to affect folks (involves a parameter to explicitly turn off embeds)
(revealed by my own posting UI, so selfdogfooding FTW)
selfdogfood++

tantek [10:10 AM]
/me realizes his upcoming events on his home page are out of date, again. manual hurts.

Tantek’s thoughts and the reference to selfdogfooding, while I’m thinking about food, make me think there’s an analogy between people who choose to eat at restaurants versus those who cook at home, and websites/content on the internet.

The IndieWeb is made of people who are “cooking” their websites at home. In some sense I hope we’re happier, healthier, and better/smarter communicators as a result, but it also makes me think about people who can’t afford to eat or afford internet access.

Are silos the equivalent of fast food? Are too many people consuming content that isn’t good for them and becoming intellectually obese? Would there be more thought and intention if there were more home chefs making and consuming content in smaller batches? Would it be more nutritious and mentally valuable?

I think there’s some value hiding in extending this comparison.


The Web Cryptography API is a W3C Recommendation | W3C News

Bookmarked The Web Cryptography API is a W3C Recommendation (W3C News)
The Web Cryptography Working Group has published a W3C Recommendation of the Web Cryptography API. This specification describes a JavaScript API for performing basic cryptographic operations in web applications, such as hashing, signature generation and verification, and encryption and decryption. Additionally, it describes an API for applications to generate and/or manage the keying material necessary to perform these operations. Uses for this API range from user or service authentication, document or code signing, and the confidentiality and integrity of communications.

h/t

Stop Publishing Web Pages | Anil Dash

Read Stop Publishing Web Pages (anildash.com)
Most users on the web spend most of their time in apps. The most popular of those apps, like Facebook, Twitter, Gmail, Tumblr and others, are primarily focused on a single, simple stream that offers a river of news which users can easily scroll through, skim over, and click on to read in more depth. Most media companies on the web spend all of their effort putting content into content management systems which publish pages. These pages work essentially the same way that pages have worked since the beginning of the web, with a single article or post living at...


Free Web Development & Performance Ebooks

Bookmarked Free Web Development & Performance Ebooks (oreilly.com)
The Web grows every day. Tools, approaches, and styles change constantly, and keeping up is a challenge. We've compiled the best insights from subject matter experts for you in one place, so you can dive deep into the latest of what's happening in web development.

Chris Aldrich is reading “Maybe the Internet Isn’t a Fantastic Tool for Democracy After All”

Read Maybe the Internet Isn’t a Fantastic Tool for Democracy After All by Max Read (Select All)
Fake news is the easiest of the problems to fix.

…a new set of ways to report and share news could arise: a social network where the sources of articles were highlighted rather than the users sharing them. A platform that makes it easier to read a full story than to share one unread. A news feed that provides alternative sources and analysis beneath every shared article.

This sounds like the kind of platforms I’d like to have. Reminiscent of some of the discussion at the beginning of This Week in Google: episode 379 Ixnay on the Eet-tway.

I suspect that some of the recent coverage of “fake news” and how it’s being shared on social media has prompted me to begin using Reading.am, a bookmarking-esque service that commands users to:

Share what you’re reading. Not what you like. Not what you find interesting. Just what you’re reading.

Naturally, in IndieWeb fashion, I’m also posting these read articles to my site. While bookmarks are things that I would implicitly like to read in the near future (rather than “Christmas ornaments” I want to impress people with on my “social media Christmas tree”), there’s a big difference between them and things that I’ve actually read through and thought about.

I always feel like many of my family, friends, and the general public click “like” or “share” on articles in social media without actually having read them from top to bottom. Research would generally suggest that I’m not wrong. [1] [2] Some argue that the research needs to be more subtle too. [3] I generally refuse to participate in this type of behavior if I can avoid it.

Some portion of what I physically read isn’t shared, but at least those things marked as “read” here on my site are things that I’ve actually gone through the trouble to read from start to finish. When I can, I try to post a few highlights I found interesting along with any notes/marginalia (lately I’m loving the service Hypothes.is for doing this) on the piece to give some indication of its interest. I’ll also often try to post some of my thoughts on it, as I’m doing here.

Gauging Intent of Social Signals

I feel compelled to mention here that on some platforms like Twitter, I don’t generally use the “like” functionality to indicate that I’ve actually liked a tweet itself or any content that’s linked to in it. In fact, I’ve often not read anything related to the tweet but the simple headline presented in the tweet itself.

The majority of the time I’m liking/favoriting something on Twitter, it’s because I’m using an IFTTT.com applet which takes the tweets I “like” and saves them to my Pocket account, where I come back to them later to read. It’s not the case that I actually read everything in my Pocket queue, but those that I do read will generally appear on my site.

There are, however, some extreme cases in which pieces of content are a bit beyond the pale for indicating a like, and in those cases I won’t do so, but will manually add them to my reading queue. For some this may create some grey area about my intent when viewing things like my Twitter likes. Generally I’d recommend people view that feed as a generic linkblog of sorts. On Twitter, I far preferred the nebulous star indicator over the current heart for indicating how I used and continue to use that bit of functionality.

I’ll also mention that I sometimes use the like/favorite functionality on some platforms to indicate to respondents that I’ve seen their post/reply. This type of usage could also be viewed as a digital “Thank You”, “hello”, or even “read receipt” of sorts, since I know that the “like” intent is pushed into their notifications feed. I suspect that most recipients receive these intents as I intend them, though the Twitter platform isn’t designed for this specifically.

I wish there were a better way for platforms and their readers to know exactly what the intent of the user was rather than trying to intuit it. It would be great if Twitter allowed users multiple options under each tweet to better indicate whether their intent was to bookmark, like, or favorite it, or to indicate that they actually read/watched the content on the other end of the link in the tweet.

In true IndieWeb fashion, because I can put these posts on my own site, I can directly control not only what I post, but I can be far more clear about why I’m posting it and give a better idea about what it means to me. I can also provide footnotes to allow readers to better see my underlying sources and judge for themselves their authenticity and actual gravitas. As a result, hopefully you’ll find no fake news here.

Of course some of the ensuing question is: “How does one scale this type of behaviour up?”

References

[1]
M. Gabielkov, A. Ramachandran, A. Chaintreau, and A. Legout, “Social Clicks: What and Who Gets Read on Twitter?,” SIGMETRICS Perform. Eval. Rev., vol. 44, no. 1, pp. 179–192, Jun. 2016 [Online]. Available: http://doi.acm.org/10.1145/2964791.2901462
[2]
C. Dewey, “6 in 10 of you will share this link without reading it, a new, depressing study says,” Washington Post, 16-Jun-2016. [Online]. Available: https://www.washingtonpost.com/news/the-intersect/wp/2016/06/16/six-in-10-of-you-will-share-this-link-without-reading-it-according-to-a-new-and-depressing-study/. [Accessed: 06-Dec-2016]
[3]
T. Cigelske, “Why It’s OK to Share This Story Without Reading It,” MediaShift, 24-Jun-2016. [Online]. Available: http://mediashift.org/2016/06/why-its-ok-to-share-this-story-without-reading-it/. [Accessed: 06-Dec-2016]