👓 Newsletter: IndieWebCamp | Micro Monday

Read Newsletter: IndieWebCamp (monday.micro.blog)
IndieWebCamp Austin will be February 22-23, 2020. Register now for just $10 for the weekend: IndieWebCamp Austin 2020 is a gathering for independent web creators of all kinds, from graphic artists, to designers, UX engineers, coders, hackers, to share ideas, actively work on creating for their ...

👓 Humane Ingenuity 9: GPT-2 and You | Dan Cohen | Buttondown

Read Humane Ingenuity 9: GPT-2 and You by Dan Cohen (buttondown.email)
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.

For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.
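
As a concrete illustration of that "plausible continuation" behavior, here's a minimal sketch that continues a seed prompt with the publicly released GPT-2 weights. It assumes the Hugging Face transformers library (my choice of tooling, not something the newsletter specifies), and the seed text is made up:

```python
# A minimal sketch: continue a seed prompt with GPT-2.
# Assumes the Hugging Face `transformers` library (`pip install transformers torch`).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The seed text here is invented purely for illustration.
seed = "The archaeologist opened the tomb and found"
continuations = generator(seed, max_length=60, do_sample=True, num_return_sequences=3)

for c in continuations:
    print(c["generated_text"])
    print("---")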

This isn’t a very difficult problem, and its underpinnings are well laid out by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*. In it he offers a lot of interesting tidbits about language and structure from an engineering perspective, including the reason why crossword puzzles work.
November 13, 2019 at 08:33AM
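
Pierce's point, that text can be modeled as words statistically following other words, can be demonstrated with something far smaller than GPT-2. Here is a toy bigram sketch of my own; the training text and seed word are invented for illustration:

```python
# A toy bigram ("which word tends to follow which") text generator.
# Purely illustrative; the training text and seed word are made up.
import random
from collections import defaultdict

text = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog around the mat"
).split()

# Count which words follow which in the training text.
followers = defaultdict(list)
for current, nxt in zip(text, text[1:]):
    followers[current].append(nxt)

def continue_from(seed, length=10):
    """Extend a seed word by repeatedly sampling a word that followed it in training."""
    words = [seed]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(continue_from("the"))
```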

The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

Circle back around and read this when it comes out.

Similarly, these other references should be an interesting read as well.
November 13, 2019 at 08:36AM

From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.

And it’s not just happening with text; it also happens with speech, as I’ve written about before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In fact, in that case, looking at transcripts actually helps reveal that the emperor had no clothes, because so much is missing from the speech itself that the text on the page can’t fill in the gaps the way the live delivery did.
November 13, 2019 at 08:43AM

🔖 GLTR (glitter) v0.5

Bookmarked GLTR from MIT-IBM Watson AI Lab and HarvardNLP (gltr.io)
This demo enables forensic inspection of the visual footprint of a language model on input text to detect whether a text could be real or fake.

🔖 GLTR: Statistical Detection and Visualization of Generated Text | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Bookmarked GLTR: Statistical Detection and Visualization of Generated Text by Sebastian Gehrmann (Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations (aclweb.org) [.pdf])

The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, a tool to support humans in detecting whether a text was generated by a model. GLTR applies a suite of baseline statistical methods that can detect generation artifacts across common sampling schemes. In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs.

Pages 111–116; Florence, Italy, July 28–August 2, 2019. Association for Computational Linguistics.
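
The core signal GLTR visualizes can be approximated in a few lines: run a text back through a language model and ask, at each position, how highly the model ranked the word that actually appears. Machine-generated text tends to sit almost entirely in the model's top choices, while human prose regularly reaches for lower-ranked words. Below is a rough sketch of that per-token rank check using GPT-2 via the Hugging Face transformers library; it is not the GLTR codebase itself, just an approximation of the idea:

```python
# Rough per-token rank analysis in the spirit of GLTR; not the actual GLTR implementation.
# Assumes `pip install transformers torch`.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    """For each token, report how highly GPT-2 ranked it given the preceding text."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids)[0]  # shape: (1, seq_len, vocab_size)
    ranks = []
    for t in range(1, ids.size(1)):
        scores = logits[0, t - 1]          # model's scores for the token at position t
        actual = int(ids[0, t])
        rank = int((scores > scores[actual]).sum()) + 1  # 1 = the model's top choice
        ranks.append((tokenizer.decode([actual]), rank))
    return ranks

# Example sentence is made up; low ranks throughout would suggest machine-like text.
for token, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
    print(f"{token!r}: rank {rank}")
```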

🔖 Notes from the quest factory | Robin Sloan

Bookmarked Notes from the quest factory by Robin Sloan (Year of the Meteor)
Tools and techniques related to AI text generation. I wrote this for like twelve people. Recently, I used an AI trained on fantasy novels to generate custom stories for about a thousand readers. The stories were appealingly strange, they came with maps (MAPS!), and they looked like this:

🔖 The Resurrection of Flinders Petrie | electricarchaeology.ca

Bookmarked The Resurrection of Flinders Petrie by Shawn Graham (electricarchaeology.ca)
The following is an extended excerpt from my book-in-progress, “An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence”, which is under contract with Berghahn Books, New York, and is to see the light of day in the summer of 2020. I welcome your thoughts. The final form of this section will no doubt change by the time I get through the entire process. I use the term ‘golems’ earlier in the book to describe the agents of agent based modeling, which I then translate into archaeogames, which then I muse might be powered by neural network models of language like GPT-2.

👓 Werner Herzog on ‘The Mandalorian’ and Why He Hasn’t Seen ‘Star Wars’ | Variety

Read Werner Herzog on Why He Didn’t Need to See ‘Star Wars’ Films for ‘The Mandalorian’ Role (Variety)
At first sight, playing a vital character in Jon Favreau’s “The Mandalorian,” Disney’s live-action “Star Wars” series, which the studio is using to launch its ambitious streaming venture, might appear to be an odd move for Werner Herzog.

Do you watch any television?
I do, I watch the news from different sources. Sometimes I see things that are completely against my cultural nature. I was raised with Latin and Ancient Greek and poetry from Greek antiquity, but sometimes, just to see the world I live in, I watch “WrestleMania.”

WrestleMania! This has to be the quote of the year from Werner Herzog.
November 12, 2019 at 10:35AM

❤️ vboykis tweeted I can forgive Twitter for stuff like,,,destroying the free world and inciting cancel culture

Liked a tweet by Vicki Boykis on Twitter (Twitter)

Vicki, I’m sure you mentioned it purely for your awesome and inimitable snark and you’re obviously otherwise aware… but for everyone else who’s suffering:

Why not keep your avatar on a website you own and control? If it’s at a permalink you control, you can even replace the photo, and anyone who hotlinks/transcludes it will automatically get the updated version over time. As an example, I keep one of me at https://www.boffosocko.com/logo.jpg. Having a permalink to my own avatar was the only reason I got a website, and now look what I’ve gotten myself into…

If I recall correctly, when you delete or replace those Twitter avatars, the old links go dead and they generate a new link anyway.

👓 Broadcast, cable news networks to preempt regular programming for Trump impeachment coverage | The Hill

Read Broadcast, cable news networks to preempt regular programming for Trump impeachment coverage (TheHill)
ABC, CBS, NBC and PBS on Wednesday will preempt their regularly scheduled programming for live coverage of the House Intelligence Committee's open impeachment hearings of President Trump.

❤️ dimensionmedia tweeted Armchair WordCampers: Discover #WordPress friends in California w/ @WordCampRS

Liked a tweet by David Bisset on Twitter (Twitter)

👓 Lt. Colonel Vindman Fired | Daily Kos

Read Lt. Colonel Vindman Fired (dailykos.com)

What bugs me even more than the firing of Vindman for just doing his job, protecting the national security of the U.S., is the continued gaslighting, saying that the firing was not retaliation, but just a routine personnel move.

This is so patently a lie that one would think O’Brien would be ashamed to let it out of his mouth.

But you check your integrity at the door to stay in the employ of the Orange Mousseolini.

More likely he was retasked, but still a retaliatory move…

👓 Limits, schlimits: It’s time to rethink how we teach calculus | Ars Technica

Read Limits, schlimits: It’s time to rethink how we teach calculus by Jennifer Ouellette (Ars Technica)
Ars chats with math teacher Ben Orlin about his book Change Is the Only Constant.

Finally, I decided to build it around all my favorite stories that touched on calculus, stories that get passed around in the faculty lounge, or the things that the professor mentions off-hand during a lecture. I realized that all those little bits of folklore tapped into something that really excited me about calculus. They have a time-tested quality to them where they’ve been told and retold, like an old folk song that has been sharpened over time.

And this is roughly how memory and teaching have always worked: stories and repetition.
November 11, 2019 at 09:56AM

👓 When You Give A Mouse A Domain | Greg McVerry

Bookmarked When You Give A Mouse A Domain by Greg McVerry (mouseadomain.glitch.me)
She'll want a website to go with it

A slick homage to Laura Numeroff’s children’s book If You Give a Mouse a Cookie. Great job Greg! This is hilarious.

👓 Unfollowing everyone on Twitter | Ryan Barrett

Read a post by Ryan Barrett (snarfed.org)

A few days ago, I unfollowed everyone on Twitter, added them all to a list, and I now read that list instead. It’s shockingly better. Only their own tweets and retweets, in order. No ads, no "liked by," no "people you may know," no engagement hacking crap. It’s glorious.

Even better, when I inevitably end up in the home timeline anyway, it only has my own tweets and ads, nothing interesting. No dopamine outrage bullshit cycle to get caught up in.

Shh, don’t tell, I’m afraid some low level product manager at Twitter will discover this and "fix" lists like they "fixed" the home timeline a while back.

There are a couple drawbacks. I lost a few people I followed whose accounts are protected; I need to find and re-follow them. Also this evidently makes it harder for people to DM me, somehow. Not sure how, I don’t use Twitter DM much.

Still. Glorious.

This is pretty inspiring. Thinking about doing it myself, though I’ll have to be careful about private accounts so I don’t unfollow them. I do also wish that feed readers had a better way to display Tweets.
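
For anyone wanting to automate the same move, the gist against Twitter's old v1.1 REST API might look roughly like the sketch below. It uses the tweepy library with placeholder credentials, is not what Ryan actually ran, and doesn't address the protected-account caveat he and I both mention:

```python
# Rough sketch of "move everyone I follow into a private list, then unfollow them".
# Uses tweepy against Twitter's v1.1 API; all credentials below are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Collect everyone currently followed.
friend_ids = list(tweepy.Cursor(api.friends_ids).items())

# Put them all on a private list to read instead of the home timeline.
reading_list = api.create_list(name="Everyone", mode="private")
for uid in friend_ids:
    api.add_list_member(list_id=reading_list.id, user_id=uid)

# Then unfollow them all.
for uid in friend_ids:
    api.destroy_friendship(user_id=uid)
```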