Month: November 2019
📺 "The Great British Baking Show" The Great Christmas Bake Off | Netflix
Directed by Jeanette Goulbourn. With Noel Fielding, Paul Hollywood, Prue Leith, Sandi Toksvig.
To some extent, just like you did with Twitter and all your other social networks, you’ll likely have to (re-)“build” and “discover” your audience and the people you want to interact with. The nice part is that it’s all built on open protocols, so as more and more sites and services support them, you’ll be able to interact from one place instead of the typical four or more.
Personally, while I lean heavily on micro.blog and its many discovery features, I do it with my own feed reader, where I pick and choose whom I follow (whether they’re on Twitter, Instagram, micro.blog, or their own sites) and then read them all there. Then I use my own website to collect, write, respond, and interact. It’s taken me a while to reframe how I use the social layers of the internet, but ultimately I find it much healthier and more rewarding.
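The “open protocols” in question are the IndieWeb building blocks (Webmention for cross-site replies, Micropub for posting, Microsub for reading), several of which micro.blog already supports. As a concrete taste, here’s a minimal sketch of sending a Webmention in Python; the URLs are placeholders, and a production sender would handle more edge cases than this:

```python
# Minimal Webmention send, per the W3C spec: discover the target's
# advertised endpoint, then POST source/target as form-encoded data.
# URLs below are placeholders; a robust sender would also handle
# redirects, relative rel values in odd places, and error responses.
import requests
from bs4 import BeautifulSoup

def discover_endpoint(target):
    """Find the Webmention endpoint advertised by the target page."""
    resp = requests.get(target, timeout=10)
    # Check the HTTP Link header first, then fall back to the HTML.
    if "webmention" in resp.links:
        return requests.compat.urljoin(target, resp.links["webmention"]["url"])
    soup = BeautifulSoup(resp.text, "html.parser")
    link = soup.find(["link", "a"], rel="webmention")
    return requests.compat.urljoin(target, link["href"]) if link else None

def send_webmention(source, target):
    """Notify the target page that the source page links to it."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        return None
    return requests.post(endpoint, data={"source": source, "target": target})

# e.g. send_webmention("https://example.com/my-reply",
#                      "https://example.org/their-post")
```

The appeal of the design is that any site advertising an endpoint can receive replies from any other site, which is exactly what makes “interacting from one place” possible.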
🔖 The Resurrection of Flinders Petrie | electricarchaeology.ca
The following is an extended excerpt from my book-in-progress, “An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence”, which is under contract with Berghahn Books, New York, and should see the light of day in the summer of 2020. I welcome your thoughts. The final form of this section will no doubt change by the time I get through the entire process. Earlier in the book I use the term ‘golems’ to describe the agents of agent-based modeling, which I then translate into archaeogames, which, I muse, might be powered by neural-network models of language like GPT-2.
🔖 Notes from the quest factory | Robin Sloan
Tools and techniques related to AI text generation. I wrote this for like twelve people. Recently, I used an AI trained on fantasy novels to generate custom stories for about a thousand readers. The stories were appealingly strange, they came with maps (MAPS!), and they looked like this:
🔖 GLTR: Statistical Detection and Visualization of Generated Text | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, a tool to support humans in detecting whether a text was generated by a model. GLTR applies a suite of baseline statistical methods that can detect generation artifacts across common sampling schemes. In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs.
Pages 111–116; Florence, Italy, July 28–August 2, 2019. Association for Computational Linguistics.
🔖 GLTR (glitter) v0.5
This demo enables forensic inspection of the visual footprint of a language model on input text to detect whether a text could be real or fake.
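The statistic behind those visual footprints is straightforward to approximate: run the text through a language model and record, for each token, the rank the model assigned it among all candidate next tokens. Machine-generated text skews heavily toward top-ranked tokens; human prose reaches deeper into the distribution. A rough sketch of that idea using GPT-2 via the Hugging Face transformers library (an approximation of the method, not the GLTR codebase itself):

```python
# Rough sketch of GLTR's per-token rank statistic using GPT-2 via the
# Hugging Face transformers library. This approximates the idea; it is
# not the actual GLTR implementation.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_ranks(text):
    """For each token, the rank the model gave it among all candidates."""
    ids = tokenizer.encode(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    for pos in range(ids.shape[1] - 1):
        next_id = ids[0, pos + 1].item()
        # Rank of the actual next token in the model's sorted predictions.
        order = torch.argsort(logits[0, pos], descending=True)
        ranks.append((order == next_id).nonzero().item() + 1)
    return list(zip(tokenizer.convert_ids_to_tokens(ids[0].tolist())[1:], ranks))

# Generated text tends to show mostly small ranks (top-10/top-100);
# human prose shows many more large ranks.
for tok, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
    print(f"{tok!r}: rank {rank}")
```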
👓 Humane Ingenuity 9: GPT-2 and You | Dan Cohen | Buttondown
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.
For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.
This isn’t a very difficult problem, and its underpinnings are laid out well by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*, which is full of interesting tidbits about language and structure from an engineering perspective, including the reason crossword puzzles work. A toy version of the idea follows below.
November 13, 2019 at 08:33AM
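Pierce’s word-statistics demonstrations are small enough to reproduce in a few lines. A toy bigram generator (standard library only; the corpus here is a stand-in) shows how far “which words follow which” gets you, and why this kind of output feels locally plausible but globally aimless:

```python
# Toy bigram (order-1 Markov) text generator in the spirit of Pierce's
# word-statistics demonstrations: learn which words follow which, then
# sample a chain. Standard library only; the corpus is a stand-in.
import random
from collections import defaultdict

def train(corpus):
    """Map each word to the list of words observed to follow it."""
    successors = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        successors[a].append(b)
    return successors

def generate(successors, seed, length=20):
    """Random-walk the successor table, starting from a seed word."""
    out = [seed]
    for _ in range(length - 1):
        choices = successors.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
table = train(corpus)
print(generate(table, seed="the"))
# e.g. "the cat sat on the rug" -- locally plausible, globally aimless.
# GPT-2 performs the same basic trick at vastly larger scale and context.
```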
The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, *An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence*, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853–1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.
Circle back around and read this when it comes out.
Similarly, these other references should make for an interesting read as well; a sketch of the general fine-tuning recipe behind them follows below.
November 13, 2019 at 08:36AM
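Both of those experiments follow the same general recipe: start from the pretrained GPT-2 weights, fine-tune on a small, flavorful corpus, then generate from a seed prompt. Here’s a sketch of that workflow using Max Woolf’s gpt-2-simple package, a common 2019-era route; the corpus file and prompt are placeholders, and I don’t claim either author used exactly this tooling:

```python
# Sketch of the 2019-era GPT-2 fine-tuning recipe using gpt-2-simple
# (https://github.com/minimaxir/gpt-2-simple). "petrie.txt" and the
# prompt below are placeholders, not the corpora the authors used.
import gpt_2_simple as gpt2

model_name = "124M"                   # smallest public GPT-2 checkpoint
gpt2.download_gpt2(model_name=model_name)

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="petrie.txt",   # plain-text training corpus
              model_name=model_name,
              steps=1000)             # more steps = closer to the corpus

# Generate a continuation of a seed prompt in the fine-tuned voice.
gpt2.generate(sess,
              prefix="On the excavation of the temple, ",
              length=200,
              temperature=0.7)
```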
From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.
And it’s not just happening with text; it also happens with speech, as I’ve written before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In fact, in that case, looking at transcripts actually helps reveal that the emperor has no clothes: so much is missing from the speech itself that the bare text can’t paper over the gaps the way the live delivery did.
November 13, 2019 at 08:43AM
👓 Newsletter: IndieWebCamp | Micro Monday
IndieWebCamp Austin will be February 22–23, 2020. Register now for just $10 for the weekend: IndieWebCamp Austin 2020 is a gathering for independent web creators of all kinds, from graphic artists to designers, UX engineers, coders, and hackers, to share ideas and actively work on creating for their …
🎧 Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense
Artificial intelligence is better than humans at playing chess or Go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.
Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.