🎧 episode 12: Kleos and Nostos | Literature and History

Listened to episode 12: Kleos and Nostos by Doug Metzger from literatureandhistory.com
The Odyssey, Part 1 of 3. Adventure, monsters, temptresses, and a whole lot of wine-dark Aegean. Learn all about the world of Homer’s Odyssey.

A dramatically different type of story is told here than in the Iliad.
Replied to a post by Jörg Wurzer (jwurzer.micro.blog)
I’m struggling with micro.blog. I have tried it since it was started after the Kickstarter campaign. Unfortunately it’s not possible to get in contact with people or to reach any audience. I think the conceptual problem of micro.blog is that I can’t search for interesting people and posts. Maybe it’s time to say goodbye to micro.blog. My hope was to have an alternative to Twitter, without censorship and manipulation.
@jwurzer I recall that @macgenie had a good piece called Where Discover Doesn’t Help that may also be useful to you. I had responded to it with some related ideas around Micro Monday. Another good place to find people is to visit the micro.blog profile pages of people you do find interesting and then click through the “Following XYZ users you aren’t following” to see people who may be similar.

To some extent, just as you did with Twitter and all your other social networks, you’ll likely have to (re-)build and discover your audience and the people you want to interact with. The nice part is that it’s built on open protocols, so as more sites and services support them, you’ll be able to interact from one place instead of the typical four or more.

Personally, while I heavily leverage m.b. and its many discovery aspects, I do it with my own feed reader, where I pick and choose whom I follow (whether they’re on Twitter, Instagram, micro.blog, or their own site) and then read them all there. I then use my own website to collect, write, respond, and interact. It’s taken me a while to reframe how I use the social layers of the internet, but ultimately I find it much healthier and more rewarding.

🔖 The Resurrection of Flinders Petrie | electricarchaeology.ca

Bookmarked The Resurrection of Flinders Petrie by Shawn Graham (electricarchaeology.ca)
The following is an extended excerpt from my book-in-progress, “An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence”, which is under contract with Berghahn Books, New York, and is to see the light of day in the summer of 2020. I welcome your thoughts. The final form of this section will no doubt change by the time I get through the entire process. I use the term ‘golems’ earlier in the book to describe the agents of agent based modeling, which I then translate into archaeogames, which then I muse might be powered by neural network models of language like GPT-2.

🔖 Notes from the quest factory | Robin Sloan

Bookmarked Notes from the quest factory by Robin Sloan (Year of the Meteor)
Tools and techniques related to AI text generation. I wrote this for like twelve people. Recently, I used an AI trained on fantasy novels to generate custom stories for about a thousand readers. The stories were appealingly strange, they came with maps (MAPS!), and they looked like this:

🔖 GLTR: Statistical Detection and Visualization of Generated Text | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Bookmarked GLTR: Statistical Detection and Visualization of Generated Text by Sebastian Gehrmann (Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations (aclweb.org) [.pdf])

The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, a tool to support humans in detecting whether a text was generated by a model. GLTR applies a suite of baseline statistical methods that can detect generation artifacts across common sampling schemes. In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs.

Pages 111–116; Florence, Italy, July 28–August 2, 2019. Association for Computational Linguistics.
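
The core idea is simple enough to sketch: run a language model over a text and ask, at each position, where the actual word ranks in the model’s predicted distribution. Sampled model output clusters heavily in the top ranks, while human prose reaches deeper into the tail. Below is a toy illustration of my own (not the authors’ code), with a tiny bigram model standing in for the large language model the real tool uses:

```python
# Toy sketch of GLTR's rank-based detection idea (my own illustration,
# not the GLTR codebase): score each word of a text by its rank under
# a language model's prediction for that position.
from collections import Counter, defaultdict

def train_bigram_model(tokens):
    """Count, for each word, how often every other word follows it."""
    follows = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        follows[prev][cur] += 1
    return follows

def token_ranks(model, tokens):
    """Rank of each actual next word under the model (1 = most expected)."""
    ranks = []
    for prev, cur in zip(tokens, tokens[1:]):
        ordered = [w for w, _ in model[prev].most_common()]
        ranks.append(ordered.index(cur) + 1 if cur in ordered else None)
    return ranks

corpus = "the cat sat on the mat and the cat slept on the mat".split()
model = train_bigram_model(corpus)
text = "the cat sat on the mat".split()
for word, rank in zip(text[1:], token_ranks(model, text)):
    # GLTR buckets these ranks into colored overlays (top 10, top 100,
    # top 1000); here we simply print them.
    print(f"{word!r}: rank {rank}")
```

The real tool does this with GPT-2 itself and turns the rank buckets into the colored annotation overlay that helped raise the human detection rate in the study.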

👓 Humane Ingenuity 9: GPT-2 and You | Dan Cohen | Buttondown

Read Humane Ingenuity 9: GPT-2 and You by Dan Cohen (buttondown.email)
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.

For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.

This isn’t a very difficult problem, and its underpinnings are laid out well by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*, which has a lot of interesting tidbits about language and structure from an engineering perspective, including the reason crossword puzzles work.
November 13, 2019 at 08:33AM
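
Pierce’s word-prediction point is easy to demonstrate. Here is a toy sketch of my own (not from Pierce or OpenAI): a word-level Markov chain that learns which words tend to follow which, then generates text by repeatedly sampling a likely successor, the same principle GPT-2 scales up by many orders of magnitude:

```python
# Toy word-level Markov chain: learn next-word frequencies from a sample
# text, then generate a plausible continuation by weighted sampling.
import random
from collections import Counter, defaultdict

def build_chain(text):
    """Map each word to a frequency table of the words that follow it."""
    words = text.split()
    chain = defaultdict(Counter)
    for prev, cur in zip(words, words[1:]):
        chain[prev][cur] += 1
    return chain

def generate(chain, seed, length=12):
    """Extend a seed word by repeatedly sampling a likely successor."""
    out = [seed]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        words, weights = zip(*successors.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

sample = ("the wine-dark sea carried the ship and "
          "the ship carried the men across the wine-dark sea")
print(generate(build_chain(sample), "the"))
```

Even this crude model produces locally plausible word sequences; GPT-2’s advance is conditioning on far more context than a single preceding word.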

The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

Circle back around and read this when it comes out.

Similarly, these other references should be an interesting read as well.
November 13, 2019 at 08:36AM
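
Both of the experiments described above follow the same broad recipe: fine-tune a pretrained GPT-2 on a small, distinctive corpus, then generate from seed prompts. Neither write-up is excerpted here in enough detail to reproduce its exact setup, but a common approach at the time used Max Woolf’s gpt-2-simple package; a hypothetical sketch, with petrie.txt standing in for whatever corpus you’ve assembled:

```python
# Hypothetical sketch of the narrow-corpus recipe using the gpt-2-simple
# package; "petrie.txt", the step count, and the prompt are illustrative
# placeholders, not details taken from either post.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")   # fetch the small GPT-2 weights

sess = gpt2.start_tf_sess()
gpt2.finetune(sess,
              dataset="petrie.txt",     # the narrow, colorful corpus
              model_name="124M",
              steps=1000)               # illustrative training length

# "Resurrect" the author: seed generation with a conversational prompt.
gpt2.generate(sess, prefix="On the excavation of the temple, I found")
```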

From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.

And it’s not just happening with text; it also happens with speech, as I’ve written before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In that case, looking at transcripts actually helps reveal that the emperor has no clothes: so much is missing from the speech itself that the text can’t fill in the gaps the way the live delivery did.
November 13, 2019 at 08:43AM

👓 Newsletter: IndieWebCamp | Micro Monday

Read Newsletter: IndieWebCamp (monday.micro.blog)
IndieWebCamp Austin will be February 22-23, 2020. Register now for just $10 for the weekend: IndieWebCamp Austin 2020 is a gathering for independent web creators of all kinds, from graphic artists and designers to UX engineers, coders, and hackers, to share ideas and actively work on creating for their ...

🎧 Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense

Listened to Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense by Sean Carroll from preposterousuniverse.com

Artificial intelligence is better than humans at playing chess or go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.

Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.

One of the more interesting interviews with Dr. Mitchell about her excellent new book. Dr. Carroll gets the space she’s working in and is able to have a more substantive conversation as a result.
Followed Shawn Graham (electricarchaeology.ca)


it's not just digital, it's electric!

I’m an associate prof in the Department of History at Carleton University. My Google Scholar page is here. In 2016 I was named a Carleton University Teaching Fellow, and I was a recipient of the Provost’s Fellowship in Teaching Award. I teach in the public history and digital humanities programmes (I’m also cross-appointed to Greek and Roman Studies). If you’re interested in doing an MA or PhD with me, get in touch. I may have some funding to support you.

My github account is littered with repos I’ve forked from other people because they were/are interesting. I led the Open Digital Archaeology Textbook Environment project; I’m currently researching the trade in human remains. I’m also starting some work in computational creativity with legacy archaeological data. I co-wrote The Historian’s Macroscope. I have a book on ‘failing gloriously’ and another one on practical digital necromancy coming out in the next year.

I always welcome email from interested folks: shawn / dot / graham /at/ carleton /dot/ ca