👓 Isoroku Yamamoto | Wikipedia

Read Isoroku Yamamoto (Wikipedia)

Isoroku Yamamoto (山本 五十六 Yamamoto Isoroku, April 4, 1884 – April 18, 1943) was a Japanese Marshal Admiral of the Imperial Japanese Navy (IJN) and the commander-in-chief of the Combined Fleet during World War II until his death.

Yamamoto held several important posts in the IJN, and undertook many of its changes and reorganizations, especially its development of naval aviation. He was the commander-in-chief during the early years of the Pacific War and oversaw major engagements including the attack on Pearl Harbor and the Battle of Midway. He was killed when American code breakers identified his flight plans, enabling the United States Army Air Forces to shoot down his plane. His death was a major blow to Japanese military morale during World War II.

👓 Old Pasadena building sells for $12.55 million | Pasadena Star News

Read Old Pasadena building sells for $12.55 million (Pasadena Star News)
The property had three shops on the ground floor and office space above them.
Originally bookmarked on 11/13/19 at 5:13 PM.

👓 An Apple store employee ‘helped’ a customer — by texting himself an intimate photo from her phone | Washington Post

Read An Apple store employee ‘helped’ a customer — by texting himself an intimate photo from her phone (Washington Post)
In a statement, Apple said the employee was "no longer associated with our company."
I’m surprised that this isn’t reported more often… I can only imagine the thousands of cases that aren’t noticed or reported.

🎧 Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense

Listened to Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense by Sean Carroll from preposterousuniverse.com

Artificial intelligence is better than humans at playing chess or go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.

Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.

One of the more interesting interviews of Dr. Mitchell with respect to her excellent new book. Dr. Carroll gets the space she’s working in and is able to have a more substantive conversation as a result.

👓 Newsletter: IndieWebCamp | Micro Monday

Read Newsletter: IndieWebCamp (monday.micro.blog)
IndieWebCamp Austin will be February 22-23, 2020. Register now for just $10 for the weekend: IndieWebCamp Austin 2020 is a gathering for independent web creators of all kinds, from graphic artists, to designers, UX engineers, coders, hackers, to share ideas, actively work on creating for their ...

👓 Humane Ingenuity 9: GPT-2 and You | Dan Cohen | Buttondown

Read Humane Ingenuity 9: GPT-2 and You by Dan Cohen (buttondown.email)
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.

For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.

This isn’t a very difficult problem, and its underpinnings are well laid out by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*. In it he has a lot of interesting tidbits about language and structure from an engineering perspective, including the reason why crossword puzzles work.
November 13, 2019 at 08:33AM
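
To make the “words likely to follow other words” idea concrete, here is a minimal sketch of the kind of n-gram (Markov chain) text generation Pierce, following Shannon, describes. It is only a toy stand-in for GPT-2’s far larger neural language model; the tiny corpus and function names are purely illustrative.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(model, seed, length=20):
    """Starting from a seed word, repeatedly sample a likely next word."""
    word = seed
    output = [word]
    for _ in range(length):
        candidates = model.get(word)
        if not candidates:
            break  # no observed continuation for this word
        next_words = list(candidates.keys())
        weights = list(candidates.values())
        word = random.choices(next_words, weights=weights, k=1)[0]
        output.append(word)
    return " ".join(output)

# A tiny illustrative corpus; a real model trains on billions of words.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat chased the dog across the mat"
)
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

The output is locally plausible but quickly wanders, which is roughly the gap between these simple statistical approximations of English and the longer-range coherence GPT-2 manages.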

The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

Circle back around and read this when it comes out.

Similarly, these other references should be an interesting read as well.
November 13, 2019 at 08:36AM

From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.

And it’s not just happening with text; it also happens with speech, as I’ve written before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In fact, in that case, looking at transcripts actually helps reveal that the emperor had no clothes, because so much is missing from the speech that the text alone can’t fill in the gaps the way the live delivery did.
November 13, 2019 at 08:43AM

🎧 episode 12: Kleos and Nostos | Literature and History

Listened to episode 12: Kleos and Nostos by Doug Metzger from literatureandhistory.com
The Odyssey, Part 1 of 3. Adventure, monsters, temptresses, and a whole lot of wine-dark Aegean. Learn all about the world of Homer’s Odyssey.

A dramatically different type of story is told here versus the Iliad.

👓 Windows 10 November 2019 Update is now available as more of a service pack | The Verge

Read Windows 10 November 2019 Update is now available as more of a service pack by Tom Warren (The Verge)
It’s a minor update that’s more like the traditional Windows service pack.

👓 The 5 college majors American students most regret picking | CNBC

Read The 5 college majors American students most regret picking by Jessica Dickler (CNBC)
English, communications, biological sciences and law were among the most regretted college majors, according to a recent survey. On the upside, students who focused on computer science, business, engineering and health administration felt very good about their choices.

👓 Days of Our Lives cast let go from contracts, as the show struggles with ratings | CNBC

Read Days of Our Lives cast let go from contracts, as the show struggles with ratings by Díamaris Martino (CNBC)
The entire cast has been released from their contracts, although the show has not been canceled yet, according to an exclusive by TVLine.