RSVPed Attending Innovate Pasadena: Creating Lifelike Animate Companions that Enhance our Lives

Online event: July 17, 2020 at 08:30AM - 10:00AM

Moxie is an animate companion that promotes social and emotional development through play-based learning. Paolo Pirjanian will show a demo and speak to his journey developing Moxie and his vision for how technology can improve and enhance our lives.

We're at a tipping point of a paradigm shift in the way we will interact with technology. Embodied is aiming to lead this charge through an advanced social interface that respects humans’ natural modes of interaction, beyond simple verbal commands, to enable the next generation of computing, and to power a new class of machines that will change the world around us. Paolo will discuss how he and his team at Embodied are rethinking and reinventing how human-machine interaction is done - starting with the recent announcement of Moxie.

Moxie is an animate companion that helps children build social, emotional, and cognitive skills through everyday play-based learning and engaging content developed in association with experts in child development and education. Embodied has assembled a world-class team of experts in engineering, technology, game design, and entertainment to bring to life a robot whose machine learning technology allows it to perceive, process, and respond to natural conversation, eye contact, facial expressions, and other behavior, as well as recognize and recall people, places, and things.


BIO: Paolo Pirjanian
Paolo Pirjanian is the former CTO of iRobot and an early leader in the field of consumer robotics, with 16+ years of experience developing and commercializing cutting-edge home robots. He has led world-class teams and companies at iRobot®, Evolution Robotics®, and others. In 2016, Paolo founded Embodied, Inc. with the vision of building socially and emotionally intelligent companions that improve care and wellness and enhance our daily lives.

Bookmarked Explainable Artificial Intelligence and Machine Learning: A reality rooted perspective by Frank Emmert-Streib, Olli Yli-Harja, Matthias Dehmer (arXiv.org)
We are used to the availability of big data generated in nearly all fields of science as a consequence of technological progress. However, the analysis of such data poses vast challenges. One of these relates to the explainability of artificial intelligence (AI) or machine learning methods. Currently, many such methods are non-transparent with respect to their working mechanism and for this reason are called black box models, most notably deep learning methods. However, it has been realized that this constitutes a severe problem for a number of fields, including the health sciences and criminal justice, and arguments have been brought forward in favor of an explainable AI. In this paper, we do not assume the usual perspective presenting explainable AI as it should be, but rather we provide a discussion of what explainable AI can be. The difference is that we do not present wishful thinking but reality-grounded properties in relation to a scientific theory beyond physics.
Read Meet Leo, Your AI Research Assistant (blog.feedly.com)

Goodbye Information Overload

Keeping up with topics and trends you care about within a sea of articles can be overwhelming and time-consuming.

Filtering out the noise so you can focus on what really matters is a challenge we are deeply passionate about.

Today, we are delighted to announce Leo, your AI research assistant.

This is kind of cool, but I think I’d want more manual control over what I’m reading and seeing, and perhaps a separate discovery mode for this sort of functionality at times.

Bookmarked Gods and Robots: Myths, Machines, and Ancient Dreams of Technology by Adrienne Mayor (Princeton University Press)

The fascinating untold story of how the ancients imagined robots and other forms of artificial life—and even invented real automated machines

The first robot to walk the earth was a bronze giant called Talos. This wondrous machine was created not by MIT Robotics Lab, but by Hephaestus, the Greek god of invention. More than 2,500 years ago, long before medieval automata, and centuries before technology made self-moving devices possible, Greek mythology was exploring ideas about creating artificial life—and grappling with still-unresolved ethical concerns about biotechne, “life through craft.” In this compelling, richly illustrated book, Adrienne Mayor tells the fascinating story of how ancient Greek, Roman, Indian, and Chinese myths envisioned artificial life, automata, self-moving devices, and human enhancements—and how these visions relate to and reflect the ancient invention of real animated machines.

As early as Homer, Greeks were imagining robotic servants, animated statues, and even ancient versions of Artificial Intelligence, while in Indian legend, Buddha’s precious relics were defended by robot warriors copied from Greco-Roman designs for real automata. Mythic automata appear in tales about Jason and the Argonauts, Medea, Daedalus, Prometheus, and Pandora, and many of these machines are described as being built with the same materials and methods that human artisans used to make tools and statues. And, indeed, many sophisticated animated devices were actually built in antiquity, reaching a climax with the creation of a host of automata in the ancient city of learning, Alexandria, the original Silicon Valley.

A groundbreaking account of the earliest expressions of the timeless impulse to create artificial life, Gods and Robots reveals how some of today’s most advanced innovations in robotics and AI were foreshadowed in ancient myth—and how science has always been driven by imagination. This is mythology for the age of AI.

Book cover of Gods and Robots

Sean Carroll Mindscape Episode 40: Adrienne Mayor on Gods and Robots in Ancient Mythology
Listened to Episode 40: Adrienne Mayor on Gods and Robots in Ancient Mythology from Sean Carroll's Mindscape

The modern world is full of technology, and also with anxiety about technology. We worry about robot uprisings and artificial intelligence taking over, and we contemplate what it would mean for a computer to be conscious or truly human. It should probably come as no surprise that these ideas aren’t new to modern society — they go way back, at least to the stories and mythologies of ancient Greece. Today’s guest, Adrienne Mayor, is a folklorist and historian of science, whose recent work has been on robots and artificial humans in ancient mythology. From the bronze warrior Talos to the evil fembot Pandora, mythology is rife with stories of artificial beings. It’s both fun and useful to think about our contemporary concerns in light of these ancient tales.

Adrienne Mayor is a Research Scholar in Classics and in History and Philosophy of Science at Stanford University. She is also a Berggruen Fellow at Stanford’s Center for Advanced Study in the Behavioral Sciences. Her work has encompassed fossil traditions in classical antiquity and Native America, the origins of biological weapons, and the historical precursors of the stories of Amazon warriors. In 2009 she was a finalist for the National Book Award.

I’d never considered it before, but I’m curious whether the bolt on Talos’ leg had any influence on the bolts frequently seen on Frankenstein’s monster. Naturally they would seem to be there as a means of charging or animating him, but did they have any powers beyond that? Or was he, once jump-started, meant to run indefinitely? Bryan Alexander recently called out his diet (of apples and nuts), so presumably once he was brought to life, he was able to live the same way as a human.
Read Humane Ingenuity 14: Adding Dimensions by Dan Cohen (buttondown.email)
In HI12 I mentioned Ben Shneiderman’s talk on automation and agency, and he kindly sent me the full draft of the article he is writing on this topic. New to me was the Sheridan-Verplank Scale of Autonomy, which, come on, sounds like something straight out of Blade Runner:
RSVPed Attending Applications of Big Data and AI in Media & Entertainment

Details
A round table discussion with experts from the entertainment and media industry, followed by a chance to network and interact.
Cahill Center for Astronomy and Astrophysics, 1216 E California Blvd, Pasadena, CA 91125, USA
January 22, 2020 at 07:00PM - 09:00PM

RSVPed Attending Innovate Pasadena: The Impact of AI

Details:
Cross Campus, 85 N. Raymond Avenue, Pasadena, CA, US
January 17, 2020 at 08:15AM - 10:15AM

Artificial Intelligence has the potential to enhance life in ways that we are just beginning to explore. But along with its advantages come new challenges for ethical system behavior. We must work together to mitigate the risks associated with AI solutions.

Bio: Maria Alvarez
As the General Manager of Shared Engineering Services in the AI + Research division at Microsoft, Maria and her team provide services and programs that support Search, Ads, News, Maps, and Microsoft Research. Maria is a technical leader with over 20 years of experience. Prior to joining Microsoft in 2011, she held progressively senior positions at Symantec, HP, CoCreate Software, and Yahoo! She also served as CTO of Panda Security in Spain. Maria has a B.S. in Information Systems and an M.S. in CS from California State Polytechnic University.

Bookmarked EP33 Melanie Mitchell on the Elements of AI by Jim Rutt (The Jim Rutt Show)
Melanie Mitchell & Jim talk about the many approaches to creating AI, hype cycles, self-driving cars, what can be learned from human intelligence, & more!

Read What Happened to Tagging? by Alexandra Samuel (JSTOR Daily)
Fourteen years ago, a dozen geeks gathered around our dining table for Tagsgiving dinner. No, that’s not a typo. In 2005, my husband and I celebrated Thanksgiving as “Tagsgiving,” in honor of the web technology that had given birth to our online community development shop. I invited our guests...
It almost sounds like Dr. Samuel could be looking for the IndieWeb community, but just hasn’t run across it yet. Since she’s writing about tags, I can’t help but mischievously snitch-tag it to her, though I’ll do so only in hopes that it might make the internet all the better for it.

Tagging systems were “folksonomies”: chaotic, self-organizing categorization schemes that grew from the bottom up.

There’s something that just feels so wrong about an article on old-school tagging and the blogosphere having a pullquote meant to encourage one to tweet the quote.
–December 04, 2019 at 11:03AM

I literally couldn’t remember when I’d last looked at my RSS subscriptions.
On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and, most crucially, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation and manipulation, and giving up my power of self-determination.

–December 04, 2019 at 11:34AM

You might connect with someone who regularly used the same tags that you did, but that was because they shared your interests, not because they had X thousand followers.

An important and sadly underutilized means of discovery. –December 04, 2019 at 11:35AM

I find it interesting that Alexandra’s Twitter display name is AlexandraSamuel.com while the top of her own website has the apparent title @AlexandraSamuel. I don’t think I’ve seen those two sorts of identities crossed up before, though it has become more common for people to use their own website name as their Twitter name. Greg McVerry is another example of this.

Thanks to Jeremy Cherfas[1] and Aaron Davis[2] for the links to this piece. I suspect that Dr. Samuel will appreciate that we’re talking about this piece using our own websites and tagging them with our own crazy taxonomies. I’m feeling nostalgic now for the old Technorati…

Read What Happened to Tagging? by Aaron Davis (Read Write Collect)
Alexandra Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed. Samuel wonders if we h...

Alexandra Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice–or at least in my decade of living with them I’ve yet to run into poetry in one.
–December 04, 2019 at 10:56AM

Read The Evolving Exhibition of Us: A Decade of Sharing Pictures Online : Adjacent Issue 6 by Summer Bedard (itp.nyu.edu)
A deep examination and self-reflection on photo sharing of the last decade, Summer Bedard’s article looks at how the previously intimate, cumbersome experience has morphed into the edited, contrived perfection found on Instagram.

The explosion of people marked a shift from having a community to having an audience. This ultimately changed the mental model of what gets posted. People act differently in their living room than they do on stage. They may feel more vulnerable and guarded. You’re sharing with a community, but working for an audience.

–November 28, 2019 at 09:42PM

I would love to see a future where enjoying photos becomes more like enjoying music. Spotify gives you an easy way to consider options by assessing your mood and putting together an appropriate playlist that feels personal. We could do the same for images. Can you imagine opening Spotify and having it blast a random song immediately? Our current Instagram home screen is the visual equivalent of a playlist mashup of country, classical, techno, hip hop, and polka. 

I like the idea of this. Can someone build it please?
–November 28, 2019 at 09:46PM

What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone or open endless accounts to separate feeds by topic? And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too?

Some great blue sky ideas here.
–November 28, 2019 at 09:48PM

🎧 Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense

Listened to Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense by Sean Carroll from preposterousuniverse.com

Artificial intelligence is better than humans at playing chess or go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.

Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.

One of the more interesting interviews of Dr. Mitchell with respect to her excellent new book. Dr. Carroll gets the space she’s working in and is able to have a more substantive conversation as a result.

👓 Humane Ingenuity 9: GPT-2 and You | Dan Cohen | Buttondown

Read Humane Ingenuity 9: GPT-2 and You by Dan Cohen (buttondown.email)
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.

For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.

This isn’t a very difficult problem, and the underpinnings of it are well laid out by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*. In it, he has a lot of interesting tidbits about language and structure from an engineering perspective, including the reason why crossword puzzles work.
November 13, 2019 at 08:33AM
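To make the “words likely to follow other words” idea concrete, here’s a toy sketch of the same statistical intuition in Python: a word-level bigram model that counts which words follow which, then samples continuations of a seed word. It’s nothing like GPT-2 in scale or architecture (GPT-2 is a large neural network trained on subword tokens); the tiny corpus and the function names below are purely illustrative.

```python
from collections import Counter, defaultdict
import random

def train_bigram_model(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def generate(followers, seed, length=20):
    """Continue a seed word by repeatedly sampling a likely next word."""
    word, output = seed, [seed]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:
            break  # no observed continuation for this word
        choices, counts = zip(*candidates.items())
        word = random.choices(choices, weights=counts)[0]
        output.append(word)
    return " ".join(output)

# A deliberately silly corpus, just to show the mechanics.
corpus = "the cat sat on the mat and the cat ate the fish on the mat"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```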

The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

Circle back around and read this when it comes out.

Similarly, these other references should be an interesting read as well.
November 13, 2019 at 08:36AM
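For anyone who wants to try “sparking” a language model with a seed of their own, a minimal sketch using the stock GPT-2 model via the Hugging Face transformers library might look like this. Note that it skips the fine-tuning on narrower, more colorful corpora that Graham and Sloan describe and simply continues an invented prompt with the off-the-shelf model.

```python
# A rough sketch, not the fine-tuning workflow described above: this just
# continues a seed prompt with the stock GPT-2 model from Hugging Face.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The prompt is invented for illustration.
prompt = "The excavation at the temple revealed"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```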

From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.

And it’s not just happening with text; it also happens with speech, as I’ve written before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In fact, in that case, looking at transcripts actually helps to reveal that the emperor has no clothes, because so much is missing from the speech that the text alone can’t fill in the gaps the way the live delivery did.
November 13, 2019 at 08:43AM