Cross Campus, 85 N. Raymond Avenue, Pasadena, CA, US
January 17, 2020, 08:15 AM – 10:15 AM
Artificial Intelligence has the potential to enhance life in ways that we are just beginning to explore. But along with its advantages come new challenges for ethical system behavior. We must work together to mitigate the risks associated with AI solutions.
Bio: Maria Alvarez
As the General Manager of Shared Engineering Services in the AI + Research division at Microsoft, Maria and her team provide services and programs that support Search, Ads, News, Maps, and Microsoft Research. Maria is a technical leader with over 20 years of experience. Prior to joining Microsoft in 2011, she advanced through positions at Symantec, HP, CoCreate Software, and Yahoo! She also served as CTO of Panda Security in Spain. Maria has a B.S. in Information Systems and an M.S. in CS from California State Polytechnic University.
Melanie Mitchell & Jim talk about the many approaches to creating AI, hype cycles, self-driving cars, what can be learned from human intelligence, & more!
Fourteen years ago, a dozen geeks gathered around our dining table for Tagsgiving dinner. No, that’s not a typo. In 2005, my husband and I celebrated Thanksgiving as “Tagsgiving,” in honor of the web technology that had given birth to our online community development shop. I invited our guests...
Tagging systems were “folksonomies”: chaotic, self-organizing categorization schemes that grew from the bottom up. ❧
There’s something that just feels so wrong in this article about old school tagging and the blogosphere that has a pullquote meant to encourage one to Tweet the quote. #irony
–December 04, 2019 at 11:03AM
I literally couldn’t remember when I’d last looked at my RSS subscriptions.
On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucially, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation and manipulation, and giving up my power of self-determination. ❧
–December 04, 2019 at 11:34AM
You might connect with someone who regularly used the same tags that you did, but that was because they shared your interests, not because they had X thousand followers. ❧
An important and sadly underutilized means of discovery. –December 04, 2019 at 11:35AM
I find it interesting that Alexandra’s Twitter display name is AlexandraSamuel.com while the top of her own website has the apparent title @AlexandraSamuel. I don’t think I’ve seen a crossing up of those two sorts of identities before though it has become more common for people to use their own website name as their Twitter name. Greg McVerry is another example of this.
Thanks to Jeremy Cherfas and Aaron Davis for the links to this piece. I suspect that Dr. Samuel will appreciate that we’re talking about this piece using our own websites and tagging them with our own crazy taxonomies. I’m feeling nostalgic now for the old Technorati…
Alexandra Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed. ❧
Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!
Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice–or at least in my decade of living with them I’ve yet to run into poetry in one.
–December 04, 2019 at 10:56AM
In a deep examination and self-reflection on the last decade of photo sharing, Summer Bedard looks at how the previously intimate, cumbersome experience has morphed into the edited, contrived perfection found on Instagram.
The explosion of people marked a shift from having a community to having an audience. This ultimately changed the mental model of what gets posted. People act differently in their living room than they do on stage. They may feel more vulnerable and guarded. You’re sharing with a community, but working for an audience. ❧
–November 28, 2019 at 09:42PM
I would love to see a future where enjoying photos becomes more like enjoying music. Spotify gives you an easy way to consider options by assessing your mood and putting together an appropriate playlist that feels personal. We could do the same for images. Can you imagine opening Spotify and having it blast a random song immediately? Our current Instagram home screen is the visual equivalent of a playlist mashup of country, classical, techno, hip hop, and polka. ❧
I like the idea of this. Can someone build it please?
–November 28, 2019 at 09:46PM
What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone? Or opening endless accounts to separate feeds by topic? And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too? ❧
Some great blue sky ideas here.
–November 28, 2019 at 09:48PM
Artificial intelligence is better than humans at playing chess or go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.
Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.
For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.
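The core idea of “figuring out which words are likely to follow other words” can be sketched with a toy word-level bigram model. This is a drastic simplification, not GPT-2 itself (which uses a transformer over subword tokens and vastly more data), but the generation loop is the same in spirit: continue a seed text one token at a time by sampling from observed continuations.

```python
import random
from collections import defaultdict

# Tiny toy corpus; GPT-2 was trained on millions of web pages instead.
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "and the cat saw the dog").split()

# Count which words have been seen following which.
followers = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current].append(nxt)

def generate(seed, length=8, rng=random.Random(0)):
    """Continue a seed word by repeatedly sampling a plausible next word."""
    words = [seed]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the"))
```

Because every next word was actually observed somewhere in the corpus, the output reads as locally plausible even though the model has no understanding of what it is saying, which is also the heart of the critique of GPT-2 later in this piece.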
This isn’t a very difficult problem and the underpinnings of it are well laid out by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*. In it he has a lot of interesting tidbits about language and structure from an engineering perspective including the reason why crossword puzzles work.
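Pierce’s crossword observation rests on the redundancy of English: letters are far from equiprobable, so a partially filled grid strongly constrains the rest. A rough sketch of that point, using approximate English letter frequencies (the exact figures vary by corpus):

```python
import math

# Approximate English letter frequencies, in percent; values vary by corpus.
freq = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
        's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
        'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
        'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
        'q': 0.10, 'z': 0.07}

total = sum(freq.values())
probs = [v / total for v in freq.values()]

# Shannon entropy of single letters vs. a uniform 26-letter alphabet.
h_english = -sum(p * math.log2(p) for p in probs)
h_uniform = math.log2(26)
print(f"uniform: {h_uniform:.2f} bits; English letters: {h_english:.2f} bits")
```

Even at the single-letter level English carries noticeably less than the uniform maximum, and once letter pairs and words are accounted for the redundancy grows much larger, which is exactly the slack that lets crossing entries pin each other down.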
November 13, 2019 at 08:33AM
The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.
Circle back around and read this when it comes out.
Similarly, these other references should be an interesting read as well.
November 13, 2019 at 08:36AM
From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.
And it’s not just happening with text; it also happens with speech, as I’ve written before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In that case, looking at transcripts actually helped to reveal that the emperor had no clothes, because so much was missing from the speech that the text couldn’t fill in the gaps the way the live delivery did.
November 13, 2019 at 08:43AM
The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, a tool to support humans in detecting whether a text was generated by a model. GLTR applies a suite of baseline statistical methods that can detect generation artifacts across common sampling schemes. In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs.
Pages 111–116; Florence, Italy; July 28 – August 2, 2019. Association for Computational Linguistics.
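GLTR’s approach can be sketched with a toy stand-in for the language model: score each token by its rank in the model’s predicted distribution, since machine-sampled text skews heavily toward low-rank (high-probability) tokens. Here the “model” is just unigram frequency in a small reference corpus, a hypothetical stand-in for GPT-2’s contextual predictions, and the bucket thresholds are illustrative rather than GLTR’s actual ones.

```python
from collections import Counter

# Reference text standing in for the language model's training distribution.
reference = ("the quick brown fox jumps over the lazy dog the fox "
             "runs and the dog sleeps").split()
by_freq = [w for w, _ in Counter(reference).most_common()]

def rank_bucket(word):
    # GLTR colors tokens by rank under the model (e.g. top-10, top-100, ...);
    # this toy version uses frequency rank in the reference corpus.
    try:
        rank = by_freq.index(word)
    except ValueError:
        return "unseen"
    return "top-3" if rank < 3 else "top-10" if rank < 10 else "rare"

text = "the fox jumps over the zebra".split()
print([(w, rank_bucket(w)) for w in text])
```

Human writing tends to produce a healthy share of “rare” and “unseen” tokens; text sampled from a model clusters in the top buckets, which is the visual signal GLTR surfaces to readers.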
Tools and techniques related to AI text generation. I wrote this for like twelve people. Recently, I used an AI trained on fantasy novels to generate custom stories for about a thousand readers. The stories were appealingly strange, they came with maps (MAPS!), and they looked like this:
The following is an extended excerpt from my book-in-progress, “An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence”, which is under contract with Berghahn Books, New York, and is to see the light of day in the summer of 2020. I welcome your thoughts. The final form of this section will no doubt change by the time I get through the entire process. I use the term ‘golems’ earlier in the book to describe the agents of agent based modeling, which I then translate into archaeogames, which I then muse might be powered by neural network models of language like GPT-2.
Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we've allowed ourselves to see and how we're getting acculturated to these machines based on chaos.
Twitter started out like this in some sense, but ultimately closed itself off–likely to its own detriment.
César Hidalgo has a radical suggestion for fixing our broken political system: automate it! In this provocative talk, he outlines a bold idea to bypass politicians by empowering citizens to create personalized AI representatives that participate directly in democratic decisions. Explore a new way to make collective decisions and expand your understanding of democracy.
“It’s not a communication problem, it’s a cognitive bandwidth problem.”—César Hidalgo
He’s definitely right about the second part, but it’s also a communication problem, because so much political speech is slanted toward untruths, covering up facts and potential outcomes in order to present the outcome the speaker wants. There’s also far too much of our leaders saying “Do as I say (and attempt to legislate), not as I do.” Examples include legislators working to actively take away abortion access, or to condemn those who are LGBTQ, while they do those very things themselves or for their families, or live out those lifestyles in secret.
“One of the reasons why we use Democracy so little may be because Democracy has a very bad user interface and if we improve the user interface of democracy we might be able to use it more.”—César Hidalgo
This is an interesting idea, but it definitely has many pitfalls given how we know AI systems currently work. We’d definitely need to start small with simpler problems and build our way up to the more complex. Even then, I’m not so sure the complexity issues could ultimately be overcome. On its face it sounds like he’s relying too much on the old “clockwork” viewpoint of physics, though I know that obviously isn’t (or couldn’t be) his personal viewpoint. There are a lot more pathways for this to become a weapon of math destruction right now than the utopian tool he’s envisioning.
Stephen Fry loves words. But he does more than love them. He puts them together in ways that so delight readers that a blog or a tweet by him can get hundreds of thousands of people hanging on his every keystroke. As an actor, he’s brought to life every kind of theatrical writing from sketch comedy to classics. He’s performed in everything from game shows to the British audiobook version of Harry Potter. And always with a rich intelligence and searching eye. In this conversation with Alan Alda, Stephen explores how myths, sometimes very ancient ones, help us understand, and even guide, our modern selves.
I used to be an artist, then I became a poet; then a writer. Now when asked, I simply refer to myself as a word processor. — Kenneth Goldsmith It’s a striking headline, and the Guardian…
Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science. ❧