Read What Happened to Tagging? by Alexandra Samuel (JSTOR Daily)
Fourteen years ago, a dozen geeks gathered around our dining table for Tagsgiving dinner. No, that’s not a typo. In 2005, my husband and I celebrated Thanksgiving as “Tagsgiving,” in honor of the web technology that had given birth to our online community development shop. I invited our guests...

It almost sounds like Dr. Samuel could be looking for the IndieWeb community, but just hasn’t run across it yet. Since she’s writing about tags, I can’t help but mischievously snitch tag this post to her, though I’ll do so only in hopes that it might make the internet all the better for it.

Tagging systems were “folksonomies”: chaotic, self-organizing categorization schemes that grew from the bottom up.

There’s something that just feels so wrong in this article about old school tagging and the blogosphere that has a pullquote meant to encourage one to Tweet the quote.
–December 04, 2019 at 11:03AM

I literally couldn’t remember when I’d last looked at my RSS subscriptions.
On the surface, that might seem like a win: Instead of painstakingly curating my own incoming news, I can effortlessly find an endless supply of interesting, worthwhile content that the algorithm finds for me. The problem, of course, is that the algorithm isn’t neutral: It’s the embodiment of Facebook and Twitter’s technology, data analysis, and most crucial, business model. By relying on the algorithm, instead of on tags and RSS, I’m letting an army of web developers, business strategists, data scientists, and advertisers determine what gets my attention. I’m leaving myself vulnerable to misinformation, and manipulation, and giving up my power of self-determination.

–December 04, 2019 at 11:34AM
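Reclaiming that attention doesn’t take much machinery, either: a feed fetcher plus a tag filter is most of a self-curated reader. A minimal sketch in Python (the feed XML and tag names here are invented for illustration; real feeds would come from URLs you subscribe to yourself):

```python
import xml.etree.ElementTree as ET

# A tiny, invented RSS 2.0 feed; a real reader would fetch this from a URL.
FEED_XML = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>On Folksonomies</title><category>tagging</category><category>indieweb</category></item>
  <item><title>Quarterly Earnings</title><category>business</category></item>
</channel></rss>"""

def items_with_tag(feed_xml, tag):
    """Return titles of feed items carrying the given <category> tag."""
    root = ET.fromstring(feed_xml)
    matches = []
    for item in root.iter("item"):
        tags = {c.text for c in item.findall("category")}
        if tag in tags:
            matches.append(item.findtext("title"))
    return matches

print(items_with_tag(FEED_XML, "tagging"))  # ['On Folksonomies']
```

RSS’s `<category>` element is the native hook for exactly the kind of tag-driven curation Samuel is nostalgic for: you choose the feeds, you choose the tags, and no advertiser sits in between.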

You might connect with someone who regularly used the same tags that you did, but that was because they shared your interests, not because they had X thousand followers.

An important and sadly underutilized means of discovery.
–December 04, 2019 at 11:35AM

I find it interesting that Alexandra’s Twitter display name is AlexandraSamuel.com while the top of her own website has the apparent title @AlexandraSamuel. I don’t think I’ve seen a crossing up of those two sorts of identities before though it has become more common for people to use their own website name as their Twitter name. Greg McVerry is another example of this.

Thanks to Jeremy Cherfas[1] and Aaron Davis[2] for the links to this piece. I suspect that Dr. Samuel will appreciate that we’re talking about this piece using our own websites and tagging them with our own crazy taxonomies. I’m feeling nostalgic now for the old Technorati…

Read What Happened to Tagging? by Aaron Davis (Read Write Collect)
Alexandra Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed. Samuel wonders if we h...

Alexandra Samuel reflects on tagging and its origins as a backbone to the social web. Along with RSS, tags allowed users to connect and collate content using such tools as feed readers. This all changed with the advent of social media and the algorithmically curated news feed.

Tags were used for discovery of specific types of content. Who needs that now that our new overlords of artificial intelligence and algorithmic feeds can tell us what we want to see?!

Of course we still need tags!!! How are you going to know serendipitously that you need more poetry in your life until you run into the tag on a service like IndieWeb.xyz? An algorithmic feed is unlikely to notice–or at least in my decade of living with them I’ve yet to run into poetry in one.
–December 04, 2019 at 10:56AM

Read The Evolving Exhibition of Us: A Decade of Sharing Pictures Online : Adjacent Issue 6 by Summer Bedard (itp.nyu.edu)
A deep examination and self-reflection on photo sharing of the last decade, Summer Bedard’s article looks at how the previously intimate, cumbersome experience has morphed into the edited, contrived perfection found on Instagram.

The explosion of people marked a shift from having a community to having an audience. This ultimately changed the mental model of what gets posted. People act differently in their living room than they do on stage. They may feel more vulnerable and guarded. You’re sharing with a community, but working for an audience.

–November 28, 2019 at 09:42PM

I would love to see a future where enjoying photos becomes more like enjoying music. Spotify gives you an easy way to consider options by assessing your mood and putting together an appropriate playlist that feels personal. We could do the same for images. Can you imagine opening Spotify and having it blast a random song immediately? Our current Instagram home screen is the visual equivalent of a playlist mashup of country, classical, techno, hip hop, and polka. 

I like the idea of this. Can someone build it please?
–November 28, 2019 at 09:46PM

What if you could use AI to control the content in your feed? Dialing up or down whatever is most useful to you. If I’m on a budget, maybe I don’t want to see photos of friends on extravagant vacations. Or, if I’m trying to pay more attention to my health, encourage me with lots of salads and exercise photos. If I recently broke up with somebody, happy couple photos probably aren’t going to help in the healing process. Why can’t I have control over it all, without having to unfollow anyone. Or, opening endless accounts to separate feeds by topic. And if I want to risk seeing everything, or spend a week replacing my usual feed with images from a different culture, country, or belief system, couldn’t I do that, too? 

Some great blue sky ideas here.
–November 28, 2019 at 09:48PM
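Bedard’s “dialing” idea is also straightforward to prototype as a weighted filter over tagged posts. A toy sketch, where the topic labels, dial weights, and threshold are all invented for illustration (a real system would need to infer topics from the images themselves):

```python
def rank_feed(posts, dials, threshold=0.0):
    """Score each post by the user's per-topic dials and keep those above threshold.

    posts: list of (caption, topics) tuples; dials: topic -> weight, where
    negative weights dial a topic down and positive weights dial it up.
    """
    scored = []
    for caption, topics in posts:
        score = sum(dials.get(t, 0.0) for t in topics)
        if score > threshold:
            scored.append((score, caption))
    return [caption for score, caption in sorted(scored, reverse=True)]

posts = [
    ("Beach villa week!", {"vacation", "luxury"}),
    ("Kale salad lunch", {"health", "food"}),
    ("Morning 5k done", {"health", "exercise"}),
]
# On a budget and focused on health: dial vacations down, health up.
dials = {"vacation": -1.0, "luxury": -0.5, "health": 1.0, "exercise": 0.5}
print(rank_feed(posts, dials))  # ['Morning 5k done', 'Kale salad lunch']
```

The point of the sketch is that the dials live with the reader, not the platform: flipping the `vacation` weight positive for a week is the “images from a different culture” experiment Bedard describes, with no unfollowing required.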

🎧 Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense

Listened to Mindscape 68 | Melanie Mitchell on Artificial Intelligence and the Challenge of Common Sense by Sean Carroll from preposterousuniverse.com

Artificial intelligence is better than humans at playing chess or go, but still has trouble holding a conversation or driving a car. A simple way to think about the discrepancy is through the lens of “common sense” — there are features of the world, from the fact that tables are solid to the prediction that a tree won’t walk across the street, that humans take for granted but that machines have difficulty learning. Melanie Mitchell is a computer scientist and complexity researcher who has written a new book about the prospects of modern AI. We talk about deep learning and other AI strategies, why they currently fall short at equipping computers with a functional “folk physics” understanding of the world, and how we might move forward.

Melanie Mitchell received her Ph.D. in computer science from the University of Michigan. She is currently a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. Her research focuses on genetic algorithms, cellular automata, and analogical reasoning. She is the author of An Introduction to Genetic Algorithms, Complexity: A Guided Tour, and most recently Artificial Intelligence: A Guide for Thinking Humans. She originated the Santa Fe Institute’s Complexity Explorer project, an online learning resource for complex systems.

One of the more interesting interviews of Dr. Mitchell with respect to her excellent new book. Dr. Carroll gets the space she’s working in and is able to have a more substantive conversation as a result.

👓 Humane Ingenuity 9: GPT-2 and You | Dan Cohen | Buttondown

Read Humane Ingenuity 9: GPT-2 and You by Dan Cohen (buttondown.email)
This newsletter has not been written by a GPT-2 text generator, but you can now find a lot of artificially created text that has been.

For those not familiar with GPT-2, it is, according to its creators OpenAI (a socially conscious artificial intelligence lab overseen by a nonprofit entity), “a large-scale unsupervised language model which generates coherent paragraphs of text.” Think of it as a computer that has consumed so much text that it’s very good at figuring out which words are likely to follow other words, and when strung together, these words create fairly coherent sentences and paragraphs that are plausible continuations of any initial (or “seed”) text.

This isn’t a very difficult problem and the underpinnings of it are well laid out by John R. Pierce in *[An Introduction to Information Theory: Symbols, Signals and Noise](https://amzn.to/32JWDSn)*. In it he has a lot of interesting tidbits about language and structure from an engineering perspective including the reason why crossword puzzles work.
November 13, 2019 at 08:33AM
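To make the “which words are likely to follow other words” idea concrete, here is a toy bigram (Markov-chain) generator of the sort Pierce describes; GPT-2’s neural model is vastly more capable, but the next-word sampling loop is the same in spirit (the training sentence is invented for illustration):

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    following = defaultdict(list)
    for a, b in zip(words, words[1:]):
        following[a].append(b)
    return following

def generate(following, seed, length=8, rng=None):
    """Continue the seed word by repeatedly sampling a plausible next word."""
    rng = rng or random.Random(0)
    out = [seed]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # dead end: nothing ever followed this word in training
        out.append(rng.choice(candidates))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

Pierce built exactly these word-approximation experiments by hand in the 1950s; the leap from this sketch to GPT-2 is mostly one of conditioning on far more context with far more data.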

The most interesting examples have been the weird ones (cf. HI7), where the language model has been trained on narrower, more colorful sets of texts, and then sparked with creative prompts. Archaeologist Shawn Graham, who is working on a book I’d like to preorder right now, An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence, fed GPT-2 the works of the English Egyptologist Flinders Petrie (1853-1942) and then resurrected him at the command line for a conversation about his work. Robin Sloan had similar good fun this summer with a focus on fantasy quests, and helpfully documented how he did it.

Circle back around and read this when it comes out.

Similarly, these other references should be an interesting read as well.
November 13, 2019 at 08:36AM

From this perspective, GPT-2 says less about artificial intelligence and more about how human intelligence is constantly looking for, and accepting of, stereotypical narrative genres, and how our mind always wants to make sense of any text it encounters, no matter how odd. Reflecting on that process can be the source of helpful self-awareness—about our past and present views and inclinations—and also, some significant enjoyment as our minds spin stories well beyond the thrown-together words on a page or screen.

And it’s not just happening with text; it also happens with speech, as I’ve written before in Complexity isn’t a Vice: 10 Word Answers and Doubletalk in Election 2016. In that case, looking at transcripts actually helped reveal that the emperor had no clothes: so much was missing from the speech that the text couldn’t fill in the gaps the way the live delivery did.
November 13, 2019 at 08:43AM

🔖 GLTR: Statistical Detection and Visualization of Generated Text | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations

Bookmarked GLTR: Statistical Detection and Visualization of Generated Text by Sebastian Gehrmann (Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: System Demonstrations (aclweb.org) [.pdf])

The rapid improvement of language models has raised the specter of abuse of text generation systems. This progress motivates the development of simple methods for detecting generated text that can be used by and explained to non-experts. We develop GLTR, a tool to support humans in detecting whether a text was generated by a model. GLTR applies a suite of baseline statistical methods that can detect generation artifacts across common sampling schemes. In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs.

From pages 111–116; Florence, Italy, July 28 - August 2, 2019. Association for Computational Linguistics
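GLTR’s central signal is where each observed word ranks in a language model’s predictions: generated text leans heavily on top-ranked words, while human writing wanders further down the list. A toy version of that test, with a plain word-frequency model standing in for GPT-2’s conditional distributions (the texts and counts are invented for illustration, and a frequency model is only a crude stand-in):

```python
from collections import Counter

def top_rank_fraction(text, model_counts, k=2):
    """Fraction of words that are among the model's k most expected words.

    GLTR computes this per position using GPT-2's conditional distribution;
    here a flat frequency model stands in to show the shape of the test.
    """
    top_k = {w for w, _ in model_counts.most_common(k)}
    words = text.split()
    return sum(w in top_k for w in words) / len(words)

model = Counter("the of the and of the to the of and".split())
# Generated-looking text keeps picking the model's favorite words...
print(top_rank_fraction("the of the of", model))  # 1.0
# ...while more surprising text scores far lower (1 of 6 words here).
print(top_rank_fraction("quick brown foxes evade the hounds", model))
```

The paper’s human-subjects result is essentially that coloring words by this rank lifts untrained readers’ detection rate from 54% to 72%.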

🔖 Notes from the quest factory | Robin Sloan

Bookmarked Notes from the quest factory by Robin Sloan (Year of the Meteor)
Tools and techniques related to AI text generation. I wrote this for like twelve people. Recently, I used an AI trained on fantasy novels to generate custom stories for about a thousand readers. The stories were appealingly strange, they came with maps (MAPS!), and they looked like this:

🔖 The Resurrection of Flinders Petrie | electricarchaeology.ca

Bookmarked The Resurrection of Flinders Petrie by Shawn Graham (electricarchaeology.ca)
The following is an extended excerpt from my book-in-progress, “An Enchantment of Digital Archaeology: Raising the Dead with Agent Based Models, Archaeogaming, and Artificial Intelligence”, which is under contract with Berghahn Books, New York, and is to see the light of day in the summer of 2020. I welcome your thoughts. The final form of this section will no doubt change by the time I get through the entire process. I use the term ‘golems’ earlier in the book to describe the agents of agent based modeling, which I then translate into archaeogames, which then I muse might be powered by neural network models of language like GPT-2.

🎧 Triangulation 413 David Weinberger: Everyday Chaos | TWiT.TV

Listened to Triangulation 413 David Weinberger: Everyday Chaos from TWiT.tv

Mikah Sargent speaks with David Weinberger, author of Everyday Chaos: Technology, Complexity, and How We’re Thriving in a New World of Possibility about how AI, big data, and the internet are all revealing that the world is vastly more complex and unpredictable than we've allowed ourselves to see and how we're getting acculturated to these machines based on chaos.

Interesting discussion of systems with built in openness or flexibility as a feature. They highlight Slack which has a core product, but allows individual users and companies to add custom pieces to it to use in the way they want. This provides a tremendous amount of addition value that Slack would never have known or been able to build otherwise. These sorts of products or platforms have the ability not only to create their inherent links, but add value by being able to flexibly create additional links outside of themselves or let external pieces create links to them.

Twitter started out like this in some sense, but ultimately closed itself off–likely to its own detriment.

Watched A bold idea to replace politicians by César Hidalgo from ted.com
César Hidalgo has a radical suggestion for fixing our broken political system: automate it! In this provocative talk, he outlines a bold idea to bypass politicians by empowering citizens to create personalized AI representatives that participate directly in democratic decisions. Explore a new way to make collective decisions and expand your understanding of democracy.

“It’s not a communication problem, it’s a cognitive bandwidth problem.”—César Hidalgo

He’s definitely right about the second part, but it’s also a communication problem, because most political speech is slanted toward untruths and toward covering up facts and potential outcomes to represent the outcome the speaker wants. There’s also far too much of our leaders saying “Do as I say (and attempt to legislate), not as I do.” Examples include legislators working to take away access to abortion, or to condemn those who are LGBTQ, while seeking those very things for themselves or their families, or living out those lives in secret.

“One of the reasons why we use Democracy so little may be because Democracy has a very bad user interface and if we improve the user interface of democracy we might be able to use it more.”—César Hidalgo

This is an interesting idea, but it definitely has many pitfalls with respect to how we know AI systems currently work. We’d need to start small with simpler problems and build our way up to the more complex; even then, I’m not so sure the complexity issues could ultimately be overcome. On its face it sounds like he’s relying too much on the old “clockwork” viewpoint of physics, though I know that obviously isn’t (or couldn’t be) his personal viewpoint. There are a lot more pathways for this to become a weapon of math destruction right now than the utopian tool he’s envisioning.

🎧 Stephen Fry On How Our Myths Help Us Know Who We Are | Clear+Vivid with Alan Alda

Listened to Stephen Fry On How Our Myths Help Us Know Who We Are by Alan Alda from Clear+Vivid with Alan Alda

Stephen Fry loves words. But he does more than love them. He puts them together in ways that so delight readers, that a blog or a tweet by him can get hundreds of thousands of people hanging on his every keystroke. As an actor, he’s brought to life every kind of theatrical writing from sketch comedy to classics. He’s performed in everything from game shows to the British audiobook version of Harry Potter. And always with a rich intelligence and searching eye. In this conversation with Alan Alda, Stephen explores how myths — sometimes very ancient ones — help us understand and, even guide, our modern selves.

Just a lovely episode here. I particularly like the idea about looking back to Greek mythology and the issues between the gods and humans being overlain in parallel on our present and future issues between humans and computers/robots/artificial intelligence.

👓 Deep text: a catastrophic threat to the bullshit economy? | Abject

Read Deep text: a catastrophic threat to the bullshit economy? (Abject)
I used to be an artist, then I became a poet; then a writer. Now when asked, I simply refer to myself as a word processor. — Kenneth Goldsmith It’s a striking headline, and the Guardian…

📑 Walter Pitts by Neil Smalheiser | Journal Perspectives in Biology and Medicine

Bookmarked Walter Pitts by Neil Smalheiser (Journal Perspectives in Biology and Medicine. Volume 43. Issue 2. Page 217 - 226.)
Walter Pitts was pivotal in establishing the revolutionary notion of the brain as a computer, which was seminal in the development of computer design, cybernetics, artificial intelligence, and theoretical neuroscience. He was also a participant in a large number of key advances in 20th-century science.  

This looks like an interesting bio to read.

🎧 Triangulation 380 The Age of Surveillance Capitalism | TWiT.TV

Listened to Triangulation 380 The Age of Surveillance Capitalism by Leo Laporte from TWiT.tv

Shoshana Zuboff is the author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. She talks with Leo Laporte about how social media is being used to influence people.

Even for the people who are steeped in some of the ideas of surveillance capitalism, ad tech, and dark patterns, there’s a lot here to still be surprised about. If you’re on social media, this should be required listening/watching.

I can’t wait to get a copy of her book.

Folks in the IndieWeb movement have begun to fix portions of the problem, but Shoshana Zuboff indicates that there are several additional levels of humane understanding that will need to be bridged to make sure their efforts aren’t just in vain. We’ll likely need to do more than just own our own data, but we’ll need to go a step or two further as well.

The thing I was shocked not to hear in this interview (and which may not be in the book either) is something that I think has been generally left unmentioned with respect to Facebook, elections, and election tampering (29:18). Zuboff and Laporte discuss Facebook’s experiments in influencing people to vote in several tests, for which they published academic papers. Even with the rumors that Mark Zuckerberg was eyeing a potential presidential run in 2020, with his trip across America meeting people of all walks of life, no one floated the general idea that, as the CEO of Facebook, he might use what the company learned in those social experiments to help get himself (or even someone else) elected: sending social signals to certain communities to discourage them from voting while sending other signals to encourage voting elsewhere. The research indicates that in a very divided political climate, with the right sorts of voting data, it wouldn’t take a whole lot of work for Facebook to help effectuate a landslide victory for particular candidates or even entire political parties!! And of course because of the distributed nature of such an attack on democracy, Facebook’s black box algorithms, and the subtlety of the experiments, it would be incredibly hard to prove that such a thing was even done.

I like her broad concept (around 43:00) where she discusses the idea of how people tend to frame new situations using pre-existing experience and that this may not always be the most useful thing to do for what can be complex ideas that don’t or won’t necessarily play out the same way given the potential massive shifts in paradigms.

Also of great interest is the idea of instrumentarianism as opposed to the older ideas of totalitarianism. (43:49) Totalitarian leaders used to rule by fear and intimidation; now big data stores can potentially create the same types of dynamics without the fear and intimidation, by more subtly influencing particular groups of people. When combined with the ideas behind “swarming” phenomena or Mark Granovetter’s ideas of threshold reactions in psychology, only a very small number of people may need to be influenced digitally to create drastic outcomes. I don’t recall the reference specifically, but I recall a paper on the mathematics of ethnic neighborhoods showing that only about 17% of residents needed to be intolerant enough to move away for a neighborhood to begin tipping toward ethnic homogeneity and drastically less diversity within a community.
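The threshold dynamic is easy to see in a toy Granovetter-style cascade (the neighborhood result sounds like the family of models Thomas Schelling explored, though I can’t confirm the specific paper). The thresholds and update rule below are invented for illustration, not the paper’s actual model:

```python
def cascade(thresholds, initial_actors):
    """Granovetter-style cascade: a person acts once the fraction of people
    already acting meets their personal threshold; iterate to a fixed point."""
    n = len(thresholds)
    acting = set(initial_actors)
    changed = True
    while changed:
        changed = False
        frac = len(acting) / n
        for i, t in enumerate(thresholds):
            if i not in acting and frac >= t:
                acting.add(i)
                changed = True
                frac = len(acting) / n
    return len(acting)

# Ten people whose thresholds form a staircase: one unconditional actor tips
# the next person, whose action tips the next, until all ten are acting.
staircase = [i / 10 for i in range(10)]
print(cascade(staircase, initial_actors=[0]))  # 10

# Raise a single threshold and the chain breaks immediately.
broken = staircase[:]
broken[1] = 0.5
print(cascade(broken, initial_actors=[0]))  # 1: no one else tips
```

This is the unnerving part Zuboff gestures at: nudging one well-placed sliver of a population can be the difference between no effect at all and a society-wide cascade.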

Also tangentially touched on here, but not discussed directly, I can’t help but think that all of this data with some useful complexity theory might actually go a long way toward better defining (and being able to actually control) Adam Smith’s economic “invisible hand.”

There’s just so much to consider here that it’s going to take several revisits to the ideas and some additional research to tease this all apart.

🎧 Triangulation 383 Meredith Broussard: Artificial Unintelligence | TWiT.TV

Listened to Triangulation 383 Meredith Broussard: Artificial Unintelligence by Megan Morrone from TWiT.tv

Software developer and data journalist Meredith Broussard joins Megan Morrone to discuss her book Artificial Unintelligence: How Computers Misunderstand the World, which makes the case against the idea that technology can solve all our problems, touching on self-driving cars, the digital divide, the difference between AI and machine learning, and more.

I’ve been waiting a while for Meredith’s book Artificial Unintelligence: How Computers Misunderstand the World to come out and this is an excellent reminder to pick up several copies for some friends who I know will appreciate it.

I’m curious if she’s got an Amazon Associates referral link so that we can give her an extra ~4% back for promoting her book? I don’t see one on her website unfortunately.

The opening of the show recalling the internet in the 90’s definitely took me back as I remember being in at least one class in college with Megan Morrone. I seem to recall that it was something in Writing Seminars, perhaps Contemporary American Letters?

There’s so much good to highlight here, but in particular I like the concept of technochauvinism, though when I initially heard it I had a different conception of what it might be than the definition Broussard gives: the belief that technology is always the solution to every problem. My initial impression of it was something closer to the idea of a tech bro.

My other favorite piece of discussion centered on her delving into her local educational structure to find that there was a dearth of books and computers, and how some of that might be fixed for future children. It’s reminiscent of a local computer scientist I know from Caltech who created some bus route models for the Pasadena school system to minimize their travel, gas costs, and personnel, saving the district several million dollars. I’m hoping some of those savings go toward more books…