📑 YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Annotated YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant by Mark Bergen (Bloomberg)
A 2015 clip about vaccination from iHealthTube.com, a “natural health” YouTube channel, is one of the videos that now sports a small gray box.  

Does this box appear on the video itself? Apparently not…

Examples show the box on YouTube’s own watch pages, but nothing appears on the embedded versions of the same videos.

A screengrab of what this looks like, for those who don’t want to be “infected” by the algorithmic act of visiting these YouTube pages:

[Image: a YouTube video on vaccinations with a gray box below it linking to factual information on Wikipedia]

🎧 How President Trump’s Angry Tweets Can Ripple Across Social Media | NPR

Listened to How President Trump's Angry Tweets Can Ripple Across Social Media by Tim Mak from NPR

When Trump posts a mean tweet, how does it make its way across social media into the American consciousness? Researchers crunched the numbers to see if his negative tweets were shared more often.

This news segment references some interesting-sounding research groups working on social media, emotional contagion, and sentiment analysis.

📑 YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Annotated YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant by Mark Bergen (Bloomberg)
YouTube doesn’t give an exact recipe for virality. But in the race to one billion hours, a formula emerged: Outrage equals attention.  

Talk radio has used this formula for years; it has almost had to, in order to drive any listenership as audiences left radio for television and other media.

I can still remember the difference in “loudness” between Bill O’Reilly’s primetime show on Fox News and his even louder radio show.

📑 YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Annotated YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant by Mark Bergen (Bloomberg)
When Wojcicki took over, in 2014, YouTube was a third of the way to the goal, she recalled in investor John Doerr’s 2018 book Measure What Matters. “They thought it would break the internet! But it seemed to me that such a clear and measurable objective would energize people, and I cheered them on,” Wojcicki told Doerr. “The billion hours of daily watch time gave our tech people a North Star.” By October, 2016, YouTube hit its goal.

Obviously they took the easy route. You may need to measure what matters, but getting to that goal by any means necessary, or through indefensible shortcuts, is the fallacy here. They could have kept that North Star; it’s the means by which they chose to reach it that were wrong.

This is another great example of tech ignoring basic ethics to get to a monetary goal. (Another good one is Mark Zuckerberg’s “connecting people” mantra, when what it should be is “connecting people for good” or “creating positive connections”.)

📑 YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Annotated YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant by Mark Bergen (Bloomberg)
Somewhere along the last decade, he added, YouTube prioritized chasing profits over the safety of its users. “We may have been hemorrhaging money,” he said. “But at least dogs riding skateboards never killed anyone.”  

📑 YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant

Annotated YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant by Mark Bergen (Bloomberg)
The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.  

This is a great summation of the issue.

👓 Amazon Workers Are Listening to What You Tell Alexa | Bloomberg

Read Amazon Workers Are Listening to What You Tell Alexa by Matt Day, Giles Turner, and Natalia Drozdiak (Bloomberg)
A global team reviews audio clips in an effort to help the voice-activated assistant respond to commands.

🎧 The Daily: Silicon Valley’s Military Dilemma | New York Times

Listened to The Daily: Silicon Valley’s Military Dilemma from New York Times

Should Big Tech partner with the Pentagon? We examine a cautionary tale.

Some great history and questions about ethics here.

I’m surprised that, given its share of the profits, Dow didn’t spin off the napalm division to some defense contractor.

Of course, some tech companies are already weaponizing their own products against people. What about those ethical issues?

The more I see, read, and hear about the vagaries of social media (the constant failings and foibles of Facebook, the trolling dumpster fire that is Twitter, the ills of Instagram, the spread of dark patterns, and the unchecked rise of surveillance capitalism and weapons of math destruction), the more I think that the underlying ideas focusing on people and humanity within the IndieWeb movement are its core strength.

Perhaps we need to create a new renaissance of humanism for the 21st century. Maybe we call it digital humanism to create some intense focus, but the emphasis should be squarely on the people side.

Naturally there’s a lot more that they, and we all, need to do to improve our lives. Let’s band together to create better people-centric, ethical solutions.

🎧 Triangulation 380 The Age of Surveillance Capitalism | TWiT.TV

Listened to Triangulation 380 The Age of Surveillance Capitalism by Leo Laporte from TWiT.tv

Shoshana Zuboff is the author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. She talks with Leo Laporte about how social media is being used to influence people.


Even for people who are steeped in the ideas of surveillance capitalism, ad tech, and dark patterns, there’s still a lot here to be surprised about. If you’re on social media, this should be required listening/watching.

I can’t wait to get a copy of her book.

Folks in the IndieWeb movement have begun to fix portions of the problem, but Shoshana Zuboff indicates that there are several additional levels of humane understanding that will need to be bridged to make sure those efforts aren’t in vain. We’ll likely need to do more than just own our own data; we’ll need to go a step or two further as well.

The thing I was shocked not to hear in this interview (and which may not be in the book either) is something that I think has generally gone unmentioned with respect to Facebook and elections and election tampering (29:18). Zuboff and Laporte discuss Facebook’s experiments in influencing people to vote, in several tests for which they published academic papers. Even with the rumors that Mark Zuckerberg was eyeing a potential presidential run in 2020, with his trip across America meeting people of all walks of life, no one floated the general idea that, as the CEO of Facebook, he might use what the company learned in those social experiments to help get himself (or even someone else) elected, by sending social signals to certain communities to prevent them from voting while sending other signals to other communities to encourage them to vote. The research indicates that in a very divided political climate, with the right sorts of voting data, it wouldn’t take a whole lot of work for Facebook to help effectuate a landslide victory for particular candidates or even entire political parties! And because of the distributed nature of such an attack on democracy, Facebook’s black-box algorithms, and the subtlety of the experiments, it would be incredibly hard to prove that such a thing had even been done.
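
Some back-of-envelope arithmetic shows why this worries me. In the sketch below, the turnout-lift figure is loosely based on the roughly 0.39-percentage-point direct effect reported in Facebook’s published 2012 voter-turnout experiment; the audience size and partisan lean are purely my own illustrative assumptions:

```python
# Illustrative sketch only: the turnout lift is loosely based on the ~0.39
# percentage-point direct effect reported in Facebook's published 2012
# 61-million-user voter experiment; audience size and lean are made up.

def net_votes_shifted(targeted_users: int, turnout_lift: float,
                      candidate_share: float) -> float:
    """Net votes gained by one side from a one-sided turnout nudge.

    targeted_users:  people shown (or denied) the "go vote" social signal
    turnout_lift:    added probability of voting (0.004 = +0.4 points)
    candidate_share: fraction of the targeted group favoring that side
    """
    extra_voters = targeted_users * turnout_lift
    # Each extra voter splits candidate_share : (1 - candidate_share),
    # so the net swing is the difference between the two.
    return extra_voters * (2 * candidate_share - 1)

# Nudging 10 million users in communities leaning 70/30 toward one side:
print(f"{net_votes_shifted(10_000_000, 0.004, 0.70):,.0f} net votes")
# -> 16,000 net votes from a single quiet intervention, and the same play
#    could be run (or run in reverse, to discourage turnout) state by state.
```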

I like her broad concept (around 43:00) about how people tend to frame new situations using pre-existing experience, and how this may not always be the most useful approach for complex ideas that won’t necessarily play out the same way given potentially massive shifts in paradigms.

Also of great interest is the idea of instrumentarianism as opposed to the older idea of totalitarianism (43:49). Totalitarian leaders ruled by fear and intimidation; now big data stores can potentially create the same kinds of dynamics, but without the fear and intimidation, by more subtly influencing particular groups of people. When combined with the ideas behind “swarming” phenomena or Mark Granovetter’s threshold models of collective behavior, only a very small number of people may need to be influenced digitally to create drastic outcomes. I don’t recall the reference specifically, but I remember a paper on the mathematics of how ethnic neighborhoods form (it sounds like Thomas Schelling’s segregation models), in which only about 17% of people needed to be racist and move out of a neighborhood to begin creating ethnic homogeneity and drastically less diversity within a community.
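
To make the threshold idea concrete, here’s a minimal sketch of Granovetter’s model as I understand it; the numbers reproduce his classic riot example rather than anything from the episode:

```python
# Minimal sketch of Granovetter's threshold model of collective behavior.
# thresholds[i] = how many *other* people must already be acting before
# person i joins in (rioting, sharing, moving out of a neighborhood...).

def cascade_size(thresholds: list[int]) -> int:
    """Iterate to a fixed point: how many people end up acting."""
    active = 0
    while True:
        new_active = sum(t <= active for t in thresholds)
        if new_active == active:
            return active
        active = new_active

# Granovetter's classic example: 100 people with thresholds 0, 1, ..., 99.
# The instigator (threshold 0) tips the person with threshold 1, and so on
# until the entire crowd is acting.
crowd = list(range(100))
print(cascade_size(crowd))   # 100

# Change a single person's threshold from 1 to 2 and the chain breaks:
crowd[1] = 2
print(cascade_size(crowd))   # 1
```

The striking part is that the two populations are nearly identical person by person, yet one produces a full cascade and the other almost nothing, which is exactly why a small, targeted digital nudge could matter so much.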

Also tangentially touched on here, though not discussed directly: I can’t help but think that all of this data, combined with some useful complexity theory, might go a long way toward better defining (and perhaps actually controlling) Adam Smith’s economic “invisible hand.”

There’s just so much to consider here that it’s going to take several revisits to the ideas and some additional research to tease this all apart.

👓 Instacart and DoorDash’s Tip Policies Are Delivering Outrage | The New York Times

Read After Uproar, Instacart Backs Off Controversial Tipping Policy (New York Times)
The delivery app’s practice of counting tips toward guaranteed minimum payments for its contract workers drew accusations of wage theft.

👓 How Math Can Be Racist: Giraffing | 0xabad1dea

Read How Math Can Be Racist: Giraffing (0xabad1dea)
Well, any computer scientist or experienced programmer knows right away that being “made of math” does not demonstrate anything about the accuracy or utility of a program. Math is a lot more of a social construct than most people think. But we don’t need to spend years taking classes in algorithms to understand how and why the types of algorithms used in artificial intelligence systems today can be tremendously biased. Here, look at these four photos. What do they have in common?

👓 Is YouTube Fundamental or Trivial? | Study Hacks – Cal Newport

Replied to Is YouTube Fundamental or Trivial? by Cal Newport (Study Hacks)

As a public critic of social media, I’m often asked if my concerns extend to YouTube. This is a tricky question.

As I’ve written, platforms such as Facebook and Instagram didn’t offer something fundamentally different than the world wide web that preceded them. Their main contribution was to make this style of online life more accessible and convenient.

I suspect that people have generally been exploring some of this already, particularly with embedding. The difficult part of moving past YouTube, Vimeo, et al. for streaming, or even simple embedding, is that video on the web is a big engineering problem, not to mention a major bandwidth issue for self-hosters. I’ve seen stalwarts like Kevin Marks indicate in the past that they’d put almost any type of content on their own websites natively except video. Even coding a JavaScript player on one’s own site is prohibitively difficult, and rarely do major corporate players in the video content space bother to do this themselves. Thus, until something drastic happens, embedding video may be the only sensible way to go.
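
Some rough arithmetic shows just how lopsided the bandwidth problem is for a self-hoster; all the numbers below (bitrates, view counts) are my own illustrative assumptions:

```python
# Back-of-envelope traffic estimate for self-hosted media. All numbers
# below (bitrates, view counts) are illustrative assumptions.

def monthly_gigabytes(views: int, minutes: float, mbps: float) -> float:
    """Approximate transfer for a month of streaming at a given bitrate."""
    megabits = views * minutes * 60 * mbps   # total megabits served
    return megabits / 8 / 1000               # megabits -> MB -> GB

# One 10-minute video at a modest 5 Mbps (roughly 1080p), viewed 1,000 times:
print(f"video: {monthly_gigabytes(1_000, 10, 5):,.0f} GB")      # ~375 GB

# The same month of plays as 128 kbps audio is a small fraction of that:
print(f"audio: {monthly_gigabytes(1_000, 10, 0.128):,.1f} GB")  # ~9.6 GB
```

That gap, before even touching transcoding to multiple resolutions or adaptive streaming, is why self-hosting audio feels tractable while video mostly doesn’t.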

As an interesting aside, I’ll note that until just a few months ago YouTube allowed people to do embeds with several options, but they’ve recently removed the option to prevent their player from recommending additional videos once playback ends. Thus the embedding site is still co-opted to some extent by YouTube and its vexing algorithmic recommendations.

In a similar vein, audio is also an issue, but at least an easier and much lower-bandwidth one. I’ve been running some experiments lately on my own website, posting what I’m listening to on a regular basis as a “faux-cast” and embedding the original audio. I’ve also been doing it pointedly as a means of helping others discover good content: in some sense I can say I love the most recent NPR podcast, or click like on it somewhere, but I’m sure that doesn’t carry as much weight or value as my tacitly saying, “I’ve actually put my time and attention on the line and listened to this particular episode.” Having, and indicating, skin in the game can make a tremendous difference in these areas. Similarly, sites like Twitter don’t really have a good bookmarking feature, so readers don’t know whether the sharing user actually read an article or just the headline. Posting these things separately on my own site as either reads or bookmarks lets me differentiate between the two specifically and semantically, both for others’ benefit and, possibly most importantly, for my own future self.

👓 I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust | Gizmodo

Read I Tried Predictim's AI Scan for 'Risky' Babysitters on People I Trust (Gizmodo)
The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.

Another example of an app claiming “We don’t have bias in our AI” when it seems patently clear that they do. I wonder how one would prove (mathematically) that a system didn’t have bias?
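
For what it’s worth, there are formal definitions one could actually test against. Here’s a minimal, hypothetical sketch (made-up data, not anything from Predictim) of checking one of them, demographic parity, which asks whether the model flags different groups at different rates:

```python
# Hypothetical audit sketch with made-up data; "group A" stands in for a
# protected attribute. Demographic parity asks: are the two groups flagged
# as "risky" at (roughly) the same rate?

def flag_rate(flags: list[int], in_group: list[bool]) -> float:
    selected = [f for f, g in zip(flags, in_group) if g]
    return sum(selected) / len(selected)

flagged = [1, 0, 0, 1, 1, 0, 0, 0, 1, 1]               # model's risk flags
group_a = [True, True, False, True, False,
           True, False, False, True, False]            # protected attribute

rate_a = flag_rate(flagged, group_a)
rate_b = flag_rate(flagged, [not g for g in group_a])
print(f"group A: {rate_a:.0%}, group B: {rate_b:.0%}, "
      f"parity gap: {abs(rate_a - rate_b):.0%}")        # 60%, 40%, 20%
```

The deeper catch is that demographic parity is only one of several competing formal definitions (equalized odds and calibration are others), and known impossibility results show a classifier generally can’t satisfy them all at once when base rates differ between groups. So a blanket “our AI has no bias” isn’t really a provable claim; at best one can show which specific definitions hold on which data.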