A global team reviews audio clips in an effort to help the voice-activated assistant respond to commands.
Well, any computer scientist or experienced programmer knows right away that being “made of math” does not demonstrate anything about the accuracy or utility of a program. Math is a lot more of a social construct than most people think. But we don’t need to spend years taking classes in algorithms to understand how and why the types of algorithms used in artificial intelligence systems today can be tremendously biased. Here, look at these four photos. What do they have in common?
I have a problem with algorithms that sort my posts by parameters I don’t know about, made by people who want to sell my attention to others. ❧
This is a key point. Social media and the way it (and its black box algorithms) amplifies almost anything for the benefit of clicks towards advertising is one of its most toxic features. Too often the extreme voice draws the most attention instead of being moderated down by more civil and moderate society.
I’ve spent some time this morning thinking about the deplatforming of the abhorrent social media site Gab.ai by Google, Apple, Stripe, PayPal, and Medium following the Tree of Life shooting in Pennsylvania. I’ve created a deplatforming page on the IndieWeb wiki with some initial background and history. I’ve also gone back and tagged (with “deplatforming”) a few articles I’ve read or podcasts I’ve listened to recently that may have some interesting bearing on the topic.
The particular design question I’m personally looking at is roughly:
How can we reshape the web and social media in a way that allows individuals and organizations a platform for their own free speech and communication without accelerating or amplifying the voices of the abhorrent fringes of people espousing broadly anti-social values like virulent discrimination, racism, fascism, etc.?
In some sense, the advertising driven social media sites like Facebook, Twitter, et al. have given the masses the equivalent of not simply a louder voice within their communities, but potential megaphones to audiences previously far, far beyond their reach. When monetized against the tremendous value of billions of clicks, there is almost no reason for these corporate giants to filter or moderate socially abhorrent content. Their unfiltered and unregulated algorithms compound the issue from a societal perspective. I look at it in some sense as the equivalent of the advent of machine guns and ultimately nuclear weapons in 20th century warfare and their extreme effects on modern society.
The flip side of the coin is potentially to give users the ability to better control and/or filter what they’re being presented on these platforms, and thus what they’re consuming, so solutions can relate to both the output and the input stages.
Comments and additions to the page (or even here below) particularly with respect to positive framing and potential solutions on how to best approach this design hurdle for human communication are more than welcome.
Deplatforming or no platform is a form of banning in which a person or organization is denied the use of a platform (physical or increasingly virtual) on which to speak.
In addition to the banning of those with socially unacceptable viewpoints, there has been a long history of marginalized voices (particularly trans, LGBTQ, sex workers, etc.) being deplatformed in systematic ways.
The banning can be from any of a variety of spaces, ranging from physical meeting spaces or lectures and journalistic coverage in newspapers or television to domain name registration, web hosting, and even specific social media platforms like Facebook or Twitter. Some have used these terms as narrowly as in relation to having their Twitter “verified” status removed.
“We need to puncture this myth that [deplatforming]’s only affecting far-right people. Trans rights activists, Black Lives Matter organizers, LGBTQI people have been demonetized or deranked. The reason we’re talking about far-right people is that they have coverage on Fox News and representatives in Congress holding hearings. They already have political power.” — Deplatforming Works: Alex Jones says getting banned by YouTube and Facebook will only make him stronger. The research says that’s not true. in Motherboard 2018-08-10
Glenn Beck parted ways with Fox News in what some consider to have been a network deplatforming. He ultimately moved to his own platform consisting of his own website.
Milo Yiannopoulos, the former Breitbart personality, was permanently banned from Twitter in 2016 for inciting targeted harassment campaigns against actress Leslie Jones. He resigned from Breitbart over comments he made about pedophilia on a podcast. These also resulted in the termination of a book deal with Simon & Schuster as well as the cancellation of multiple speaking engagements at universities.
The Daily Stormer
Neo-Nazi site The Daily Stormer was deplatformed by Cloudflare in the wake of 2017’s “Unite the Right” rally in Charlottesville. Following criticism, Matthew Prince, Cloudflare CEO, announced that he was ending the Daily Stormer’s relationship with Cloudflare, which provides services for protecting sites against distributed denial-of-service (DDoS) attacks and maintaining their stability.
Alex Jones and his Infowars were deplatformed by Apple, Spotify, YouTube, and Facebook in late summer 2018 for his network’s false claims about the Sandy Hook shooting in Newtown.
- Deplatforming Works in Motherboard
Gab.ai was deplatformed from PayPal, Stripe, Medium †, Apple, and Google as a result of their providing a platform for alt-right and racist groups, as well as for the shooter in the Tree of Life Synagogue shooting in October 2018.
Gab.com is under attack. We have been systematically no-platformed by App Stores, multiple hosting providers, and several payment processors. We have been smeared by the mainstream media for defending free expression and individual liberty for all people and for working with law enforcement to ensure that justice is served for the horrible atrocity committed in Pittsburgh. Gab will continue to fight for the fundamental human right to speak freely. As we transition to a new hosting provider Gab will be inaccessible for a period of time. We are working around the clock to get Gab.com back online. Thank you and remember to speak freely.
—from the Gab.ai homepage on 2018-10-29
- Gab, the Social Media Site for the Alt-Right, Gets Deplatformed
- Microsoft threatened to stop hosting the alt-right’s favorite social network in Quartz 2018-08-10
- Face the Racist Nation from On The Media | WNYC Studios on 2018-08-31 includes a segment about deplatforming racist groups from news coverage in the early 1900s.
- ‘By Whatever Means Necessary’: The Origins of the ‘No Platform Policy’ on Hatful of History on 2015-11-03
- Deplatforming Works: Alex Jones says getting banned by YouTube and Facebook will only make him stronger. The research says that’s not true. in Motherboard 2018-08-10
- Study finds Reddit’s controversial ban of its most toxic subreddits actually worked in TechCrunch on 2017-09-11
- The case for quarantining extremist ideas by Joan Donovan and Danah Boyd in The Guardian on 2018-06-01
- You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech in Proc. ACM Hum.-Comput. Interact., Vol. 1, No. 2, Article 31 (November 2017)
Abstract: The News Study research report presents findings about how a sample of U.S. college students gather information and engage with news in the digital age. Results are included from an online survey of 5,844 respondents and telephone interviews with 37 participants from 11 U.S. colleges and universities selected for their regional, demographic, and red/blue state diversity. A computational analysis was conducted using Twitter data associated with the survey respondents and a Twitter panel of 135,891 college-age people. Six recommendations are included for educators, journalists, and librarians working to make students effective news consumers. To explore the implications of this study’s findings, concise commentaries from leading thinkers in education, libraries, media research, and journalism are included.
A great little paper about how teens and college students are finding, reading, sharing, and generally interacting with news. There’s some nice overlap here on both the topics of journalism and education which I find completely fascinating. In general, however, I think in a few places students are misreporting their general uses, so I’m glad a portion of the paper actually looks at data from Twitter in the wild to see what real-world use cases actually are.
As I read this, I can’t help but think of some things I’ve seen Michael Caulfield writing about news and social media over the past several months. As I look, I notice that he’s already read and written a bit about a press release for this particular paper. I’ll have to take a look at his take on it tomorrow. I’m particularly interested in any insights he’s got on lateral reading and fake news above and beyond his prior thoughts.
Perhaps I missed it hiding in there reading so late at night, but another potentially good source for this paper’s recommended section would be Caulfield’s book Web Literacy for Student Fact-Checkers.
Highlights, Quotes, Annotations, & Marginalia
The purpose of this study was to better understand the preferences, practices, and motivations of young news consumers, while focusing on what students actually do, rather than what they do not do. ❧
October 22, 2018 at 08:28PM
YouTube (54%), Instagram (51%) or Snapchat (55%) ❧
I’m curious to know which sources in particular they’re using on these platforms. Snapchat was growing as a news source a year ago, but I’ve heard those sources have since been declining. What is the general quality of these sources?
For example, getting news from television can range from PBS News Hour and cable news networks (more traditional sources) to comedy shows like Stephen Colbert and The Daily Show with Trevor Noah which have some underlying news in the comedy, but are far from traditional sources.
October 22, 2018 at 08:35PM
Some students (28%) received news from podcasts in the preceding week. ❧
October 22, 2018 at 08:38PM
news is stressful and has little impact on the day-to-day routines — use it for class assignments, avoid it otherwise.” While a few students like this one practiced news abstinence, such students were rare. ❧
This sounds a bit like my college experience, though I didn’t avoid it because of stressful news (and there wasn’t social media yet). I generally missed it because I didn’t subscribe directly to publications or watch much television. Most of my news consumption was the local college newspaper.
October 22, 2018 at 08:46PM
But on the Web, stories of all kinds can show up anywhere and information and news are all mixed together. Light features rotate through prominent spots on the “page” with the same weight as breaking news, sports coverage, and investigative pieces, even on mainstream news sites. Advertorial “features” and opinion pieces are not always clearly identified in digital spaces. ❧
This difference is one of the things I miss about reading a particular newspaper and experiencing the outlet’s particular curation of their own stories. Perhaps I should spend more time looking at the “front page” of various news sites?
October 22, 2018 at 08:57PM
Some (36%) said they agreed that the threat of “‘fake news’ had made them distrust the credibility of any news.” Almost half (45%) lacked confidence with discerning “real news” from “fake news,” and only 14% said they were “very confident” that they could detect “fake news.” ❧
These numbers are insane!
October 22, 2018 at 09:04PM
As a matter of recourse, some students in the study “read the news laterally,” meaning they used sources elsewhere on the Internet to compare versions of a story in an attempt to verify its facts, bias, and ultimately, its credibility. ❧
This reminds me how much I miss the old daily analysis that Slate used to do for the day’s top news stories in various outlets in their Today’s Papers segment.
October 22, 2018 at 09:15PM
Some respondents, though not all, did evaluate the veracity of news they shared on social media. More (62%) said they checked to see how current an item was, while 59% read the complete story before sharing and 57% checked the URL to see where a story originated (Figure 7). Fewer read comments about a post (55%) or looked to see how many times an item was tweeted or shared (39%). ❧
I’m not sure I believe these self-reported numbers at all. 59% read the complete story before sharing?! 57% checked the URL? I’d bet most of them couldn’t even define what a URL is.
October 22, 2018 at 10:00PM
information diet ❧
October 22, 2018 at 11:02PM
At the tactical level, there are likely many small things that could be tested with younger audiences to help them better orient themselves to the crowded news landscape. For example, some news organizations are more clearly identifying different types of content such as editorials, features, and backgrounders/news analysis. More consistent and more obvious use of these typological tags would help all news consumers, not just youth, and could also travel with content as it is posted and shared in social media. News organizations should engage more actively with younger audiences to see what might be helpful. ❧
October 22, 2018 at 11:37PM
When news began moving into the first digital spaces in the early 1990s, pro-Web journalists touted the possibilities of hypertext links that would give news consumers the context they needed. Within a couple of years, hypertext links slowly began to disappear from many news stories. Today, hypertext links are all but gone from most mainstream news stories. ❧
October 22, 2018 at 11:38PM
Over the last year, I was fortunate to help guide a study of the news consumption habits of college students, and coordinate Northeastern University Library’s services for the study, including great work by our data visualization specialist Steven Braun and necessary infrastructure from our digital team, including Sarah Sweeney and Hillary Corbett. “How Students Engage with News,” out today as both a long article and accompanying datasets and media, provides a full snapshot of how college students navigate our complex and high-velocity media environment.
Highlights, Quotes, Annotations, & Marginalia
Side note: After recently seeing Yale Art Gallery’s show “Seriously Funny: Caricature Through the Centuries,” I think there’s a good article to be written about the historical parallels between today’s visual memes and political cartoons from the past. ❧
This also makes me think back to other entertainments of the historical poor including the use/purpose of stained glass windows in church supposedly as a means of entertaining the illiterate Latin vulgate masses.
October 22, 2018 at 08:07PM
nearly 6,000 students from a wide variety of institutions ❧
Institutions = colleges/universities? Or are we also considering less educated youth as well?
October 22, 2018 at 08:08PM
A more active stance by librarians, journalists, educators, and others who convey truth-seeking habits is essential. ❧
In some sense these people can also be viewed as aggregators and curators of sorts. How can their work be aggregated and be used to compete with the poor algorithms of social media?
October 22, 2018 at 08:11PM
Welcome to the ‘Anatomy of an AI System’
I recently purged the data from my Facebook account. This effort was shockingly labour intensive: it took a browser script all weekend to crunch, and still many aspects of the process required manual execution. Torching years and years of old Facebook activity felt so liberating that I found another...
A short but solid piece on why James has left social media and consciously moved to his own blog and feed reader. I’m curious what his thoughts will be a bit further into the experience. He’s definitely worth a follow.
Last week was the 8th annual IndieWeb Summit held in Portland, Oregon. While IndieWeb Camps and Summits have traditionally been held on weekends during people’s free time, this one held in the middle of the week was a roaring success. With well over 50 people in attendance, this was almost certainly the largest turnout I’ve seen to date. I suspect that because people who flew in for the event had really committed, attendance on the second day was much higher than usual as well. It was great to see so many people hacking on their personal websites and tools to make their personal online experiences richer.
The year of the Indie Reader
Last year I wrote the post Feed Reader Revolution in response to an increasingly growing need I’ve seen in the social space for a new sort of functionality in feed readers. While there have been a few interesting attempts, like Woodwind, which showed a proof of concept, not much work had been done until some initial work by Aaron Parecki and a session at last year’s IndieWeb Summit entitled Putting it all Together.
Over the past year I’ve been closely watching Aaron Parecki; Grant Richmond and Jonathan LaCour; Eddie Hinkle; and Kristof De Jaeger’s collective progress on the microsub specification as well as their respective projects Aperture/Monocle; Together; Indigenous/Indigenous for iOS; and Indigenous for Android. As a result, in early May I was overjoyed to suggest a keynote session on readers, and was stupefied this week as many of these projects officially launched and are open to general registration as relatively solid beta web services.
I spent a few minutes in a session at the end of Tuesday and managed to log into Aperture and create an account (#16, though I suspect I may be one of the first to use it besides the initial group of five developers). I also managed to quickly and easily add a microsub endpoint to my website as well. Sadly, I’ve still got some tweaks to make to my own installation before I can properly log into any of the reader app front ends. Based on several of the demos I’ve seen over the past months, the functionality involved is not only impressive, but it’s a properly large step ahead of some of the basic user interface provided by the now-shuttered Woodwind.xyz service (though the code is still available for self-hosting).
Several people have committed to make attempts at creating a microsub server, including Jack Jamieson, who has announced an attempt at creating one for WordPress after having recently built the Yarns reader for WordPress from scratch this past year. I suspect within the coming year we’ll see one or two additional servers as well as some additional reading front ends. In fact, Ryan Barrett spent the day on Wednesday hacking away at the NewsBlur API, leveraging it to make NewsBlur a front end for Aperture’s server functionality. I’m hoping others may do the same for other popular readers like Feedly or Inoreader to expand on the plurality of offerings. Increased competition for new reader offerings can only improve the entire space.
Even more reading related support
Just before the Summit, gRegor Morrill unveiled the beta version of his micropub client Indiebookclub.biz, which allows one to log in with their own website and use it to post reading updates to their own website. For those who don’t yet support micropub, the service saves the data for eventual export. His work on it continued through the summit to improve an already impressive product. It’s the first micropub client of its kind amidst a growing field of websites (including WordPress and WithKnown, which both have plugins) that offer reading post support. Micro.blog has recently updated its code to allow users of the platform to post reads with indiebookclub.biz as well. As a result of this spurt of reading-related support, there’s now a draft proposal to add
read-status support as new Microformats. Perhaps reads will be included in future updates of the post-type-discovery algorithm as well?
Given the growth of reading post support and a new micropub read client, I suspect it won’t take long before some of the new microsub-related readers begin supporting read post micropub functionality as well.
In addition to David Shanske’s recent valiant update to the IndieAuth plugin for WordPress, Manton Reece managed to finish up coding work to unveil another implementation of IndieAuth at the Summit. His version is for the micro.blog platform which is a significant addition to the community and will add several hundred additional users who will have broader access to a wide assortment of functionality as a result.
While work continues apace on a broad variety of fronts, I was happy to see that my proposal for a session on IndieAlgorithms was accepted (despite my leading another topic earlier in the day). It was well attended and sparked some interesting discussion about how individuals might be able to exert greater control over what they’re presented with to consume. With the rise of indie feed readers this year, the ability to better control and filter one’s incoming content is going to take on greater importance in the very near future. With an increasing number of readers to choose from, more people will hopefully be able to free themselves from the vagaries of the black-box algorithms that drive content distribution and presentation in products like Facebook, Twitter, Instagram, and others. Based on the architecture of servers like Aperture, perhaps we might be able to modify some of the microsub spec to allow more freedom and flexibility in what will assuredly be the next step in the evolution of the IndieWeb?
While there are miles and miles to go before we sleep, I was happy to have seen a session on diversity pop up at the Summit. I hope we can all take the general topic to heart to be more inclusive and actively invite friends into our fold. Thanks to Jean for suggesting and guiding the conversation and everyone else for continuing it throughout the rest of the summit and beyond.
Naturally, the above are just a few of the bigger highlights as I perceive them. I’m sure others will appear in the IndieNews feed or other blogposts about the summit. The IndieWeb is something subtly different to each person, so I hope everyone takes a moment to share (on your own sites naturally) what you got out of all the sessions and discussions. There was a tremendous amount of discussion, debate, and advancement of the state of the art of the continually growing IndieWeb. Fortunately almost all of it was captured in the IndieWeb chat, on Twitter, and on video available through either the IndieWeb wiki pages for the summit or directly from the IndieWeb YouTube channel.
I suspect David Shanske and I will have more to say in what is sure to be a recap episode in our next podcast.
Finally, below I’m including a bunch of photos I took over the course of my trip. I’m far from a professional photographer, but hopefully they’ll give a small representation of some of the fun we all had at camp.
While I’m thinking about it, I wanted to take a moment to thank everyone who came to the summit. You all really made it a fantastic event!
I’d particularly like to thank Aaron Parecki, Tantek Çelik, gRegor Morrill, Marty McGuire, and David Shanske who did a lot of the organizing and volunteer work to help make the summit happen as well as to capture it so well for others to participate remotely or even view major portions of it after-the-fact. I would be remiss if I didn’t thank Martijn van der Ven for some herculean efforts on IRC/Chat in documenting things in real time as well as for some serious wiki gardening along the way. As always, there are a huge crew of others whose contributions large and small help to make up the rich fabric of the community and we wouldn’t be who we are without your help. Thank you all! (Or as I might say in chat: community++).
And finally, a special personal thanks to Greg McVerry for kindly letting me join him at the Hotel deLuxe for some late night discussions on the intersection of IndieWeb and Domain of One’s Own philosophies as they dovetail with the education sector. With growing interest and a wealth of ideas in this area, I’m confident it’s going to be a rapidly growing one over the coming years.
I’d also like to take a moment to say thanks to all the sponsors who helped to make the event a success including Name.com, GoDaddy, Okta, Mozilla, DreamHost, and likely a few others who I’m missing at the moment.
I’d also like to thank the Eliot Center for letting us host the event at their fabulous facility.
2016 was the year that the likes of Instagram and Twitter decided they knew better than you what content you wanted to see in your feeds.
use algorithms to decide on what individual users most wanted to see. Depending on our friendships and actions, the system might deliver old news, biased news, or news which had already been disproven.
2016 was the year of politicians telling us what we should believe, but it was also the year of machines telling us what we should want.
The only way to ensure your posts gain notice is to bombard the feed and hope that some stick, which risks compromising on quality and annoying people.
Sreekumar added: “Interestingly enough, the change was made after Instagram opened the doors to brands to run ads.” But even once they pay for visibility, a brand is under pressure to remain engaging: “Playing devil’s advocate for a second here: All the money in the world cannot transform shitty content into good content.”
Artificially limiting reach of large accounts to then turn around and demand extortion money? It’s the social media mafia!
It disorients the reader, and distracts them with endless, timeless content.
Verified accounts turning themselves into bots, millions of fake likes and comments, a dirty world of engagement trading inside Telegram groups. Welcome to the secret underbelly of Instagram.
Eventually there will be so much noise on these platforms that they will cease to have any meaning for the business purposes that people are intending to use them for.
Worse, they’re giving away their login credentials to outsiders to do this.
We're building an artificial intelligence-powered dystopia, one click at a time, says techno-sociologist Zeynep Tufekci. In an eye-opening talk, she details how the same algorithms companies like Facebook, Google and Amazon use to get you to click on ads are also used to organize your access to political and social information. And the machines aren't even the real threat. What we need to understand is how the powerful might use AI to control us -- and what we can do in response.
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
The underlying problem here is the database has the polygon for the building but not the exact point of the front door. So it guesses a point by filling in the centroid of the polygon. Which is kinda close but not close enough. A better heuristic may be “center of the polyline that faces the matching street”. That’s also going to be wrong sometimes, but less often.
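The two heuristics above can be sketched in a few lines. This is a minimal illustration, not code from any actual mapping system; the building footprint, street point, and function names are all hypothetical:

```python
# Sketch of the two front-door guesses described above: the naive
# polygon centroid versus the midpoint of the edge facing the street.

def polygon_centroid(pts):
    """Area-weighted centroid of a simple polygon via the shoelace formula."""
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return (cx / (6 * a), cy / (6 * a))

def street_facing_midpoint(pts, street_pt):
    """Midpoint of the polygon edge whose midpoint lies closest to the street."""
    best, best_d2 = None, float("inf")
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        d2 = (mx - street_pt[0]) ** 2 + (my - street_pt[1]) ** 2
        if d2 < best_d2:
            best_d2, best = d2, (mx, my)
    return best

# A unit-square building with its matching street running along y = -1:
building = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(polygon_centroid(building))                    # (0.5, 0.5) — middle of the roof
print(street_facing_midpoint(building, (0.5, -1)))   # (0.5, 0.0) — the street-side wall
```

As the post notes, the second heuristic will still be wrong sometimes (corner entrances, courtyards), but it at least lands on the side of the building a person approaching from the street would reach.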
Some interesting issues with online maps.