Do businesses that rely on a low response rate of 1-2% and succeed have something in common? Could they all be considered predatory?
2016 was the year that the likes of Instagram and Twitter decided they knew better than you what content you wanted to see in your feeds.
They used algorithms to decide what individual users most wanted to see. Depending on our friendships and actions, the system might deliver old news, biased news, or news that had already been disproven.
2016 was the year of politicians telling us what we should believe, but it was also the year of machines telling us what we should want.
The only way to ensure your posts gain notice is to bombard the feed and hope that some stick, which risks compromising quality and annoying people.
Sreekumar added: “Interestingly enough, the change was made after Instagram opened the doors to brands to run ads.” But even once they pay for visibility, a brand is still under pressure to remain engaging: “Playing devil’s advocate for a second here: All the money in the world cannot transform shitty content into good content.”
Artificially limiting reach of large accounts to then turn around and demand extortion money? It’s the social media mafia!
It disorients the reader, and distracts them with endless, timeless content.
In data-hungry, tech-happy chain restaurants, customers are rating their servers using tabletop tablets, not realizing those ratings can put jobs at risk.
And Ziosk could be a roundabout way for employers to discriminate against employees. Employers are legally restricted from evaluating employees based on gender, age, race, or appearance, according to Karen Levy, an assistant professor in the Department of Information Science at Cornell University — but nothing is stopping Ziosk users from doing that, even though those ratings can affect a worker’s pay or employment. “If you outsource that job to a consumer, you may be able to escape that,” she said.
“Customers who might discriminate against a certain class or group of workers can use the system to leave negative comments that would affect the workers,” said Cornell’s Ajunwa. She compared the restaurant system to student evaluations of professors, which determine the trajectory of their careers, and tend to be biased against women.
Having low scores posted for all coworkers to see was “very embarrassing,” said Steph Buja, who recently left her job as a server at a Chili’s in Massachusetts. But that’s not the only way customers — perhaps inadvertently — use the tablets to humiliate waitstaff. One diner at Buja’s Chili’s used Ziosk to comment, “our waitress has small boobs.” According to other servers working in Ziosk environments, this isn’t a rare occurrence.
This is outright sexual harassment and appears to be actively creating a hostile work environment. I could easily see a class action against large chains and/or against the app maker itself. Aggregating the data and using it in a smart way is fine, but I suspect no one in the chain is actively thinking about what they’re doing; they’re just selling an idea down the line. The maker of the app should be doing a far better job of filtering this kind of crap out, aggregating the data in a smarter way, and providing better output, since the major chains they’re selling it to don’t seem capable of processing and disseminating what they’re collecting.
This is the transcript of the talk I gave this afternoon at a CUNY event on "The Labor of Open."
While reading this I was initially worried that it was a general rehash of some of her earlier work and thoughts which I’ve read several times in various incarnations. However, the end provided a fantastic thesis about unseen labor which should be more widely disseminated.
almost all the illustrations in this series – and there are 50 of these in all – involve “work” (or the outsourcing and obscuring of work). Let’s look at a few of these (and as we do so, think about how work is depicted – whose labor is valued, whose labor is mechanized, who works for whom, and so on).
What do machines free us from? Not drudgery – not everyone’s drudgery, at least. Not war. Not imperialism. Not gendered expectations of beauty. Not gendered expectations of heroism. Not gendered divisions of labor. Not class-based expectations of servitude. Not class-based expectations of leisure.
And so similarly, what is the digital supposed to liberate us from? What is rendered (further) invisible when we move from the mechanical to the digital, when we cannot see the levers and the wires and the pulleys?
As I look back upon the massive wealth compiled by digital social companies (Facebook, Twitter, et al.) for doing what is generally a middling sort of job, one they’re not paying nearly as much attention to as they ought, and upon the recent mad rush to comply with GDPR, I’m even more struck by what she’s saying here. All this value they have “created” isn’t really created by them directly; it’s produced by the “invisible labor” of billions of people and then merely captured by their systems, which they’re using to actively disadvantage us in myriad ways.
I suppose a lot of it all boils down to the fact that we’re all slowly losing our humanity when we fail to exercise it and see the humanity and value in others.
The bigger problem Watters doesn’t address is that with the advent of this digital revolution, we’re sadly able to marginalize, devalue, and shut out others more easily and quickly than we could before. If we don’t wake up to our reality, our old prejudices are going to destroy us. Digital gives us the ability to scale these problems up at a staggering pace compared with the early 1900s.
A simple and solid example can be seen in the way Facebook has been misused and abused in Sri Lanka lately. Rumors and innuendo have spread through the country unchecked by Facebook (primarily through apathy), resulting in the deaths of countless people. Facebook doesn’t even have a handle on its own scale problems well enough to prevent these issues, which are akin to allowing invading conquistadores from Spain to bring guns, germs, and steel into the New World to decimate untold millions of innocent indigenous peoples. Haven’t we learned our lessons from history? Or are we so intent on bringing them into the digital domain? Cathy O’Neil and others would certainly say we’re doing exactly this with “weapons of math destruction.”
A series of damning posts on Facebook has stoked longstanding ethnic tensions in Sri Lanka, setting off a wave of violence largely directed at Muslims. How are false rumors on social media fueling real-world attacks?
On today’s episode:
• Fraudulent claims of a Muslim plot to wipe out Sri Lanka’s Buddhist majority, widely circulated on Facebook and WhatsApp, have led to attacks on mosques and Muslim-owned homes and shops in the country.
• Facebook’s algorithm-driven news feed promotes whatever content draws the most engagement — which tend to be the posts that provoke negative, primal emotions like fear and anger. The platform has allowed misinformation to run rampant in countries with weak institutions and a history of deep social distrust.
There’s so much to think about and process here that I’ll have to re-read and think more specifically about all the details. I hope to come back to this later to mark it up and annotate it further.
I’ve read relatively deeply about a variety of privacy issues as well as the weaponization of data and its improper use by governments and businesses to unduly influence people. For those who are unaware of this movement over the recent past, I would highly recommend Cathy O’Neil’s text Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, which provides an excellent overview with a variety of examples about how the misuse of data can be devastating not just to individuals who are broadly unaware of it, but entire segments of society.
There is a lot of publicly available data we reveal via social media, and much of it one might flippantly consider “data exhaust” which has little, if any, inherent value by itself. Unfortunately, when used in aggregate, it can reveal striking things about us which we may either not be aware of ourselves or which we wouldn’t want to be openly known.
My brief thought here is that much like the transition from the use of smaller arms and handguns, which can kill people in relatively small numbers, to weapons like machine guns on up to nuclear weapons, which have the ability to quickly murder hundreds to millions at a time, we will have to modify some of our social norms the way we’ve modified our “war” norms over the past century. We’ll need to modify our personal social contracts so that people can still interact with each other on a direct basis without fear of larger corporations, governments, or institutions aggregating our data, processing it, and then using it against us in ways which unduly benefit them and tremendously disadvantage us as individuals, groups, or even at the level of entire societies.
In my mind, we need to protect the social glue that holds society together and improves our lives while not allowing the mass destruction of the fabric of society by large groups based on their ability to aggregate, process, and use our own data against us.
Thank you Sebastian for kicking off a broader conversation!
Disclaimer: I’m aware that in posting this to my own site, it will trigger a tacit webmention which will ping Sebastian Greger’s website. I give him permission to display any and all data he chooses from the originating web page in perpetuity, or until such time as I send a webmention either modifying or deleting the content of the originating page. I say this all with some jest, as I am really relying on the past twenty years of general social norms built up on the internet and in general society, as well as the current practices of the IndieWeb movement, to govern what he does with this content.
Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don't fit human error patterns -- and in ways we won't expect or be prepared for. "We cannot outsource our responsibilities to machines," she says. "We must hold on ever tighter to human values and human ethics."
What concerns me is that this is just one aspect of a kind of infrastructural violence being done to all of us, all of the time, and we’re still struggling to find a way to even talk about it, to describe its mechanisms and its actions and its effects.
I think this fits the definition of a Weapon of Math Destruction.
In 2015, a manager discovered a trove of accounts with Russian and Ukrainian IP addresses
A lot of your post also reminds me of Bryan Alexander’s relatively recent post I defy the world and to go back to RSS.
I completely get the concept of what you’re getting at with harkening back to the halcyon days of RSS. I certainly love, use, and rely on it heavily both for consumption as well as production. Of course there’s also still the competing standard of Atom still powering large parts of the web (including GNU Social networks like Mastodon). But almost no one looks back fondly on the feed format wars…
I think that while many are looking back on the “good old days” of the web, we should not forget the difficult and fraught history that has gotten us to where we are. We should learn from the mistakes made during the feed format wars and try to simplify things so that we not only move back, but move forward at the same time.
Today, an easier pared-down standard that is better and simpler than either of those old and difficult specs is simply adding Microformats classes to HTML (aka P.O.S.H.) to create feeds. Unless one is relying on pre-existing infrastructure like WordPress, building and maintaining RSS feed infrastructure can be difficult at best, and updates almost never occur, particularly for specifications that support new social-media-related feeds including replies, likes, favorites, reposts, etc. The nice part is that if one knows how to write basic HTML, then one can create a simple feed by hand without having to learn the markup or specifics of RSS. Most modern feed readers (except perhaps Feedly) support these h-feeds, as they’re known. Interestingly, some CMSes like WordPress support Microformats as part of their core functionality, though in WordPress’ case they only support a subset of Microformats v1 instead of the more modern v2.
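To make the idea concrete, here's a minimal sketch of an h-feed built from nothing but ordinary HTML plus Microformats v2 class names (the post title, URL, and dates are placeholders, not a real site):

```html
<!-- A hand-written feed: h-feed wraps the stream, each h-entry is one post -->
<div class="h-feed">
  <h1 class="p-name">Example Blog</h1>
  <article class="h-entry">
    <a class="u-url p-name" href="https://example.com/2018/first-post">My first post</a>
    <time class="dt-published" datetime="2018-05-01T12:00:00-07:00">May 1, 2018</time>
    <div class="e-content"><p>Hello, world.</p></div>
  </article>
</div>
```

A reader that understands h-feed parses this page directly; there is no separate XML file to generate or keep in sync with the HTML.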
For those like you who are looking both backward and simultaneously forward, there’s a nice chart of “Lost Infrastructure” on the IndieWeb wiki which was created following a post by Anil Dash entitled The Lost Infrastructure of Social Media. Hopefully we can take back a lot of the ground the web has lost to social media and refashion it for a better and more flexible future. I’m not looking for just a “hipster-web”, but a new and demonstrably better web.
Some of the desire to go back to RSS is built into the problems we’re looking at with respect to algorithmic filtering of our streams (we’re looking at you, Facebook). While algorithms might help to filter out some of the cruft we’re not looking for, we’ve been ceding too much control to third parties like Facebook, who have different motivations in presenting us material to read. I’d rather my feeds were closer to the model of fine dining than the junk food that the-McDonald’s-of-the-internet Facebook is providing. As I’m reading Cathy O’Neil’s book Weapons of Math Destruction, I’m also reminded that the black box that is Facebook’s algorithm is causing scale and visibility/transparency problems, like the Russian ad buys which could have heavily influenced the 2016 election in the United States. The fact that we can’t see or influence the algorithm is both painful and potentially destructive. If I could have access to tweaking a third-party transparent algorithm, I think it would provide me a lot more value.
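What might a transparent, user-tweakable feed algorithm even look like? Here's a purely hypothetical sketch: every scoring term is visible, and the reader, not the platform, sets the weights (the post attributes and weight names are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool  # do I follow this author directly?
    topic_match: float     # 0-1 overlap with my stated interests
    engagement: float      # 0-1 normalized likes/shares ("junk food" signal)
    age_hours: float

def score(post: Post, weights: dict) -> float:
    """Transparent ranking: each term is visible and user-adjustable."""
    recency = 1.0 / (1.0 + post.age_hours / 24.0)
    return (weights["followed"] * post.author_followed
            + weights["topic"] * post.topic_match
            + weights["engagement"] * post.engagement
            + weights["recency"] * recency)

# A reader who distrusts engagement-bait can simply zero out that weight.
my_weights = {"followed": 2.0, "topic": 1.5, "engagement": 0.0, "recency": 1.0}

posts = [
    Post(author_followed=True, topic_match=0.9, engagement=0.1, age_hours=2),
    Post(author_followed=False, topic_match=0.2, engagement=0.95, age_hours=1),
]
ranked = sorted(posts, key=lambda p: score(p, my_weights), reverse=True)
```

With these weights the low-engagement post from a followed author outranks the viral one, which is exactly the inversion an opaque engagement-maximizing feed prevents.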
As for OPML, it’s amazing what kind of power it has to help one find and subscribe to all sorts of content, particularly when it’s been hand curated and is continually self-dogfooded. Some of my favorite tools are readers that allow one to subscribe to the OPML feeds of others; that way, if a person adds new feeds to an interesting collection, the changes propagate to everyone following that collection. With this kind of simple technology, those who are interested in curating things for particular topics (like the newsletter crowd), or even creating master feeds for class material in a planet-like fashion, can easily do so. I can also see some worthwhile uses for this in journalism for newspapers and magazines. As an example, imagine if one could subscribe not only to 100 people writing about #edtech, but to only their bookmarked articles that have the tag edtech (thus filtering out their personal posts, or things not having to do with edtech). I don’t believe that Feedly supports subscribing to OPML (though it does support importing OPML files, which is subtly different), but other readers like Inoreader do.
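For reference, an OPML subscription list is just a small XML outline, which is part of why it's so easy to hand-curate; a reader that supports subscribing to one (rather than merely importing it) re-fetches the file and picks up any newly added feeds. The feed URLs below are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head><title>EdTech reading list</title></head>
  <body>
    <!-- each outline element is one feed subscription -->
    <outline text="Example edtech bookmarks" type="rss"
             xmlUrl="https://example.com/tag/edtech/feed/"
             htmlUrl="https://example.com/"/>
    <outline text="Another curator" type="rss"
             xmlUrl="https://example.org/bookmarks.xml"
             htmlUrl="https://example.org/"/>
  </body>
</opml>
```

Because the list itself lives at a URL, the curator edits one file and every subscriber's reader follows along.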
I’m hoping to finish up some work on my own available OPML feeds to make subscribing to interesting curated content a bit easier within WordPress (over the built-in, but now deprecated, link manager functionality). Since you mentioned it, I tried checking out the OPML file on your blog hoping for something interesting in the #edtech space. Alas… 😉 Perhaps something in the future?
I don’t think she’s used the specific words in the book yet, but O’Neil is fundamentally writing about social justice and transparency. To a great extent, both governments and increasingly large corporations are using these Weapons of Math Destruction inappropriately. Often it may be the case that the algorithms are so opaque as to be incomprehensible even to their creators/users, but, as I suspect in many cases, they’re being used to actively create social injustice by benefiting some classes and decimating others. The evolving case of Facebook’s potential role in shifting the outcome of the 2016 presidential election, especially via “dark posts,” is an interesting case in point.
In some sense these algorithms are like viruses running rampant in a large population without the availability of antibiotics to tamp down or modify their effects. Without feedback mechanisms and the ability to see what is going on as it happens the scale issue she touches on can quickly cause even greater harm over short periods of time.
I like that one of the first examples she uses for modeling is that of preparing food for a family. It’s simple, accessible, and generic enough that the majority of people can relate directly to it. It has lots of transparency (even more than her sabermetrics example from baseball). Sadly, however, there is a large swath of the American population that is poor, uneducated, and living in horrific food deserts, who may not grasp the subtleties of even this simple model. As I was reading, it occurred to me that there is a political football that gets pushed around from time to time in many countries that relates to food and food subsidies. In the United States it’s known as the Supplemental Nutrition Assistance Program (aka SNAP), and it’s regularly changing, though fortunately for many it has some nutritionists who help to provide a feedback mechanism for it. I suspect it would make a great example of the type of Weapon of Math Destruction she’s discussing in this book. Those who are interested in a quick overview of it and some of its consequences can find a short audio introduction via the Eat This Podcast episodes “How much does a nutritious diet cost? Depends what you mean by ‘nutritious’” or “Crime and nourishment: Some costs and consequences of the Supplemental Nutrition Assistance Program,” the latter of which discusses an interesting crime-related sub-consequence of something as simple as when SNAP benefits are distributed.
I suspect that O’Neil won’t go as far as to bring religion into her thesis, so I’ll do it for her, but I’ll do so from a more general moral philosophical standpoint which underpins much of the Judeo-Christian heritage so prevalent in our society. One of my pet peeves of moralizing (often Republican) conservatives (who often both wear their religion on their sleeves as well as beat others with it–here’s a good recent case in point) is that they never seem to follow the Golden Rule which is stated in multiple ways in the Bible including:
He will reply, ‘Truly I tell you, whatever you did not do for one of the least of these, you did not do for me.’
In a country that (says it) values meritocracy, much of the establishment doesn’t seem to put much, if any, value into these basic principles, despite indicating that they do.
I’ve previously highlighted the application of mathematical game theory before briefly in relation to the Golden Rule, but from a meritocracy perspective, why can’t it operate at all levels? By this I’ll make tangential reference to Cesar Hidalgo‘s thesis in his book Why Information Grows, in which he looks not just at individuals (person-bytes), but larger structures like firms/companies (firmbytes), governments, and even nations. Why can’t these larger structures have their own meritocracy? When America “competes” against other countries, why shouldn’t it be doing so in a meritocracy of nations? To do this requires that we as individuals (as well as corporations and city, state, and even national governments) help each other out to do what we can’t do alone. One often hears the aphorism that “a chain is only as strong as its weakest link”; why then would we actively go out of our way to create weak links within our own society, particularly as many in government decry the cultures and actions of other nations which we view as trying to defeat us? To me the statistical mechanics of the situation require that we help each other to advance the status quo of humanity. Evolution and the Red Queen hypothesis dictate that humanity won’t simply regress back to the mean; without this mutual advancement, it may instead be regressing toward extinction.
Highlights, Quotes, & Marginalia
You can often see troubles when grandparents visit a grandchild they haven’t seen for a while.
Highlight (yellow) page 22 | Location 409-410
Added on Thursday, October 12, 2017 11:19:23 PM
Upon meeting her a year later, they can suffer a few awkward hours because their models are out of date.
Highlight (yellow) page 22 | Location 411-412
Added on Thursday, October 12, 2017 11:19:41 PM
Racism, at the individual level, can be seen as a predictive model whirring away in billions of human minds around the world. It is built from faulty, incomplete, or generalized data. Whether it comes from experience or hearsay, the data indicates that certain types of people have behaved badly. That generates a binary prediction that all people of that race will behave that same way.
Highlight (yellow) page 22 | Location 416-420
Added on Thursday, October 12, 2017 11:20:34 PM
Needless to say, racists don’t spend a lot of time hunting down reliable data to train their twisted models.
Highlight (yellow) page 23 | Location 420-421
Added on Thursday, October 12, 2017 11:20:52 PM
the workings of a recidivism model are tucked away in algorithms, intelligible only to a tiny elite.
Highlight (yellow) page 25 | Location 454-455
Added on Thursday, October 12, 2017 11:24:46 PM
A 2013 study by the New York Civil Liberties Union found that while black and Latino males between the ages of fourteen and twenty-four made up only 4.7 percent of the city’s population, they accounted for 40.6 percent of the stop-and-frisk checks by police.
Highlight (yellow) page 25 | Location 462-463
Added on Thursday, October 12, 2017 11:25:50 PM
So if early “involvement” with the police signals recidivism, poor people and racial minorities look far riskier.
Highlight (yellow) page 26 | Location 465-466
Added on Thursday, October 12, 2017 11:26:15 PM
The questionnaire does avoid asking about race, which is illegal. But with the wealth of detail each prisoner provides, that single illegal question is almost superfluous.
Highlight (yellow) page 26 | Location 468-469
Added on Friday, October 13, 2017 6:01:28 PM
judge would sustain it. This is the basis of our legal system. We are judged by what we do, not by who we are.
Highlight (yellow) page 26 | Location 478-478
Added on Friday, October 13, 2017 6:02:53 PM
(And they’ll be free to create them when they start buying their own food.) I should add that my model is highly unlikely to scale. I don’t see Walmart or the US Agriculture Department or any other titan embracing my app and imposing it on hundreds of millions of people, like some of the WMDs we’ll be discussing.
You have to love the obligatory parental aphorism about making your own rules when you have your own house.
Yet the US SNAP program does just this. It could be an interesting example of this type of WMD.
Highlight (yellow) page 28 | Location 497-499
Added on Friday, October 13, 2017 6:06:04 PM
three kinds of models.
namely: baseball, food, recidivism
Highlight (yellow) page 27 | Location 489-489
Added on Friday, October 13, 2017 6:08:26 PM
The first question: Even if the participant is aware of being modeled, or what the model is used for, is the model opaque, or even invisible?
Highlight (yellow) page 28 | Location 502-503
Added on Friday, October 13, 2017 6:08:59 PM
many companies go out of their way to hide the results of their models or even their existence. One common justification is that the algorithm constitutes a “secret sauce” crucial to their business. It’s intellectual property, and it must be defended,
Highlight (yellow) page 29 | Location 513-514
Added on Friday, October 13, 2017 6:11:03 PM
the second question: Does the model work against the subject’s interest? In short, is it unfair? Does it damage or destroy lives?
Highlight (yellow) page 29 | Location 516-518
Added on Friday, October 13, 2017 6:11:22 PM
While many may benefit from it, it leads to suffering for others.
Highlight (yellow) page 29 | Location 521-522
Added on Friday, October 13, 2017 6:12:19 PM
The third question is whether a model has the capacity to grow exponentially. As a statistician would put it, can it scale?
Highlight (yellow) page 29 | Location 524-525
Added on Friday, October 13, 2017 6:13:00 PM
scale is what turns WMDs from local nuisances into tsunami forces, ones that define and delimit our lives.
Highlight (yellow) page 30 | Location 526-527
Added on Friday, October 13, 2017 6:13:20 PM
So to sum up, these are the three elements of a WMD: Opacity, Scale, and Damage. All of them will be present, to one degree or another, in the examples we’ll be covering
Think about this for a bit. Are there other potential characteristics?
Highlight (yellow) page 31 | Location 540-542
Added on Friday, October 13, 2017 6:18:52 PM
You could argue, for example, that the recidivism scores are not totally opaque, since they spit out scores that prisoners, in some cases, can see. Yet they’re brimming with mystery, since the prisoners cannot see how their answers produce their score. The scoring algorithm is hidden.
This is similar to anti-class-action laws and arbitration clauses that prevent classes from realizing they’re being discriminated against in the workplace or within healthcare. On behalf of insurance companies primarily, many lawmakers work to cap awards from litigation as well as to prevent class action suits, which would show much larger inequities that corporations would prefer to keep quiet. Some of the recent incidents, like the cases of Ellen Pao, Susan J. Fowler, or even Harvey Weinstein, are helping to remedy these types of things, despite individuals being pressured to stay quiet so as not to bring others to the forefront and show a broader pattern of bad actions on the part of companies or individuals. (This topic could be an extended article or even a book of its own.)
Highlight (yellow) page 31 | Location 542-544
Added on Friday, October 13, 2017 6:20:59 PM
the point is not whether some people benefit. It’s that so many suffer.
Highlight (yellow) page 31 | Location 547-547
Added on Friday, October 13, 2017 6:23:35 PM
And here’s one more thing about algorithms: they can leap from one field to the next, and they often do. Research in epidemiology can hold insights for box office predictions; spam filters are being retooled to identify the AIDS virus. This is true of WMDs as well. So if mathematical models in prisons appear to succeed at their job—which really boils down to efficient management of people—they could spread into the rest of the economy along with the other WMDs, leaving us as collateral damage.
Highlight (yellow) page 31 | Location 549-552
Added on Friday, October 13, 2017 6:24:09 PM
Guide to highlight colors
Yellow–general highlights and highlights which don’t fit under another category below
Orange–Vocabulary word; interesting and/or rare word
Green–Reference to read
Red–Example to work through
I’m reading this as part of Bryan Alexander’s online book club.
Details on the functionality can be found at Share Your Kindle Notes and Highlights with Your Friends (Beta).
Based on the opening, I’m expecting some great examples, many of which are going to be as heavily biased as things like the redlining seen in lending practices in the last century. They’ll come about as the result of missing data, missing assumptions, and even incorrect assumptions.
I’m aware that one of the biggest problems in so-called Big Data is that one needs to spend an inordinate amount of time cleaning up the data (often by hand) to get something even remotely usable. Even with this done I’ve heard about people not testing out their data and then relying on the results only to later find ridiculous error rates (sometimes over 100%!)
Of course there is some space here for the intelligent mathematician, scientist, or quant to create alternate models to take advantage of overlays in such areas, particularly markets. By overlay here, I mean the gambling definition of the word: a wager whose odds are higher than they should be, thus tending to favor an individual player (who typically has more knowledge or information about the game) rather than the house, which usually relies on a statistically biased game or on taking a rake off the top of a parimutuel financial structure, or rather than the bulk of other players who aren’t aware of the inequity. The mathematical models based on big data (aka Weapons of Math Destruction, or WMDs) described here, particularly in financial markets, will often create such large inequities that users of alternate means can take tremendous advantage of the differences for their own benefit. Perhaps it’s evolutionary competition that will more actively drive these differences to zero? If so, it’s likely to be a long time before they equilibrate based on current usage, especially when these algorithms are so opaque.
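The gambling sense of "overlay" can be made concrete with a little arithmetic: a wager is an overlay when the posted odds imply a win probability lower than the bettor's (better-informed) true estimate, making the expected value positive. A minimal sketch, with probabilities invented purely for illustration:

```python
def expected_value(true_prob: float, decimal_odds: float, stake: float = 1.0) -> float:
    """Expected profit of a wager: a win pays stake*(odds-1), a loss costs the stake."""
    return true_prob * stake * (decimal_odds - 1) - (1 - true_prob) * stake

# The house posts decimal odds of 3.0, implying a win probability of 1/3.
# A bettor with better information estimates the true probability at 0.40,
# so the bet is an overlay: its expected value per unit staked is positive.
overlay_ev = expected_value(true_prob=0.40, decimal_odds=3.0)

# At the implied probability (1/3), the same wager is exactly fair (EV = 0).
fair_ev = expected_value(true_prob=1 / 3, decimal_odds=3.0)

print(round(overlay_ev, 3))  # 0.2
print(round(fair_ev, 3))     # 0.0
```

The analogy to opaque big-data models is that whoever can see the gap between the "posted" model and reality captures that positive expected value, while everyone else unknowingly takes the other side.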
I suspect that some of this book will highlight uses of statistical errors and logical fallacies like cherry-picking data, hidden behind much more opaque mathematical algorithms, thereby making them even harder to detect than the simpler policy decisions that use the same tricks openly. It’s this type of opacity that helped cause major market shifts like the 2008 economic crash, and the markets involved still remain largely unregulated in ways that would protect the masses.
I suspect that folks within Bryan Alexander’s book club will find the example of Sarah Wysocki to be very compelling and damning evidence of how these big data algorithms work (or don’t work, as the case may be). In this particular example, there are so many signals that are difficult to measure, if measurable at all, that the thing they’re attempting to measure is swamped with so much noise as to be unusable. Equally interesting, but not presented here, would be the alternate case of someone tremendously incompetent (perhaps someone cheating, as indicated in the example) who actually scored tremendously high on the scale and was kept in their job.
Highlights, Quotes, & Marginalia
Do you see the paradox? An algorithm processes a slew of statistics and comes up with a probability that a certain person might be a bad hire, a risky borrower, a terrorist, or a miserable teacher. That probability is distilled into a score, which can turn someone’s life upside down. And yet when the person fights back, “suggestive” countervailing evidence simply won’t cut it. The case must be ironclad. The human victims of WMDs, we’ll see time and again, are held to a far higher standard of evidence than the algorithms themselves.
Highlight (yellow) – Introduction > Location xxxx
Added on Sunday, October 9, 2017
[WMDs are] opaque, unquestioned, and unaccountable, and they operate at a scale to sort, target or “optimize” millions of people. By confusing their findings with on-the-ground reality, most of them create pernicious WMD feedback loops.
Highlight (yellow) – Introduction > Location xxxx
Added on Sunday, October 9, 2017
The software is doing its job. The trouble is that profits end up serving as a stand-in, or proxy, for truth. We’ll see this dangerous confusion crop up again and again.
Highlight (yellow) – Introduction > Location xxxx
Added on Sunday, October 9, 2017
I’m reading this as part of Bryan Alexander’s online book club.
I’m definitely in for this general schedule and someone has already gifted me a copy of the book. Given the level of comments I suspect will come about, I’m putting aside the fact that this book wasn’t written for me as an audience and will read along with the crowd. I’m much more curious how Bryan’s audience will see and react to it. But I’m also interested in the functionality and semantics of an online book club run in such a distributed way.