As treasury secretary, Tim Geithner criticized predatory lenders. Now the private equity firm he leads runs a company that mails high-rate loans to risky customers.
Despite the risks, however, Mariner Finance is eager to gain new customers. The company declined to say how many unsolicited checks it mails out, but because only about 1 percent of recipients cash them, the number is probably in the millions. The “loans-by-mail” program accounted for 28 percent of Mariner’s loans issued in the third quarter of 2017, according to Kroll. Mariner’s two largest competitors, by contrast, rarely use the tactic.
Incidentally, 1% is also roughly the response rate needed to make spam email and junk faxes financially viable. Coincidence?
Do businesses that rely on a low response rate of 1-2% and succeed have something in common? Could they all be considered predatory?
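The economics behind that ~1% threshold are easy to sketch. A back-of-the-envelope calculation in Python, with made-up cost and profit figures (none of these numbers come from the article):

```python
# Hypothetical direct-mail economics: a campaign breaks even when the
# response rate covers the per-piece cost. The $0.60 mailing cost and
# $100 expected profit per conversion below are illustrative assumptions.
def break_even_response_rate(cost_per_piece: float, profit_per_conversion: float) -> float:
    """Fraction of recipients who must respond for the mailing to break even."""
    return cost_per_piece / profit_per_conversion

rate = break_even_response_rate(0.60, 100.0)
print(f"{rate:.1%}")  # 0.6% -- below the ~1% cash rate the article cites
```

With numbers anywhere in this neighborhood, a 1% response rate is comfortably profitable, which is why the tactic persists despite annoying 99% of recipients.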
2016 was the year that the likes of Instagram and Twitter decided they knew better than you what content you wanted to see in your feeds.
use algorithms to decide on what individual users most wanted to see. Depending on our friendships and actions, the system might deliver old news, biased news, or news which had already been disproven.
2016 was the year of politicians telling us what we should believe, but it was also the year of machines telling us what we should want.
The only way to ensure your posts gain notice is to bombard the feed and hope that some stick, which risks compromising on quality and annoying people.
Sreekumar added: “Interestingly enough, the change was made after Instagram opened the doors to brands to run ads.” But even once they pay for visibility, a brand is still under pressure to remain engaging: “Playing devil’s advocate for a second here: All the money in the world cannot transform shitty content into good content.”
Artificially limiting reach of large accounts to then turn around and demand extortion money? It’s the social media mafia!
It disorients the reader, and distracts them with endless, timeless content.
New data shows the impact of Facebook’s pullback from an industry it had dominated (and distorted).
(Roose, who has since deleted his tweet as part of a routine purge of tweets older than 30 days, told me it was intended simply as an observation, not a full analysis of the trends.)
Another example of someone deleting their tweets at regular intervals. I’ve seen a few examples of this in academia.
It’s worth noting that there’s a difference between NewsWhip’s engagement stats, which are public, and referrals—that is, people actually clicking on stories and visiting publishers’ sites. The two have generally correlated, historically, and Facebook told me that its own data suggests that continues to be the case. But two social media professionals interviewed for this story, including one who consults for a number of different publications, told me that the engagement on Facebook posts has led to less relative traffic. This means publications could theoretically be seeing less ad revenue from Facebook even if their public engagement stats are holding steady.
From Slate’s perspective, a comment on a Slate story you see on Facebook is great, but it does nothing for the site’s bottom line.
(Remember when every news site published the piece, “What Time Is the Super Bowl?”)
This is a great example of where Google’s answer box simply provides the factual answer instead of requiring a click-through.
fickle audiences available on social platforms.
Here’s where feed readers without algorithms could provide more stability for news.
In data-hungry, tech-happy chain restaurants, customers are rating their servers using tabletop tablets, not realizing those ratings can put jobs at risk.
The lack of thought on the part of these large restaurant chains is simply deplorable. If presented with a tablet or app like this at a restaurant, I’m simply going to get up and leave. I’ll actively boycott the use of such aggressive nonsense.
And Ziosk could be a roundabout way for employers to discriminate against employees. Employers are legally restricted from evaluating employees based on gender, age, race, or appearance, according to Karen Levy, an assistant professor in the Department of Information Science at Cornell University — but nothing is stopping Ziosk users from doing that, even though those ratings can affect a worker’s pay or employment. “If you outsource that job to a consumer, you may be able to escape that,” she said.
“Customers who might discriminate against a certain class or group of workers can use the system to leave negative comments that would affect the workers,” said Cornell’s Ajunwa. She compared the restaurant system to student evaluations of professors, which determine the trajectory of their careers, and tend to be biased against women.
Having low scores posted for all coworkers to see was “very embarrassing,” said Steph Buja, who recently left her job as a server at a Chili’s in Massachusetts. But that’s not the only way customers — perhaps inadvertently — use the tablets to humiliate waitstaff. One diner at Buja’s Chili’s used Ziosk to comment, “our waitress has small boobs.” According to other servers working in Ziosk environments, this isn’t a rare occurrence.
This is outright sexual harassment and appears to be actively creating a hostile work environment. I could easily see a class action against large chains and/or against the app maker themselves. Aggregating the data and using it in a smart way is fine, but I suspect no one in the chain is actively thinking about what they’re doing; they’re just selling an idea down the line. The maker of the app should be doing a far better job of filtering this kind of crap out, aggregating the data more intelligently, and providing better output, since the major chains they’re selling it to don’t seem capable of processing and disseminating what they’re collecting.
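Even crude screening would catch the example above before it reached a manager’s report. A minimal sketch in Python of the kind of server-side filtering a vendor could do on free-text comments; the blocklist, patterns, and function names are my own illustrative assumptions, not anything Ziosk actually ships:

```python
# Withhold comments about workers' bodies or appearance from reports.
# The patterns below are a tiny illustrative blocklist; a real system
# would need a much broader classifier plus human review.
import re

BLOCKED_PATTERNS = [
    r"\bboobs?\b",           # body commentary
    r"\b(hot|sexy|cute)\b",  # appearance commentary
]

def filter_comment(comment: str):
    """Return the comment if acceptable, else None (withheld from reports)."""
    lowered = comment.lower()
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, lowered):
            return None
    return comment

print(filter_comment("Service was quick and friendly"))  # passes through
print(filter_comment("our waitress has small boobs"))    # None: withheld
```

A keyword blocklist alone is obviously insufficient, but the point stands: the vendor controls the pipeline between diner and employer, and filtering there costs almost nothing.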
I am about to criticize and show examples from a copyright poster (or, for you new-fangled kids, an infographic) I received in the mail today from Turnitin, the anti-plagiarism company. Fair dealin…
Clint you’re dead on in your analysis here. Some of these things are definitely not plagiarism. Worse, they seem to be resorting to fearmongering.
I’m hoping that the marketing department of the company was just trying to round out a list of 10 things for their handy, but improper, infographic. Shame on them for spreading bad information in hopes that increased fear will help to sell their product.
To help fight poor information and to promote the raw power of remixing and extending, I’ll reference this excellent video from Matt Ridley:
This may well be the most comprehensive article I’ve read this year so far on the topic of the ethical responsibility of designers. Its author, Cabe, discusses “weaponised design”: “electronic systems whose designs either do not account for abusive application or whose user experiences directly empower attackers”.
Powerful tools are now available to anyone who wants to look for a DNA match, which has troubling privacy implications.
I find the mechanics relating to privacy in this case to be extremely similar to Facebook’s leak of data via Cambridge Analytica. Something crucial to your personal identity can be accidentally leaked or made discoverable to others by the actions of your closest family members.
Prior work established the benefits of server-recorded user engagement measures (e.g. clickthrough rates) for improving the results of search engines and recommendation systems. Client-side measures of post-click behavior received relatively little attention despite the fact that publishers now have the ability to measure how millions of people interact with their content at a fine resolution using client-side logging. In this study, we examine patterns of user engagement in a large, client-side log dataset of over 7.7 million page views (including both mobile and non-mobile devices) of 66,821 news articles from seven popular news publishers. For each page view we use three summary statistics: dwell time, the furthest position the user reached on the page, and the amount of interaction with the page through any form of input (touch, mouse move, etc.). We show that simple transformations on these summary statistics reveal six prototypical modes of reading that range from scanning to extensive reading and persist across sites. Furthermore, we develop a novel measure of information gain in text to capture the development of ideas within the body of articles and investigate how information gain relates to the engagement with articles. Finally, we show that our new measure of information gain is particularly useful for predicting reading of news articles before publication, and that the measure captures unique information not available otherwise.
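The abstract's setup is concrete enough to sketch: each page view reduces to three client-side summary statistics, which are then grouped into reading modes. A minimal Python illustration; the fixed thresholds and mode names here are my own assumptions for demonstration — the paper derives its six modes from transformations of these statistics, not hard cutoffs like these:

```python
# Reduce each page view to the paper's three summary statistics and
# bucket them into coarse reading modes. Thresholds are illustrative
# assumptions, not values from the paper.
from dataclasses import dataclass

@dataclass
class PageView:
    dwell_seconds: float      # time spent on the page
    max_scroll_depth: float   # furthest position reached, 0.0 to 1.0
    interactions: int         # touch / mouse-move / other input events

def classify(view: PageView) -> str:
    if view.dwell_seconds < 10 and view.max_scroll_depth < 0.25:
        return "bounce"
    if view.max_scroll_depth < 0.5:
        return "scan"
    if view.dwell_seconds >= 60:
        return "extensive read"
    return "partial read"

print(classify(PageView(5, 0.1, 2)))     # bounce
print(classify(PageView(120, 0.9, 40)))  # extensive read
```

Even this crude version shows why client-side logs beat server-side clickthroughs: a click looks identical whether the visitor bounced in five seconds or read the whole piece.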
Bookmarked to read as a result of reading.
New metrics specifically for news articles.
I love that there’s research going on in this area, and it portends some potentially great things for reading, but the devil’s advocate in me can also see a lot of adtech people salivating over the potential dark patterns lurking in such research. I can almost guarantee that Facebook is salivating over this, though to be honest, they’ve really pioneered the field, haven’t they? Just in a much smaller area of use. Of course I’m also curious whether they have done, or are planning, any research into how people read content on social media.
I wonder what it would look and feel like to take each of these modalities and apply them individually for long periods of time to everything one reads? Or to use them in rotation regardless of the subject being read? Or other permutations? I suppose in general I like to read how I like to read, but now I’m going to be more conscious of what I’m reading and how I’m doing it.
Verified accounts turning themselves into bots, millions of fake likes and comments, a dirty world of engagement trading inside Telegram groups. Welcome to the secret underbelly of Instagram.
Eventually there will be so much noise on these platforms that they will cease to have any meaning for the business purposes that people are intending to use them for.
Worse, they’re giving away their login credentials to outsiders to do this.
Seen at a Harbin restaurant: swinging cradle for your phone, I’m told to cheat the “10k steps/day” test & qualify for health insurance discounts, presumably while you relax, eat & drink more, or have another cigarette. pic.twitter.com/LV0leTduAU
— 大山 Dashan (@akaDashan) April 28, 2018
Education and publishing giant Pearson is drawing criticism after using its software to experiment on over 9,000 math and computer science students across the country. In a paper presented Wednesday at the American Association of Educational Research, Pearson researchers revealed that they tested the effects of encouraging messages on students that used the MyLab Programming educational software during 2017's spring semester.