Welcome to the Indignation Machine
If you're not outraged, you're not paying for Wi-Fi.
Have you heard of the Euphemism Treadmill? It’s the idea that once a euphemism loses its power of gentle obfuscation, it becomes unacceptable to use in polite society, necessitating a new inoffensive euphemism to take its place. Inexorably, the neutral replacement acquires the same negative connotation as its predecessor, and yet another neutral term must be adopted in turn.
Here’s an innocuous example from Wikipedia: “In the 20th century, where the words lavatory or toilet were deemed inappropriate, they were sometimes replaced with bathroom or water closet, which in turn became restroom, W.C., or washroom.”
The Euphemism Treadmill is not a bad thing. Most examples deal with how we talk about groups of people. Many words now considered offensive slurs and deemed unacceptable were at first adopted to avoid giving offense. Despite evidence to the contrary in every online comments section ever, in their personal (i.e., real-life) interactions the vast majority of people truly do not wish to offend anyone around them, so the replacement terms catch on quickly. Until, of course, they begin to be misused and it comes time to discard them as well. On and on the treadmill goes.
Now it looks like we also have an Indignation Treadmill — a self-perpetuating self-righteousness machine.
We all know it as “The Internet.”
Forget Bitcoin. Is there any greater online currency than righteous outrage? The internet’s defining utility in the 21st century may be the power it grants individuals to one-up each other in expressions of sanctimony. Whether that sanctimony is earnest or merely theatrical posturing doesn’t even matter. If you’re not witnessing outrage (or expressing it yourself), you’re probably just offline.
A textbook example of the Indignation Treadmill was the recent hoopla around the social media app Yo. The only purpose of Yo is to send a message to friends, the content of each message being the same every time: “Yo.” That’s it.
Apparently, Yo began as a parody of social media itself (hint: it was released to the App Store on April Fools’ Day). The app’s creators were in on the joke all along: they intended to make something as superficial and vacuous (the more cynical among us would say timely) as possible. The developers’ irony was too subtle to stop over a million users from downloading the app and using it enthusiastically. Sure enough, Yo’s sudden popularity offered yet another excuse for haters to decry the rampant asininity of our times.
It’s a familiar story: Social media is a dumb, frivolous, brain-rotting emblem of a civilization in decline!
What’s interesting is that the indignation took on a life of its own. The next phase of the treadmill saw various people in the tech community mocking the mockers for taking the time to mock Yo at all. If Yo is so inconsequential and stupid, the argument goes, then why dedicate any serious thought to it? Surely thou dost protest too much?
“Making fun of an app called Yo in 140 characters or less is a nice way to tell the world you lack the cognitive comprehension of irony.” — ☁️David Ulevitch☁️ (@davidu), June 19, 2014
Some went further: Not only is Yo harmless; it’s also brilliant. In fact, these meta-contrarians claimed, Yo is an ingenious example of one-bit communication.
Naturally, after this display, it came time to mock the mockers of the mockers of Yo.
Then came this very blog post by your faithful commentator, decrying it all! (The treadmill may stop here, since very few will read this.)
Full disclosure: I’ve invested $350,000 in unmarked nonsequential Benjamins in Yo. OK, not really. I doubt I’ll ever download it or use it or buy stock in it when it goes public, but I have no problem with Yo. Even if you are sympathetic to the general hypothesis of civilizational decay (as I am on some days), a flash-in-the-pan novelty app should hardly register in that discussion.
The main lesson of the Yo phenomenon has nothing to do with Yo, and everything to do with the sad fact that it’s super easy to get people all riled up over nothing these days.
This Week’s Two Minutes Hate: Facebook
Nobody’s paying much attention to Yo anymore (live by the fickleness, die by the fickleness, yo).
Instead our various feeds are filled with outrage directed at a different target: Facebook. (Facebook is a social media company based in Menlo Park, California.)
The reason behind this latest run on the ol’ Indignation Treadmill? Facebook conducted a study. Perhaps you’ve heard about it.
The problem is that, as with most stories today (particularly reportage on scientific research), the “facts” of the study were repeated so tendentiously that the retellings’ biases and inaccuracies became everything most people know about it. If you think Facebook “secretly faked posts in hundreds of thousands of its users’ feeds to manipulate their emotions,” then the BS made its way to you unscathed.
In his post titled “In Defense of Facebook,” Tal Yarkoni, Director of the Psychoinformatics Lab (!) and a research associate in the department of psychology at the University of Texas at Austin, meticulously makes the case for why the endlessly hyped original Atlantic piece, despite all the online rage it elicited, is so unimpressive. He makes several good arguments, but here are just four.
1. The measurable effect on the unwitting participants in the study was statistically negligible:
“The largest effect size reported had a Cohen’s d of 0.02 — meaning that eliminating a substantial proportion of emotional content from a user’s feed had the monumental effect of shifting that user’s own emotional word use by two hundredths of a standard deviation.” [Emphasis in the original.]
2. The minuscule size of the measured effects — remember, the data Facebook released show that users’ verbally expressed emotions barely changed — does not itself prove that those expressing the emotions actually felt any differently. This critique stands even apart from the post hoc ergo propter hoc logical fallacy, which Dr. Yarkoni doesn’t mention.
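To get a feel for how small a Cohen’s d of 0.02 really is, here is a minimal sketch of the statistic itself. The sample data and the standard-deviation figure below are made up purely for illustration; they are not taken from the Facebook study:

```python
import math
import statistics

def cohens_d(a, b):
    """Cohen's d: the difference between two sample means,
    expressed in units of their pooled standard deviation."""
    n1, n2 = len(a), len(b)
    v1, v2 = statistics.variance(a), statistics.variance(b)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

# Two hypothetical groups whose means differ by one full pooled SD:
print(cohens_d([2, 3, 4], [1, 2, 3]))  # 1.0

# The study's largest reported d was 0.02. If emotional-word rates had a
# standard deviation of, say, 5 words per 1,000 (a made-up figure), that
# d would amount to a shift of just 0.02 * 5 = 0.1 words per 1,000.
print(0.02 * 5)  # 0.1
```

In other words, even under generous assumptions, the headline effect is a rounding error in any individual user’s feed.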
3. Facebook didn’t add a whole bunch of emotional triggers to people’s feeds in order to psychologically mess with them. They didn’t add a single one:
“The suggestion that Facebook ‘manipulated users’ emotions’ is quite misleading. Framing it that way tacitly implies that Facebook must have done something specifically designed to induce a different emotional experience in its users. In reality, for users assigned to the experimental condition, Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. Let me repeat that: Facebook removed emotional messages for some users. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions. Now, given that a large amount of content on Facebook is already highly emotional in nature — think about all the people sharing their news of births, deaths, break-ups, etc. — it seems very hard to argue that Facebook would have been introducing new risks to its users even if it had presented some of them with more emotional content. But it’s certainly not credible to suggest that replacing 10%–90% of emotional content with neutral content constitutes a potentially dangerous manipulation of people’s subjective experience.” (Emphasis in original.)
4. There’s one absolutely crucial point that shows how absurd the anger toward Facebook has been: there has never been an experience on Facebook that was not designed, manipulated, swayed, or controlled in some fashion. In Dr. Yarkoni’s words, it is “a completely contrived environment”:
“When you log onto Facebook, you’re not seeing a comprehensive list of everything your friends are doing, nor are you seeing a completely random subset of events.”
What do the outraged people think is happening each time they check Facebook? I don’t see how anyone could claim that it’s inherently wrong for Facebook to study and influence the experience of their users. Why would anybody create something like Facebook if they weren’t dead-set on doing that? Who would have invested in Facebook without feeling assured that they would do this continuously?
This is the crux of the matter. What is Facebook? The users who feel betrayed seem to think it’s some value-neutral web browser for a social mini-internet. If it feels that way, it’s only because Facebook has been so staggeringly good at what it set out to do.
Maybe Facebook is being punished for its own success. It is so good at connecting and entertaining and distracting and intriguing people that those very people think Facebook belongs to them. To its users, it’s not a publicly traded company with investors to heed. It’s not a service that has been developed and endlessly fine-tuned, at great cost and through individual toil and ingenuity, to produce maximum enjoyment. It is an entitlement, a right, something subordinate to its users, an extension of their conscious experience that is to be altered only with their express permission. To expose the illogic of such a sentiment, one only has to point out that 1) Facebook has changed constantly over the last decade and 2) it currently has over a billion users. How could they make every person 100% happy? Obviously they can’t, so they try to make as many people as happy as they can, which is why they run studies like this one and are right to.
That glosses over the fact that within the bounds of the law, what is “right” for the company is up to those who run it, not its users, who are free to part ways the minute they disagree and can no longer countenance some perceived offense to their sensibilities. Luckily, what is right for a company is more often than not right for its customers too, who often have the biggest say (thanks to studies like this one).
This isn’t to condone any or all actions by Facebook (or Google, etc.). It is in Facebook’s best interest to respond to its users and make them happy, and it’s reasonable to let Facebook know when it’s crossed a line (e.g., allowing military intelligence to latch on to their formidable data stores like a leech). Perhaps the revelations about the recent study provide reason enough for many to quit Facebook, but any such decisions should at least be made knowing the truth of how the research was conducted and why it’s not nearly as nefarious as it’s being portrayed.
Individuals draw their own lines about what they are willing to tolerate, and it is always possible to walk away. If Google is too scary, ditch Chrome and download Opera, which is supposed to be less snoopy. If you’re uncomfortable with Facebook looking into how you use Facebook so that they can make a better Facebook for you, you can stick with Snapchat or creep around LinkedIn or something.
Trigger Warnings: Parental Advisory Requested
It’s not just silly apps and perceived breaches of trust by tech giants that induce paroxysms of outrage. The perfect symbol of the cultural pathology behind the Indignation Treadmill might be the latest fad on college campuses: trigger warnings.
There was a time when it was politicians and other busybodies who led the charge to protect innocent, impressionable minds from “dangerous” content. Tipper Gore had it in for 2 Live Crew and Twisted Sister, and although she and her ilk were victorious in getting the music industry to include parental advisory labels on music with explicit content, they were also heavily mocked.
How quaint it all seems now. Today, instead of bristling at the condescension that they need protecting, college students themselves demand that warnings be placed on their book covers.
Jenny Jarvie writes in The New Republic, “The trigger warning signals not only the growing precautionary approach to words and ideas in the university, but a wider cultural hypersensitivity to harm and a paranoia about giving offense.”
“Hypersensitivity” is exactly right. In their heart of hearts, I doubt these students would value the actual imposition of their proposed trigger warnings a fraction as much as what drives them to demand them in the first place: the chance to smugly perform some first-rate self-righteousness theater. Trigger warnings are useful only in that, by calling for them, the complainant draws attention to his or her exquisitely developed sensitivity, which for a certain mindset now equates with sophistication and is in effect the highest virtue there is.
As Jarvie goes on to note, the Indignation Treadmill kicked in again in no time, with trigger warnings garnering plenty of ridicule (Jonah Goldberg in the L.A. Times: “Trigger warning: I am going to make fun of trigger warnings”), and the Thought Police responding with more outrage at the mockers’ audacity.
I suppose that by writing this I can be accused of being part of the Indignation Treadmill. It’s a fair point, though I don’t feel outraged, and I would never dream of silencing other people’s criticisms. Besides, outrage is one of the most pleasing emotions to indulge individually, and collective outrage can have tremendous value in effecting real and worthy societal change.
It’s just that some sense of proportion would be nice. We can’t all attach the same value to everything, and the world would be intolerably boring if we did, but we can reserve our outrage for things that deserve it. Surely it’s all too easy to think of examples. (To nominate just one for which outrage seems entirely appropriate: elephants are on pace to be slaughtered to extinction within my lifetime.)
An app called Yo (or seeing people making fun of Yo) shouldn’t be enough to affect us emotionally. Facebook doesn’t owe us anything out of the goodness of its algorithmic heart. And if any college student finds that Hamlet makes them not just uncomfortable but so uncomfortable that they feel it appropriate to protest it rather than calmly return it to the shelf, maybe that student is not ready for university — or the internet.
Posted on July 2, 2014.