Finding Excuses not to Listen
The constant threat of bad faith arguments makes us deaf to reason.
I don’t believe it is controversial to say that our ability to have reasonable and meaningful political conversations has largely broken down. Truisms such as Brandolini’s Law - or the Bullshit Asymmetry Principle - capture the difficulty of dealing with mis-, dis- and malinformation online. Producing damaging information is essentially free, and its impact is practically impossible to undo.
The book “Merchants of Doubt” laid out in detail how tobacco and oil companies intentionally sowed doubt to forestall regulation that would hurt their profits. As a strategy, this was highly effective because the aim was never to build a democratic consensus towards achieving a positive goal, but to prevent one. However, the increasing effectiveness of this approach online has, I believe, harmed the ability of many people to engage rationally on any controversial topic.
I think the widespread division, hostility, dogmatism and conspiracism we are witnessing now are as much as anything a response to an information space that is not merely polluted, but actively antagonistic to truth and reason.
The lesson that has been learned through years of painful and futile engagement with disinformation online is that it is a waste of time, and wasting your time is the intent. Establishing in detail why something is misleading is time-consuming and largely ineffective, because few will ever read a lengthy, nuanced dissection, and even if they do many will simply come away confused or disengaged - which again is a win for those aiming to sow doubt. Meanwhile, whatever you are debunking will continue to be repeated regardless of your efforts. You can’t hope to respond to everything, and even if you do it won’t work.
What does work, though, is appealing to people’s negative emotions. The more quickly you can dismiss some piece of information as “bad” in some way, the more quickly you can take it out of the conversation and stop it from spreading. Don’t waste your own time on it in any more detail; play to the audience and let everyone else know that they shouldn’t either. The most effective messaging strategies are not those which provide the fairest, most balanced and exhaustive coverage of an issue, but those which most quickly render alternative viewpoints unacceptable.
That climate change disinformation you’re about to read? It is right-wing propaganda produced by the Heartland Institute. That opinion piece you’re sharing? It came from someone tied to right-wing, Koch-funded thinktanks like the Global Warming Policy Foundation, so whatever you do don’t read it.
And while these instances were invariably true, what has resulted is a sort of race to the bottom to create the most knee-jerk, unconscious and emotive rejection of what “the other guy” is saying, on every political and social issue, all supported by the toxic reward structures of social media.
This paper gives some interesting experimental insight into the way group dynamics affect the formation of beliefs and opinions:
[…] we identify two major attractors of opinion: (i) the expert effect, induced by the presence of a highly confident individual in the group, and (ii) the majority effect, caused by the presence of a critical mass of laypeople sharing similar opinions.
So the most important factors in whether someone undecided ultimately comes to believe something to be true are the majority view and the confident view.
Neither of which depend on something actually being true.
Social media has added some important elements that warp our perception of both of these factors, notably:
Social media platforms target information at people they determine will engage with it. This means that if you are susceptible to being persuaded - whatever the reason, however true the information - you are more likely to see it.
The presence of social feedback for popularity (likes, retweets) creates an illusion of majority view. That is, if some piece of information is shown to you along with a nice counter saying 200,000 people like it, then you are inclined to believe it is popular and thus accept it as “true”.
When someone expresses views that do turn out to be unpopular, the social cost is limitless. The platforms simply will not stop directing people to a controversial post, no matter what having thousands of people send you abuse does to your psyche.
The social feedback for popularity creates a perverse incentive for performative declarations of authoritative-sounding views that will garner a large amount of positive interactions. So the two factors become self-reinforcing - something is confidently expressed because it is popular to do so, and popular because it is confidently expressed.
What all this means is that doubt and nuance come at a huge cost. That is, if someone questions your confident, popular statement on social media, then an even-more-confident statement that your interlocutor is wrong - or a bad person somehow - will likely garner more positive attention, and the popularity of your dismissal numerically demonstrates that it is correct. Doubling down, attacking critics, dismissing evidence, blinkered polarisation - all unhealthy behaviours that have become the norm on social media platforms, all driven by innocuous-seeming features such as likes and retweets.
It also means there is a huge advantage in being able to quickly signal that contrary opinions are really bad, because it prevents bystanders from risking reading them, discourages positive interaction, and encourages negative interaction.
On February 25th, The Skeptic published the following article by Aaron Rabinowitz:
Fears of creeping transhumanism give space for overt conspiracism in Gender Critical communities
Building on Aaron’s position (laid out in a long-form exchange with Andy Lewis) that the “Gender Critical” movement is a moral panic, this piece goes further and asserts that some of the underlying fears leave its members susceptible to - or culpable for - the spreading of conspiracy theories about billionaires secretly pushing a transhumanist “psy-op”.
Pointing out harmful conspiracy theories is certainly worthwhile, but this article goes further in accusing others - who are not directly spreading conspiracy theories - of laundering and mainstreaming them.
To add to every stigmatising label we’ve seen before in the toxic gender debate - TERF, transphobe, bigot, right-wing, far-right, anti-abortion, hateful, racist, dogwhistle etc - we now get “laundering conspiracy theories”.
Having established that a conspiracy theory is not just bad but also dangerous to contemplate for fear of getting swept up in it, highlighting innocuous-seeming opinions as a “laundered conspiracy theory” becomes a means of rapidly dismissing those too. The broader and more vague the criteria for what constitutes “laundering”, the more effective it is at stigmatising the sharing of those opinions.
Which is a shame, because dogwhistles and coded language and plausible deniability and conspiracy theories are definitely serious things, but then so are unjustified smears. Establishing the difference takes time and nuance, which are crowded out in today’s information space.
And despite the protestations in the article summary, such vague accusations render the whole subject of corporate influence and financing in the third sector totally off-limits, because it is so easy and effective to brand such talk a “laundered conspiracy theory”. How could you tell the difference between a critique of, say, The Guardian using funding from Open Society Foundations to run a series of trans-focused articles, and a laundered conspiracy theory that these were part of a “billionaire transhumanist psy-op”? No matter how far you tone down the language of the former, it could always be said that this was just an attempt to smuggle conspiracist elements into the conversation with the appearance of reasonability. You simply can’t account for the fact that the whole point of smuggling hate and conspiracy in coded language is that it is supposed to be indistinguishable from good faith and honest intent. The more reasonable the language, the more effort has clearly gone into obfuscating the conspiracism.
Which is itself textbook conspiracist thinking.
Since the publication of the article, Aaron has used this accusation to smear ideological critics in exactly the way other terms (like transphobe, TERF, etc) have been used. For example, in the exchange below, we see:
Defamation of a woman who tragically died in 2019 and isn’t here to defend herself from these smears - a woman who, one time in 2017, shared a Washington Times article, tweeting the title of that article
An accusation of laundering a conspiracy on that basis, using what at first glance appears to be a quote from her, but is actually a quote from someone else describing it in the worst possible way
A criticism of the person he is actually responding to for having quoted someone he is now accusing of laundering conspiracism
A challenge to all “GC folks” to do likewise
If you can be said (however tenuously) to have done one bad thing once, you are bad forever, and anyone who quotes you is bad, and anyone who won’t disown you is bad, and so on and so on. No-one can defend themselves against this without traversing multiple levels of vague guilt-by-association charges and challenging root accusations of conspiracism levelled against third parties which may or may not be arguable. But all of that is futile anyway, since any attempt to do so just looks like “laundering conspiracism”, so why bother? It ceases to be a charge worth taking seriously.
On the basis that this sort of ideological puritanism is so divisive and dehumanising, I reject such vague claims of laundered conspiracism as overbroad and counterproductive. What this approach ultimately achieves, rather than addressing a potentially serious issue, is to poison the well for genuine critique and make people resistant to calling out conspiracism when they see it.
And frankly, if Kevin Bacon turns out to be a conspiracy theorist, we are all doomed by this standard.