Once again, we’re debating “platforming Nazis,” following the publication of an article in The Atlantic titled “Substack Has a Nazi Problem” and a campaign by some Substack writers to see certain offensive accounts given the boot. And once again, the side calling for more content suppression is short-sighted and wrong.
This is far from the first time we’ve been here. It seems every big social media platform has been pressured to ban bigoted or otherwise offensive accounts. And Substack, everyone’s favorite platform for pretending like it’s 2005 and we’re all bloggers again, has already come under fire multiple times for its moderation policies (or lack thereof).

Substack vs. Social Media
Substack differs from blogging systems of yore in some key ways: It’s set up primarily for emailed content (largely newsletters but also podcasts and videos), it has paid some writers directly at times, and it provides an easy way for any creator to monetize content by soliciting fees directly from their audience rather than running ads. But it’s also similar to predecessors like WordPress and Blogger in some key ways, and more similar to those platforms than to social media sites such as Instagram or X (formerly Twitter). For instance, unlike on algorithm-driven social media platforms, Substack readers opt into receiving posts from specific creators, are guaranteed to get emailed those posts, and will not receive random content to which they didn’t subscribe.
Substack is also similar to old-school blogging platforms in that it’s less heavy-handed with moderation. On Facebook, X, and other social media platforms, there are tons of rules about what you are and aren’t allowed to post, plus elaborate systems for reporting and moderating possibly verboten content.
Substack has some rules, but they’re pretty broad: nothing illegal, no inciting violence, no plagiarism, no spam, and no porn (nonpornographic nudity is OK, however).
Substack’s somewhat more laissez-faire attitude toward moderation irks people who think every tech company should be in the business of deciding which viewpoints are worth hearing, which businesses should exist, and which groups should be allowed to speak online. To this censorial crew, tech companies shouldn’t be neutral providers of services like web hosting, newsletter management, or payment processing. Rather, they must evaluate the moral worth of every single customer or user and deny services to those found lacking.

Nazis, Nazis, Everywhere
Uh, pretty easy just not to do business with Nazis, some might say. Which is actually… not true. At least not in 2023. Because while the term “Nazi” might have a fixed historical meaning, it’s bandied about pretty broadly these days. It gets used to describe people who (thankfully) aren’t actually antisemitic or advocating for any sort of ethnic cleansing. Donald Trump and his supporters get called Nazis. The folks at Planned Parenthood get called Nazis. People who don’t support Israel get called Nazis. All sorts of people get called Nazis for all sorts of reasons. Are tech companies supposed to bar all these people? And how much time should they put into investigating whether people are actual Nazis or just, like, Nazis by hyperbole? In the end, “not doing business with Nazis” would require a significant time investment and a lot of subjective judgment calls.
Uh, pretty easy just not to do business with people who might be mistaken for Nazis, some might counter. Perhaps. In theory. But in practice, we again run into the fact that the term is ridiculously overused. It would be more like “not doing business with anyone whom anyone describes as a Nazi” (a much wider group) or devoting a lot of the business to content moderation.
OK, but you can have toxic views even if you’re not literally a Nazi. Of course. But you have to admit that what we’re talking about now is no longer “doing business with Nazis.” It’s about doing business with anyone who holds bigoted views, offensive views, views that aren’t progressive, etc. That’s a much, much wider pool of people, requiring many more borderline judgment calls.
This doesn’t stop at Nazis, the Nazi-adjacent, and those with genuinely horrific ideas. Again, we’re going to run into the fact that sometimes people stating relatively commonplace viewpoints (that we need to deport more immigrants, for example, or that Israel shouldn’t exist, or that sex-selective abortions should be allowed, or whatever) are going to get looped in. Even if you abhor these viewpoints, they hardly seem like the kind of thing that shouldn’t be allowed to exist on popular platforms.

Slippery Slopes and Streisand Effects
Maybe you disagree with me here. Maybe you think anyone with even remotely bad opinions (as judged by you) should be banned. That’s an all too common position, frankly.
In Substack’s case, some of the “Nazis” in question really may be, or at least revere, actual Nazis. “At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics,” Jonathan M. Katz wrote in The Atlantic last month.
But you needn’t have sympathy for Nazis and other bigots to find restricting speech bad policy.
Here’s the thing: Once you start saying tech companies must make judgment calls based not just on countering illegal content but also on countering Bad Content, it opens the door to wanna-be censors of all sorts. Just look at how every time a social media platform expands its content moderation purview, a lot of the same folks who pushed for it (or at least those on the same side as those who pushed for it) wind up caught in its dragnet. Anything related to sex work will be one of the first targets, followed quickly by LGBT issues. Probably also anyone with not-so-nice opinions of cops. Those advocating ways around abortion bans. And so on. It’s been all too easy for the enemies of equality, social justice, and criminal justice reform to frame all of these things as harmful or dangerous. And once a tech company has accepted the role of general safety and morality arbiter, it’s a lot easier to get it involved again and again for lighter and lighter reasons.
Here’s the other thing: Nazis don’t magically become not-Nazis just because their content gets restricted or they get kicked off a particular platform. They simply congregate in private messaging groups or more remote corners of the internet instead. This makes it more difficult to keep tabs on them and to counter them. Getting kicked off platform after platform can also embolden those espousing these ideologies and their supporters, lending credence to their mythologies about being brave and persecuted truth-tellers and perhaps strengthening affinity among those otherwise loosely engaged.
There’s also the “Streisand effect” (so named because Barbra Streisand’s attempt to suppress a picture of the cliffside outside her house only drew enormous attention to an image that would otherwise have been little seen). The fact that Nazi accounts may exist on Substack doesn’t mean many people are reading them, nor does it mean that non-Nazis are being exposed to them. You know what is exposing us (and, alas, perhaps some sympathetic types, too) to these newsletters? The Atlantic article and the Substackers Against Nazis group continuing to draw attention to these accounts.

Substack’s Ethos
In their open letter, Substackers Against Nazis don’t explicitly call for any particular accounts to be banned. They’re just “asking a very simple question…: Why are you platforming and monetizing Nazis?” But the implication of the letter is that Substack should change its policy or the writers in question will walk. “This issue has already led to the announced departures of several prominent Substackers,” the letter reads. “Is platforming Nazis part of your vision of success? Let us know; from there we can each decide if this is still where we want to be.”
Substack executives haven’t publicly responded to critics this time. But they have laid out their moderation vision before, and it’s commendable.
“In most cases, we don’t think that censoring content is helpful, and in fact it often backfires,” Substack co-founders Chris Best, Hamish McKenzie, and Jairaj Sethi wrote in 2020, in response to calls for them to exclude relatively mainstream but nonprogressive voices. “Heavy-handed censorship can draw more attention to content than it otherwise would have enjoyed, and at the same time it can give the content creators a martyr complex that they can trade off for future gain.” They go on to reject those who would have Substack moderators serve as “moral police” and suggest that those who want “Substack but with more controls on speech” migrate to such a platform.
“There will always be many writers on Substack with whom we strongly disagree, and we will err on the side of respecting their right to express themselves, and readers’ right to decide for themselves what to read,” they wrote.
If the accounts Katz identified are making “credible threats of physical harm,” then they are in violation of Substack’s terms of service. If they’re merely spouting racist nonsense, then folks are free to ignore them, condemn them, or counter their words with their own. And they’re certainly free to stop writing on or reading Substack.
But if Substack’s past comments are any indication, the company won’t ban people for racist nonsense alone.

Keep Substack Decentralized
Plenty of (non-Nazi) Substack writers support this stance. “Substack shouldn’t decide what we read,” asserts Elle Griffin. “We should.” Griffin opposes the coalition aiming to make Substack “act more like other social media platforms.” Her post was co-signed by dozens of Substackers (and a whole lot more signed on after publication), including Edward Snowden, Richard Dawkins, Bari Weiss, Greg Lukianoff, Bridget Phetasy, Freddie deBoer, Meghan Daum, and Michael Moynihan.
“I, and the writers who have signed this post, are among those who hope Substack will not change its stance on freedom of expression, even against pressure to do so,” writes Griffin.
Their letter brings up another reason to oppose this pressure: It doesn’t accomplish its ostensible goal. It just ends up an endless game of Whac-A-Mole that fails to rid a platform of noxious voices while leading to the deplatforming of other content based on private and political agendas.
They also note that it’s extremely difficult to encounter extremist content on Substack if you don’t go looking for it:
The author of the recent Atlantic piece gave one way: actively go searching for it. He admits to finding “white-supremacist, neo-Confederate, and explicitly Nazi newsletters” by conducting a “search of the Substack website and of extremist Telegram channels.” But this only proves my point: If you want to find hate content on Substack, you have to go hunting for it on extremist third-party chat channels, because unlike other social media platforms, on Substack it won’t just show up in your feed.
And they point out that (as on blogs of yore) individual creators can moderate content as they see fit on their own accounts. So a newsletter writer can choose to allow or not to allow comments, can set their own commenting policies, and can delete comments at their own discretion. Some can opt to be safe spaces, some can opt to be free-for-alls, and some can opt for a stance in between.
I’m with Griffin and company here. Substack has nothing to gain from going the way of Facebook, X, et al.; the colossal drama those platforms have spawned and the mess they’ve become prove it. Substack is right to keep ignoring both the Nazis and those calling to kick them out.