Substack says it will remove Nazi publications from the platform
Nazi content violates rules against incitement to violence, the company says
Substack is removing some publications that express support for Nazis, the company said today. The company said this did not represent a reversal of its previous stance, but rather the result of reconsidering how it interprets its existing policies.
As part of the move, the company is also terminating the accounts of several publications that endorse Nazi ideology and that Platformer flagged to the company for review last week.
The company will not change the text of its content policy, it says, and its new policy interpretation will not include proactively removing content related to neo-Nazis and far-right extremism. But Substack will continue to remove any material that includes “credible threats of physical harm,” it said.
In a statement, Substack’s co-founders told Platformer:
If and when we become aware of other content that violates our guidelines, we will take appropriate action.
Relatedly, we’ve heard your feedback about Substack’s content moderation approach, and we understand your concerns and those of some other writers on the platform. We sincerely regret how this controversy has affected writers on Substack.
We appreciate the input from everyone. Writers are the backbone of Substack and we take this feedback very seriously. We are actively working on more reporting tools that can be used to flag content that potentially violates our guidelines, and we will continue working on tools for user moderation so Substack users can set and refine the terms of their own experience on the platform.
Substack’s statement comes after weeks of controversy related to the company’s mostly laissez-faire approach to content moderation.
In November, Jonathan M. Katz published an article in The Atlantic titled “Substack Has a Nazi Problem.” In it, he reported that he had identified at least 16 newsletters that depicted overt Nazi symbols, and dozens more devoted to far-right extremism.
Last month, 247 Substack writers issued an open letter asking the company to clarify its policies. The company responded on December 21, when Substack co-founder Hamish McKenzie published a blog post arguing that “censorship” of Nazi publications would only make extremism worse.
McKenzie also wrote that “we don’t like Nazis either” and said Substack wished “no one held those views.” But “we don’t think that censorship (including through demonetizing publications) makes the problem go away,” he wrote. “In fact, it makes it worse. We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power.”
The statement seemed to be at odds with Substack’s published content guidelines, which state that “Substack cannot be used to publish content or fund initiatives that incite violence based on protected classes.”
In its aftermath, several publications left the platform. Others, including Platformer, said they would leave if the company did not remove pro-Nazi publications.
Meanwhile, more than 100 other Substack writers, including prominent names like Bari Weiss and Richard Dawkins, signed a post from writer Elle Griffin calling on Substack to continue with its mostly hands-off approach to platform-level moderation.
From its inception, McKenzie and Substack co-founder Chris Best have touted freedom of speech as one of Substack’s core virtues. As a result, the platform has been embraced by fringe thinkers, who have built large businesses while promoting anti-vaccine pseudo-science, Covid conspiracy theories and other material that is generally restricted on mainstream social networks.
Substack has defended its approach by arguing that it is built differently from social networks, which optimize for engagement rather than subscription revenue. The company says it employs a “decentralized” approach to moderation that allows individual readers to decide which writers they want to subscribe to, and lets writers determine which comments they will allow and which blogs they will recommend.
(Incidentally, this approach means that you can’t currently report comments directly to Substack: only writers receive your reports. Platformer has reviewed several cases of violent material and death threats in Substack comments.)
At the same time, over the past couple of years Substack has come to more closely resemble the social networks it often criticizes. Each week, Substack sends users a personalized, algorithmically ranked digest of posts from writers they don’t yet follow — a feature that can help fringe publications build larger audiences and make more money than they would otherwise.
And last year Substack launched Notes, a text-based social feed similar to Twitter that also surfaces personalized content in a ranked feed. Notes can also give heightened visibility and free promotion to extremists.
The question now is whether taking action against some pro-Nazi accounts will shift the perception that Substack is a home for the most extreme ideologies, and prevent an exodus among writers who prefer more aggressive content moderation.
In recent weeks, Platformer has worked with other journalists and extremism researchers in an effort to understand the scope of far-right content on the platform. We’ve now reviewed dozens of active, monetized publications that advance violent ideologies, including anti-Semitism and the great replacement theory.
Substack has argued that extremist publications represent only a small fraction of newsletters on the platform, and as far as we can tell this is true. At the same time, the site’s recommendations and social networking infrastructure are designed to enable individual publications to grow quickly. And the company’s outspoken embrace of fringe viewpoints all but ensures that the number of extremist publications on the platform will grow.
The company is now in a difficult position. Having branded itself as a bastion of free speech, it knows that any change to its content policy risks driving away writers who chose the platform in part for its rejection of aggressive content moderation. At the same time, other publications — Platformer included — have lost scores of paying customers who do not want to contribute to a platform that they see as advancing the cause of extremism.
In the coming days, explicitly Nazi publications on Substack are slated to disappear. But the deeper divide within its user base over content moderation will remain. The next time the company has a content moderation controversy — and it will — expect these tensions to surface again.
What this means for Platformer
Substack’s removal of Nazi publications resolves the primary concern we identified here last week. At the same time, as noted above, this issue has raised concerns that go beyond the small group of publications that violate the company’s existing policy guidelines.
As we think through our next steps, we want to hear from you. If you have unsubscribed from Platformer or other publications over the Nazi issue, does the company’s new stance resolve your concerns? Or would it take more? If so, what?
Paid subscribers can comment below; everyone is welcome to email us with their thoughts.
Sponsored
Investors are focused on these metrics.
Startups should take notice.
It takes more than a great idea to make your ambitions real. That’s why Mercury goes beyond banking* to share the knowledge and network startups need to succeed. In this article, they shed light on the key metrics investors have their sights set on right now.
Even in today’s challenging market, investments in early-stage startups are still being made. That’s because VCs and investors haven’t stopped looking for opportunities — they’ve simply shifted what they are searching for. By understanding investors’ key metrics, early-stage startups can laser-focus their next investor pitch to land the funding necessary to take their company to the next stage.
Read the full article to learn how investors think and how you can lean into these numbers today.
*Mercury is a financial technology company, not a bank. Banking services provided by Choice Financial Group and Evolve Bank & Trust®; Members FDIC.
Platformer has been a Mercury customer since 2020. This sponsorship gets us 5% closer to our goal of hiring a reporter in 2024.
Governing
The US Department of Justice is in the late stages of investigating Apple’s strategies to protect its iPhone dominance. A “sweeping” antitrust case may be forthcoming. (David McCabe and Tripp Mickle / The New York Times)
YouTube updated its harassment policies to prohibit deepfakes that depict minors or crime victims describing the violence they experienced. Catching up to a disturbing trend on video platforms. (Mia Sato / The Verge)
OpenAI published its rebuttal to the New York Times lawsuit, saying the paper’s claims are “without merit.” The company pushes back particularly hard on claims that ChatGPT routinely regurgitates copyrighted content, calling such regurgitation a rare bug. (OpenAI)
Arizona is using AI to test some worst-case Election Day scenarios in an attempt to dispel AI-generated misinformation. Here’s wishing AI luck in its fight against AI. (NPR Morning Edition)
The Supreme Court rejected a request by X to consider whether the platform can reveal how often federal agents ask for user information as part of their national security investigations. (Nate Raymond / Reuters)
The big three cloud computing providers — Amazon, Microsoft and Google — offer their users only narrow legal indemnities against AI copyright claims, legal experts say. (Camilla Hodgson / Financial Times)
Also: AI image generators from companies like Midjourney and OpenAI should be limited to properly licensed content to avoid plagiarism, these authors argue. (Gary Marcus and Reid Southen / IEEE Spectrum)
European antitrust head Margrethe Vestager is set to meet with the chief executives of Apple, Alphabet, Broadcom and Nvidia this week, along with executives from OpenAI. (Foo Yun Chee / Reuters)
The European Commission is urging Google and other tech companies to promote stories by Belarusian journalists opposing the country’s regime, after complaints that pro-regime stories are being pushed by algorithms. (Raphael Minder / Financial Times)
Amazon, Alphabet and Microsoft are among the tech giants boosting their Saudi Arabian presence, after authorities there said they would stop giving contracts to companies that lack a regional headquarters. Gross! (Matthew Martin and Fahad Abuljadayel / Bloomberg)
Industry
Microsoft executive Dee Templeton has reportedly joined OpenAI’s board as a nonvoting observer. (Dina Bass and Rachel Metz / Bloomberg)
Threads is taking steps to address “low-quality recommendations,” according to Instagram head Adam Mosseri. The statement came after users began seeing lots of anti-LGBT posts in their recommendations. (Jay Peters / The Verge)
The Twitch “clips” feature — one of the least moderated aspects of the site — is being used to record and share child sexual abuse material, an analysis found. (Cecilia D’Anastasio / Bloomberg)
How the desperate race for Google Search traffic shapes websites and impacts content, often at the expense of real people. An excellent look at how Google warps the entire shape of the web with its content guidelines. (Mia Sato / The Verge)
TikTok is partnering with Peloton to create the #TikTokFitness hub, which includes live Peloton classes, class clips and original instructor series. (Dean Seal / The Wall Street Journal)
X continues to have a crypto ad scam problem, with the number of malicious ads increasing rapidly over the past month, according to researchers. (Lawrence Abrams / Bleeping Computer)
Leaders at Elon Musk’s companies are reportedly concerned about Musk’s use of illegal drugs like LSD, cocaine and ecstasy. (Emily Glazer and Kirsten Grind / The Wall Street Journal)
Google seems to be working on Bard Advanced, an upgraded version of Bard that will be available through paid subscriptions to Google One. (Emma Roth / The Verge)
Apple’s Vision Pro VR headset is set to ship out to customers in February, with pre-orders beginning on Jan. 19. (Emma Roth / The Verge)
Spotify’s shift toward AI recommendations means that once-influential, human-curated playlists like Rap Caviar are losing their power. They’re being streamed less, too. (Ashley Carman / Bloomberg)
Midjourney v6, the newest image synthesis model by the AI image company, has massive improvements in detail and scenery, but its images still suffer from high contrast and saturation. Plus, it still scrapes artwork by artists without their explicit consent. (Benj Edwards / Ars Technica)
Carta’s CEO says the startup is investigating potential misuse of customer data, following allegations that it had used its knowledge of investors to promote a Linear share sale. The story everyone in Silicon Valley was talking about over the weekend. (Anne VanderMey / Bloomberg)
AI hallucinations should be appreciated for their creativity, this author argues. They also highlight the importance of human fact-checking and decision-making. (Steven Levy / WIRED)
LLMs like ChatGPT perform better when assigned gender-neutral roles, researchers found. (Sisi Wei / The Markup)
Those good posts
For more good posts every day, follow Casey’s Instagram stories.
(Link)
(Link)
(Link)
Talk to us
Send us tips, comments, questions, and your feedback on Substack’s Nazi enforcement: [email protected] and [email protected].
Thanks for the relentless pursuit on this topic, Zoe and Casey. I wouldn't say it resolves it - especially as the comment issue goes in a circle that doesn't get escalated. To me, it seems as though their hands got caught in the cookie jar so they're belatedly making more of an effort. I sense this will get worse because the founders don't seem as though they want to be dealing with this side of the business they have started.
I have to be honest here, it feels like Substack is just reacting to the heat of the moment and will continue ignoring any new Nazi newsletters that pop up after the fact. I'm not sure if I personally want to continue providing revenue for a company that feels so insincere in their response.