What a racist massacre tells us about free speech online

The idea is appealing in its simplicity: Social networks should take down posts only if they violate the law. Otherwise, they should remain.
This past weekend, Twitter and other major platforms were once again scrambling to take down posts and videos that were legal under the First Amendment but violated their policies. In this case, the videos showed a gunman, allegedly an 18-year-old white supremacist, slaughtering 10 people in a grocery store in a predominantly Black neighborhood of Buffalo. And the posts included the suspect’s racist screed, for which he seems to have intended the massacre to serve as an advertisement.
The Buffalo shooting video throws into stark relief the stakes involved in what too often feels like an abstract debate over online discourse and free speech.
Elon Musk, who is seeking to buy Twitter, has made past statements implying that, had he been in charge, Twitter would have let the videos and manifesto circulate, at least in the United States. After all, hate speech and depictions of graphic violence are not against the law here.
But Musk has been silent on the shooting, even as he has continued to tweet prolifically on other Twitter-related topics. Asked by The Washington Post via email whether he believed Twitter was wrong to remove videos of the shooting, he did not respond.
Social media’s role in the Buffalo mass shooting was not trivial. While the attack took place in the physical world, it was planned online, influenced by ideas that spread online, live-streamed online and motivated in part by the gunman’s apparent belief that his words and deeds would ultimately be shared by millions online. In that respect, it was modeled on the 2019 massacre in Christchurch, New Zealand, which the perpetrator live-streamed on Facebook.
In Buffalo, the gunman apparently opted to live-stream his attack on Twitch rather than Facebook in part because he knew Facebook had responded to Christchurch by improving its ability to quickly detect and shut down violent live streams. As it turned out, Twitch also acted quickly to take down his video — but not quickly enough to prevent someone from recording it, uploading it elsewhere, and then sharing links to it on Facebook, Twitter, and numerous other sites. (Twitch belongs to Amazon, whose founder Jeff Bezos owns The Washington Post.)
The Buffalo shooting video, and the suspect’s writings, remained findable online despite efforts by Facebook, Twitter and other big platforms to remove them, thanks in part to smaller, niche sites with looser content moderation. But those efforts dramatically reduced the number of people confronted by the graphic violence and bigoted propaganda in their feeds. (Both Facebook and Twitter removed the video and manifesto under policies they’ve designed specifically for violent attacks.)
In their earlier years, Facebook, YouTube and especially Twitter cast themselves idealistically as guardians of free expression around the world. This idealism seemed to dovetail neatly with their business model, allowing a relatively small cadre of engineers and designers to build systems that could host vast amounts of content without also requiring vast numbers of humans to review what users were posting.
Over the years, however, Facebook, Twitter, YouTube and others learned the hard way that in the absence of rules or enforcement, their products would not only play host to the worst of humanity, but systematically elevate it, thanks to algorithms and human social dynamics that tend to prioritize the most shocking, attention-grabbing ideas and imagery.
The hazard isn’t just moral: Without moderation, users’ feeds would constantly expose them to posts they find offensive, insulting or just plain gross, and many would eventually leave. And so it became obvious that tech companies needed to devote both artificial intelligence software and teams of human reviewers to detecting and taking down everything from pornography to scams to graphic violence.
In the view of Musk and a growing number of conservatives, however, the platforms have gone too far. They see a liberal bias in both the rules that the tech companies have set out and in how they enforce them. While these critics tend to support certain categories of content moderation, including efforts to prevent spam and bots, they’re upset by those that seem to have a political dimension, such as policies against misinformation and hate speech.
One response has been for conservatives to start their own social networks. Upstarts such as Rumble, Parler, Gab, and former president Donald Trump’s Truth Social have sprung up as alternatives to the big platforms, promising “free speech” for users. In practice, all have quickly found that an absence of moderation is disastrous, and many have adopted rules that look a lot like the ones they were trying to rebel against. So far, none has caught on with the mainstream.
Now there’s a push by conservatives and libertarians to force their visions of unfettered speech onto the established platforms — whether by regulating them or, in Musk’s case, trying to buy them.
A law that took effect in Texas last week makes it illegal for the largest social platforms to discriminate based on a user’s “viewpoint,” and other states are considering similar laws. The Texas attorney general’s office did not respond to a request for comment on whether Texans who posted the Buffalo shooter’s propaganda could sue tech companies under the law for taking it down.
Meanwhile, Musk has said that he believes “free speech” on social media is “that which matches the law,” and that moderating legal speech would be “contrary to the will of the people.”
Of course, the law is different in every country. In Russia, complying with the law would mean banning users from calling the war in Ukraine a war — a policy far more restrictive than Twitter’s existing stance. In fact, Twitter has been largely blocked in Russia for refusing to comply with the government’s censorship demands.
In the United States, however, the First Amendment protects a tremendous range of speech from government censorship. Constitutional scholars say that includes not only many types of spam, pornography, and misinformation, but hate speech and depictions of graphic violence. Which means that it is almost certainly legal to post online the Buffalo shooter’s grisly video, and probably also his virulently racist manifesto, depending on the context.
Whether one ought to post it is a different question — “an ethical one, not a legal one,” said Jameel Jaffer, director of the Knight First Amendment Institute at Columbia University. And so is the question of whether the platforms, which are private companies with their own First Amendment rights, ought to allow it on their services.
For the tech companies, one ethical argument against allowing the shooter’s video and manifesto to spread is that many users will no doubt find them upsetting or offensive. An even stronger one might be that, as the shooter himself acknowledged, the ability to spread his message far and wide was part of the motivation for the attack in the first place. So for platforms to host it risks not only amplifying the harm wrought in Buffalo, but tacitly incentivizing the next mass shooter.
Whether Musk himself has fully considered the implications of his own philosophy is unclear. He seemed definitive in his view that Twitter should allow most speech unless it violates the law. But soon after, in criticizing the site’s permanent suspension of Trump, Musk said that tweets that are “wrong or bad” should be “deleted or made invisible.” He did not clarify how that would square with his free speech absolutism.
The reality is that Big Tech companies, liberals, Musk, and conservatives all generally support freedom of speech. They simply disagree on where to draw the boundaries of what’s acceptable on large, public forums.