Elon Musk has failed essential tests for moderating objectionable content on X, better known as Twitter. We’ve known this for some time now, but it became glaringly obvious after Hamas’ attack on Israel last weekend and during Israel’s subsequent and ongoing military response. Horrifically violent images — some real, some faked and some mislabeled from entirely different years and places — have been boosted by the platform’s blue-check system, which gives priority to content posted by anyone willing to pay the $8 monthly subscription.

That disinformation about the conflict has overtaken the site is a natural consequence of Musk’s decision to gut the company’s Trust and Safety team, which currently has no leader, and to limit the avenues for users to flag content that violates what few content-moderation rules remain. Without teams in place to review potentially violative content, posts frequently go unchecked, with no analysis of their network effects. Musk’s decision to give special prominence to subscribers’ posts — without adequately verifying users’ identities — has incentivized all sorts of grifters, conspiracy theorists and propagandists to drown the platform in lies about the conflict.

Within hours of the attack, videos of Hamas fighters shooting people and destroying buildings and homes flooded the platform. In one instance, footage lifted from a video game was passed off as real. Some Twitter users, including those with blue checks, posted images from other conflicts in other parts of the world and claimed they were taken in Israel or the Gaza Strip. Others exploited the crisis to further their own political agendas, falsely claiming the Biden administration helped bankroll the Hamas attack.

The last several days have been nothing short of terrifying. In the frenzy to share updates on the crisis, users are amplifying an unprecedented number of videos, images and news articles without vetting them for accuracy, making it extremely difficult for the public to separate fact from fiction. With a half billion monthly active users – many of whom are seeking speedy and reliable information during the crisis, only to be met with a deluge of disinformation – the platform’s ability to confront and mitigate these forces is all the more critical.

In times of crisis or calm, social media has typically been a source of real-time information, and people should be able to rely on the platforms they use to provide accurate and even lifesaving information. Big Tech companies that are weighing whether to follow Musk’s reckless path — by abandoning the kind of content moderation that is even more essential during wartime — should look to Twitter as an example of what not to do.

How Musk Broke Content Moderation on X (Twitter)

When Musk took over Twitter nearly a year ago, many suspected that his reckless decisions would affect real-world events and undermine users’ ability to get reliable information during crises. There were reasons for concern right away: Use of the N-word surged immediately after Musk’s purchase last October, as bad actors tested the limits of the platform’s moderation systems.

One of Musk’s changes to Twitter was to pay a bounty to subscribers who generate “views” for advertisers and engage others on the platform. At the same time, he repealed many of the platform policies that prevented people from spreading clickbait and disinformation. The end result is a social-media network that apparently pays bad actors for amplifying lies and violent imagery.

In addition to laying off most of the staff charged with vetting disinformation, Musk has pushed the burden of fact checking onto Twitter users. Community Notes, a Musk-favored tool that allows platform users to provide context to inaccurate posts, has been completely overwhelmed since the attacks began in Israel. Twitter claimed on Monday that there were more than 50 million posts about the conflict, including those spreading disinformation. The sheer volume of fake reports stretched well beyond the reach of any user-powered fact checking.

The European Union’s industry chief, Thierry Breton, told Musk to provide evidence that Twitter was addressing the rampant spread of disinformation on the platform — including content posted by terrorist groups — in accordance with new EU online content rules. “I therefore invite you to urgently ensure that your systems are effective and report on the crisis measures taken to my team,” Breton said in a letter on Tuesday. 

Twitter CEO Linda Yaccarino responded to Breton’s inquiry with details indicating that the platform had removed or labeled “tens of thousands” of posts — just a drop in the bucket given the glut of disinformation and inauthentic content. Most importantly, there is no sign that Twitter has implemented any content-moderation rules to cope specifically with this crisis or brought back critical content moderators to mitigate the deluge of harmful posts. A platform company that fails to comply with EU requirements could be fined up to six percent of its annual global revenue — with repeat offenders facing the prospect of a complete ban from operating in Europe.

Content Moderation Matters

All of this is to say that content moderation matters. It matters for the platform itself in several ways. First, the failure to vet and remove violative content harms and alienates users. Second, it affects major brands, hundreds of which have pulled their advertising since Musk took over, resulting in a more-than-50-percent drop in ad revenue. And third, it exposes a platform like Twitter to billions of dollars in potential fines, compounding the company’s already dire prospects as a business.

But content moderation on platforms like X also matters for reasons that extend beyond the platform itself. Failure to moderate content inevitably allows platform lies and toxicity to migrate into mainstream media. Already, news outlets like CNN, The Los Angeles Times, and others are retracting poorly vetted coverage of stories that originated and went viral on social media.

Unfortunately, it’s not just Twitter that’s failing to moderate. Other platforms have rolled back their own trust and safety teams, with thousands of layoffs at companies like Meta and YouTube. With fewer people to moderate and review content, violative posts go unchecked. On Meta-owned Facebook, for example, some users have mislabeled a video of an Israeli airstrike on the Gaza Strip from May 2021, claiming it showed a Hamas retaliatory strike on Oct. 7. On Alphabet-owned YouTube, a video of children in cages, viewed millions of times, claims that Hamas fighters were holding the children hostage. Versions of it were posted weeks before the attack, and researchers say the footage actually came from Afghanistan, Syria and Yemen.

Twitter’s Failings are a Warning Sign for Other Platforms

Despite these missteps at other platforms, Twitter continues to exemplify the grand failures and disastrous human consequences that come from abandoning platform-integrity commitments. The company’s pervasive moderation failures, and the threats these pose to Twitter’s financial future, should serve as a wake-up call for other platforms. Should the EU impose massive fines on the platform, Twitter could edge even closer to bankruptcy, unable to make interest payments on the $13-billion debt Musk incurred when he bought the company.

Twitter’s moderation failures during this conflict are a sobering lesson to other major platforms. U.S. courts — and GOP lawmakers’ attacks on independent researchers — have removed checks on platforms’ behavior. Musk himself has sued and threatened to sue various groups that are tracking hate and disinformation on the site. Together, this creates a climate of fear that chills experts’ attempts to hold platforms accountable.

The Israel-Hamas conflict is a horrific example of why we so desperately need better moderation and tech executives who put platform integrity over profits. It is also the reason civil- and human-rights groups have been calling for stronger vetting by platforms year-round, and why we need virality reports and network analysis from platforms to help illuminate vulnerabilities that bad actors could exploit.

Disinformation has always been weaponized to exacerbate the fog of war. The content-moderation failures at Twitter are already causing harm to people across the region. Musk doesn’t seem to care enough to fix things and make Twitter a better tool for spreading credible information. This crisis is an urgent invitation for tech platforms to do more, not less, to protect users and democracies.

IMAGE:  Photo by Matt Cardy/Getty Images