Imagine checking the mail the weekend before Election Day 2020 and finding an official-looking letter telling you not to vote on Tuesday, Nov. 3, but on Wednesday instead. You ask your neighbors; they got the same letter, and you’re all wondering what to do.

Fortunately, you call your local board of elections, and they tell you it’s not true. They don’t know who sent it, but they’re sure it wasn’t them. But you, and elections officials for that matter, have no way of knowing how many people in your community got the same letter and might be tricked out of their right to vote.

Voter suppression tactics like this are as old as American democracy. But new technology has catapulted this threat to the next level, arming foreign governments and domestic disinformation campaigns with communications weaponry to spread deception to millions of voters from simple home computers (or from an overseas troll farm), before anyone even knows what’s happened, much less how to counteract it.

Common Cause, as part of the nonpartisan Election Protection coalition, has run an election-year social media monitoring and rapid-response operation since 2016. We’ve witnessed a significant increase each year in the variety and volume of voter suppression content online.

Most readers are likely familiar with the coordinated Russian disinformation campaign in the 2016 elections, run by the Kremlin-linked Internet Research Agency troll farm, which created inauthentic and divisive social media content and microtargeted ads. Hired trolls wrote incendiary comments and created fake posts on political forums, organized fake “activist campaigns,” and created opposing events at the same times and locations. This is one flank of the disinformation attack on American voters: using political polarization to drive up tensions and rancor in the electorate.

But voter suppression content is at least as dangerous, if not more so.

In 2016, multiple bad actors used social media to deceive, confuse, or intimidate voters. The most infamous was a series of images on Twitter encouraging African Americans to “Text your vote for Hillary,” falsely implying that they could vote by text message. In 2018, Common Cause found false reports of Immigration and Customs Enforcement (ICE) officers patrolling polling locations, deceptive information that would have disenfranchised any voter who followed it, and the “doxxing” (release of personal information) of election administrators.

As recently as the November 2019 elections in Kentucky, Pennsylvania, Virginia, and other states, content on social media platforms threatened to suppress voting. Posts on Facebook and Twitter told voters the wrong day for the election, racking up thousands of views. That kind of tactic might be easy for a longtime voter to disregard, but new and infrequent voters are particularly at risk when malicious disinformation gets free rein. Even voters who know better will get the sense that voting is confusing, difficult, and hopelessly partisan, and that could be a tipping point that keeps an eligible voter at home on Election Day.

With the first primary votes for the high-stakes 2020 presidential election just around the corner (after this week’s Iowa caucuses, the first primary follows in New Hampshire on Feb. 11), Common Cause expects an even more sophisticated and dangerous disinformation operation, one that could disenfranchise millions of first-time voters.

Social Media Platforms’ Responses Fall Way Short

Many of the most-used social media platforms have policies against this kind of disinformation about voting (and some have extended those policies to prohibit disinformation about the 2020 Census). But even with these rules in place, we consistently find this content available to the public. We report it to the platforms when we see it, but it is entirely up to them whether to remove the post. In fact, Facebook’s official policy is to allow politicians to lie in their ads, an intentional decision to let deceptive content run on its platform.

Twitter recently announced a new “tool” to report voter suppression content. This is a step in the right direction, but a small one. Now, when you report a tweet, you can select that it is misleading about an election. But flagging depends on a user who recognizes that the content is false, and Twitter then needs time to review the report and decide whether to remove it, a window in which networks can amplify the false message.

For example, Ann Coulter’s tweet before the November 2018 midterms told “Conservatives” to vote Nov. 6 (the actual election day) and “Liberals” to vote Nov. 7 (the day after the election). This tweet racked up at least 3,500 retweets and 13,000 “likes” before it was removed. That means it may have been seen by hundreds of thousands of people. And Coulter fans who saw that tweet were given a signal that it is OK to create this kind of potentially damaging content.

The fact that Twitter is (slightly) stepping up its efforts to combat voter suppression is a reflection of the growing concern that such content will be an even larger problem in the 2020 elections than in 2018 or 2016.

But it’s not enough for social media platforms to simply rely on third-party groups and their own users to notify them of disinformation. The platforms must develop better tools to quickly identify and remove deceptive content that could disenfranchise voters. Doing the right thing also makes good business sense: users will abandon these platforms if their social media feeds continue to be polluted with bots promoting conspiracy theories, inauthentic accounts, and misinformation.

To begin to solve this problem, the social media platforms must take more effective action to create transparency about who is sharing information and to ensure that the information shared comes from authentic sources. Fake profiles and a lack of transparency mean any avatar can disguise a partisan operative, a foreign agent, or a lone prankster behind the deluge of bad information we see every Election Day.

There are plenty of bad actors who have something to gain by disrupting American elections or frustrating voters. Sophisticated groups can run disinformation campaigns using troll farms and internet bots to spread deceptive information. Malicious users can also exploit social media tools by micro-targeting false content to vulnerable communities or using algorithms to amplify hateful rhetoric designed to intimidate voters. The more resources they’re able to marshal behind these disinformation campaigns, the harder they are to detect, identify, and neutralize.

Congress and Users Can Take Action

Congress also has a role to play, and there are at least two legislative opportunities to make social media a safer place for voters. The SHIELD Act, which passed the U.S. House of Representatives in October 2019, would combat deceptive practices and voter intimidation by making it unlawful to knowingly provide false information, online or offline, about the time or place of voting or about the qualifications for voting, with the intent of preventing people from voting. The Department of Justice (DOJ) would be responsible for prosecuting violators. The bipartisan Honest Ads Act, which was included in House Resolution 1 (the For the People Act) that passed the House in March 2019, would help voters by requiring disclosure of who paid for online ads, with violations enforced by the Federal Election Commission and the DOJ.

Members of the public, or anyone who gets their news online, also need to educate themselves and their communities. There are many resources online, including downloadable guides and games that teach how to spot “fake news” or simulate a trolling operation.

Voter suppression is voter suppression, whether it’s done by purging the voter rolls, turning people away at the ballot box, or feeding them lies to keep them from showing up in the first place. Unfortunately, Silicon Valley is failing to address online voter suppression, as are policymakers; both must take this threat more seriously.

The stakes are higher in the 2020 elections, and we expect to see far more disinformation circulated by foreign governments, dark money groups, and online trolls. Maintaining the authentic civic engagement and discourse that underpins a healthy democracy will require stepping up the fight against these new efforts to suppress the vote.

IMAGE: Senate Judiciary Committee member Sen. Amy Klobuchar (D-MN) displays an inaccurate Tweet telling voters to cast ballots with text messages while she questions witnesses from Google, Facebook and Twitter during a Crime and Terrorism Subcommittee hearing in the Hart Senate Office Building on Capitol Hill October 31, 2017 in Washington, DC. The committee questioned the tech company representatives about attempts by Russian operatives to spread disinformation and purchase political ads on their platforms, and what efforts the companies plan to use to prevent similar incidents in future elections. (Photo by Chip Somodevilla/Getty Images)