Last week, the public learned of a major development in the administration of our national elections: news emerged that the Department of Homeland Security has been working with 36 states to install new election security and monitoring hardware. The technology promises to give cybersecurity and election experts in the federal government a window into the electoral process in those states. It’s an encouraging development—but, these days, there’s more to election security than protecting the “raw” vote count. There’s also an imperative to address the use of digital disinformation to corrupt the electoral process and broader democratic dialogue.

On that front, the latest in a series of social media bombshells exploded last month when Facebook disclosed its discovery of efforts to use its platform to interfere with the upcoming midterm elections, along with its removal of 32 offending Facebook pages. That was echoed this week by Facebook’s disclosure of additional operations originating in Iran and Russia, though those don’t appear to target the midterms specifically. Adding an exclamation point to Facebook’s disclosures was Microsoft’s announcement, also this week, that Russian hackers have been targeting the U.S. Senate and some conservative think tanks. Compared with the Russian disinformation campaign that infected America’s 2016 presidential election—waged largely via social media—Facebook’s and Microsoft’s disclosures mark a notable step forward in the tech sector’s fight against disinformation. Credit must be given where it’s due: the disclosures show that parts of the industry are willing and able to detect certain forms of nefarious activity and proactively suppress it. But in the context of our country’s overall recognition of and response to coordinated disinformation operations online, these revelations only scratch the surface. In the scheme of things, Facebook’s actions aren’t a model for the future; they’re too little, too late. And last month’s decision by Facebook and other leading tech companies to remove most content uploaded by hate speech purveyor Alex Jones reflects a similar dynamic: the industry remains three steps behind on issues for which time is of the essence.

As a baseline, the fact that Facebook made these disclosures so proactively is a positive development. For the past year, the industry’s modus operandi has been to abstain from revealing knowledge of such activity until essentially compelled to by the U.S. Government (generally Congress) or by incensed public opinion. Facebook’s revelations last month and again this week were apparently voluntary. That signals the company has decided that, at least when it discovers certain malicious activity, disclosing it to the government and even the public is in its commercial interest. This new posture is a win for the American public as a whole: it suggests that the electorate has raised the alarm and voiced its concerns clearly enough to compel the industry to make a change that better serves the public interest. And it underscores the need for the public to continue its research, investigations, and advocacy.

At the same time, however, closely scrutinizing Facebook’s disclosures—especially last month’s, concerning information operations specifically targeting the midterm elections—reveals worrisome signs. Here’s what we learned from the company in this first-of-its-kind disclosure, made a little more than three months before the midterm elections: Facebook took down 32 pages (and corresponding accounts) that it concluded were likely connected to Russian disinformation agents. Those 32 entities had an aggregate of 290,000 followers and spent a total of $11,000 on paid advertising campaigns.

But is the universe of midterm-related disinformation zipping around on Facebook limited to just 32 accounts? Almost certainly not, in light of the extensive testimony from Intelligence Community leaders about Russia’s activities alone aimed at interfering with the midterm elections. And did Facebook only recently learn about these 32 accounts? Almost certainly not, given that the Intelligence Community’s testimony to Congress occurred six months ago. With fewer than three months to go before the elections and the warning signs publicly acknowledged six months earlier, it is, simply put, inconceivable that only 32 entities are responsible for all of the midterms-related disinformation present on Facebook today. In the scheme of things, 32 pages seem more like the work of a single person than that of a determined and resource-rich hostile actor like Russia, let alone of all the hostile actors who watched, learned from, and are now replicating (or leapfrogging) Russia’s information operations. (Similarly, Microsoft said it had been tracking the Russian Government-backed hacking group’s activities for two years, and close observers have noted that some of its findings were known back in January. So why did the company hold onto the information until this close to the midterms?)

While Facebook has made strides, the evidence suggests it needs to take bigger ones—and run rather than walk. It’s possible that Facebook’s ability to detect these sorts of disinformation campaigns is not particularly sophisticated—yet. In the face of public and congressional pressure, it seems likely that the company is continually refining the machine learning algorithms designed to catch disinformation operators in action and bring them down. But the fact that the company was either unable or unwilling to act before even this limited set of nefarious accounts grew to 32 pages and amassed 290,000 followers means that 290,000 Internet users—and surely many more through shares and re-shares—saw deliberately malign content. Facebook, in its public announcement, noted that those responsible for the accounts—presumably the Russians, though Facebook wouldn’t specify—obscured the source of funds for the paid advertising by using third parties and connected through virtual private networks (VPNs). But those are not particularly advanced tactics. A company with Facebook’s level of sophistication and resourcing should be able to catch such malicious actors more quickly in the future. That may require a greater investment of resources and a higher internal priority. Facebook’s announcement this week suggested a possible tradeoff familiar to those who’ve served in the intelligence community: the choice between immediate disruption and continued intelligence gathering. At least when it comes to the fast-approaching midterm elections, the priority would seem clearly to favor disruption.

That points to another striking takeaway from the disclosures: the tech industry continues to find it difficult to mobilize behind key decisions regarding disinformation. That’s a frustrating reality for the electorate. Last month’s and this week’s revelations illustrate at least three areas of reticence. First, given the followings these 32 accounts amassed, it seems likely that Facebook became aware of them at least some weeks or even months ago, if still too slowly. Yet the company waited until last month to alert Congress and, a day later, the public. That may be because the company wanted to reach a very, very high level of confidence in its assessment of these accounts before stepping in and calling them out. Given the speed at which disinformation moves through Facebook and the broader Internet, that’s a level of patience we cannot afford. Second, one wonders about information operations targeting non-American populaces. Here, Facebook acted because U.S. political and public pressure grew too intense. But what about vulnerable Internet users in countries where that pressure hasn’t reached such levels, or where the company faces no significant financial cost for inaction? Those populations appear to remain at serious risk, as recent reporting on Facebook’s impact in countries like Sri Lanka and Myanmar has made clear. Third, it’s telling that it took politicians, not Facebook, to attribute the malicious activity specifically to Russia. Naming names may be bad for business in other parts of the world, but, given the threat American democracy is facing, it seems overdue here. And because this is not the first time Facebook appears to have scrubbed its public disclosures of any direct reference to Russia, the omission is all the more alarming.

The public disclosure was an important step for Facebook. But we must not conclude that the tech sector has caught up to this problem. To the contrary, it needs to start sprinting. It’s a lesson reinforced by the even more recent decision—belated, in our view—by Facebook, Apple, YouTube, and others (though not Twitter) to remove the hate-spewing content of Alex Jones and his “Infowars.” This was another positive development, but it came only after years of urging that the companies take this step—and only after years of Jones using these platforms to build a huge following (one of his Facebook pages had almost 1.7 million followers) that will now find other ways to consume his vile and anti-democratic output. Tech companies’ desire to be cautious, even certain, before making these sorts of decisions may be understandable given how uncharted this territory is for them—but that delay has a very real cost for the health of our democratic dialogue. Indeed, it’s a cost imposed on our citizenry as a whole, and one we seem less and less able to afford.