In the months since the Jan. 6 riots at the U.S. Capitol, technology-politics conversations have been consumed with speculation about when, or whether, former President Donald Trump might see his social media accounts restored. After the insurrection, including deaths at the Capitol and a serious attempt to prevent certification of the election, plus the ongoing “Big Lie” – the sustained set of evidence-free claims that the 2020 presidential election was stolen from Trump – all of the major social media platforms, including Facebook, Twitter, and YouTube, suspended Trump’s access to the multi-million-follower bullhorns that those social media accounts provided. At long last, the risks to democracy in the United States had simply gotten too high. Content that enrages, engages – but insurrection, it seems, crossed a line. The decision to suspend Trump’s accounts might reflect an important realization: No tech platform will flourish over the long term in a nation with deeply destabilized politics.

Against this backdrop, Republican lawmakers amped up the volume on their complaints that conservative voices were being censored online, with many calling for the repeal of Section 230 of the Communications Decency Act, the law that shields platforms from liability for content posted by their users. (The right-wing push to revoke Section 230 doesn’t actually follow from those concerns: removing liability protections would likely lead platforms to be more aggressive in removing problematic content, not less, and so could increase the chilling of speech. But that’s a separate topic, worthy of its own discussion.)

Democrats, meanwhile, continued to be alarmed at the wildfire spread of conspiracy theories like QAnon, and increased their push for a variety of legislative reforms, including requirements for greater transparency in paid political posts online and mechanisms for imposing liability for the ways that social media algorithms function. These changes would allow platforms to retain immunity for content posted by their users, but would hold the platforms responsible when certain kinds of false or harmful content made their way into users’ feeds as a result of platform design decisions that highlighted and prioritized that content.

It hasn’t been just politicians and pundits who’ve been worried about the growth and spread of disinformation. Researchers and public health advocates have expressed sharp alarm over the rapid spread of anti-vaccination propaganda online. The U.S. Intelligence Community has been publicly warning about foreign adversaries’ use of social media to destabilize American democracy, and democracy in other nations, since 2016. Academic and think tank researchers are continuing to analyze the scope of the challenge in the United States and around the world; journalists have written at length about the complexity of content moderation for corporations and the grueling, soul-draining impact on the individuals getting paid to screen hateful and harmful content – from child pornography to ideologically motivated beheadings – and decide which posts the social media companies should take down.

Against this backdrop, Facebook’s decision to create an “Oversight Board” was greeted with a swirl of chatter and fanfare – probably garnering altogether more attention than it deserved. The framework was a simple one: Facebook created a trust that would pay prominent individuals from around the world to serve on an advisory board (the Facebook Oversight Board, or OSB) that would check the math on Facebook’s most complicated, or at least its highest-profile, content moderation decisions. The Board takes cases referred to it by Facebook, like the Trump case, but also chooses cases from among those appealed by users. The Board’s members have hard-earned reputations as serious thinkers in fields relating to technology, ethics, and the law, and the Board insists that its deliberations are independent of Facebook – that although its work is funded by the platform giant, its members vote their own consciences in reviewing the questions in front of them. The Board did overturn Facebook’s content moderation decisions in a number of its initial cases.

So, what happened on Wednesday that created all the new buzz? The OSB undertook a review of Facebook’s decision to levy an indefinite suspension on Trump’s account. The OSB reviewed information relating to his posts, the Facebook decision, Facebook’s policies, and how Facebook had handled complaints or concerns about other accounts. The OSB asked the platform for additional information, conducted a lengthy analysis, and ultimately referred the question back to Facebook, saying that the platform had the discretion to restore the account at some point or to delete it permanently, but that it was not permissible for Facebook to impose an indefinite suspension with no timeline or criteria for reactivation.

The release of the Board’s report led to countless hot takes as well as thoughtful analysis. Why, then, has it seemed like such a letdown?

The answer may lie in the fact that an advisory board, no matter how well constructed or how independent, isn’t sufficient to counter all the ills associated with disinformation on the internet. The OSB is empowered to issue instructions to Facebook on individual content moderation decisions, or on user bans like the Trump case. But in order to be meaningful, oversight mechanisms need to have teeth – some ability to impose penalties or sanctions. The Board lacks the ability to do that, just as it lacks the ability to require that Facebook implement new policies or procedures. (Facebook has said that it will treat OSB decisions as “binding.” However, the Board’s Charter includes a number of escape clauses, noting that Facebook’s response to decisions and recommendations will be modulated by the company’s assessment of issues such as technical and operational feasibility. More importantly, there does not appear to be any contract or other framework that makes the Board’s decisions legally enforceable, either by the Board or by any third party. With fewer than a dozen decisions from the OSB so far, only time will tell whether the “binding” nature of Board decisions has genuine legal substance or is merely public-relations window dressing.)

Does that mean the Board’s work is fruitless? Not at all. Tech companies, like other corporations, should be encouraged to implement strong compliance mechanisms, and setting up a blue-ribbon panel to provide scrutiny is one useful tool for doing so. The rest of the oversight toolkit that could guide and constrain Facebook’s actions, however, remains bare. Anyone who has been responsible for oversight and compliance programs in large and complex organizations (as I was at the NSA) knows that a critical feature of oversight is a culture of compliance, one that is anchored at the working level through extensive and repeated training and a cadre of compliance professionals, and that is reinforced by the “tone from the top” – the unequivocal statements from senior leadership that oversight, compliance, and corporate ethics are integral values of the organization, inseparable from the mission itself. That points to another indispensable feature, the integration of compliance goals with the business mission: ensuring that ethical values and practical oversight mechanisms are incorporated into the organization’s business operations at every stage, from product design to productization. And, of course, it helps immensely if there is a clear legislative or regulatory standard – with penalties for noncompliance – that guides organizations in what they can and can’t do.

On the first of these fronts – culture and operations – Facebook has repeatedly failed. The tone from the top has, until recently, largely been one of telling legislators and concerned advocacy groups to talk to the hand. Only recently has Facebook taken steps to implement anything that looks like an ethics or oversight review board, and to many skeptics, the creation of that board appears to have been a somewhat cynical attempt to provide cover for what the business isn’t willing to do. The Board’s recent decision lends some credence to those concerns: the Board made clear that Facebook refused to provide it with crucial information, such as answers to questions about what role its content-promotion algorithms might have played in the visibility of Trump’s posts.

Ultimately, however, as easy as it is to lay blame at Facebook’s feet for a host of content-moderation ills, the problem is a more wide-ranging one: If the former president and his supporters weren’t pushing the Big Lie, weren’t still trying to fan embers of discontent into political flames, then the weaknesses in Facebook’s oversight framework would be far less consequential, in the United States and elsewhere around the world.

Corporate oversight mechanisms will continue to hold promise as one useful tool in combating online abuse and disinformation, but the particular mechanisms of the OSB aren’t sufficient to counter the content moderation challenges internal to platforms like Facebook, or to counteract the toxic off-platform behavior and on-platform effects of determined purveyors of disinformation. No amount of duct tape, however shiny, can compensate for the lack of a moral foundation or of even a minimal commitment to factual accuracy in political debates; no platform mechanisms can, by themselves, create a cultural expectation that debate should focus on policy rather than on exploiting the enraged participants in a culture war.

Whether the next shiny object in the content moderation wars is a conspiracy theory at home or an authoritarian regime abroad, it will take more than the duct tape of high-profile advisory boards to combat the systemic problems of disinformation, from their roots to where they flourish online. Until the approaches to countering disinformation become as wide-ranging as the problem – until we address everything from civics education and digital literacy to federal election law reform and a commitment to democratic processes over culture wars – decisions like the recent one from the Board will continue to be a letdown, as the problems with content moderation go beyond anything that this Board, or even a more rigorous and empowered Board, can fix.

Image: In this photo illustration the Social networking site Facebook is displayed on a laptop screen on March 25, 2009 in London, England. Photo by Dan Kitwood/Getty Images