On Wednesday, YouTube was forced to apologize for a video that sat at the top of its “Trending” tab, which shows users the most popular videos on the site. By the time it was removed, the video had more than 200,000 views. The problem? It promoted the conspiracy theory, peddled by alt-right propagandists, that Parkland, Florida, high school student and shooting survivor David Hogg is an actor, “bought and paid by CNN and George Soros.” The conspiracy theory also found its way into a trending position on Facebook, where clicking Hogg’s name “brought up several videos and articles promoting the conspiracy that he’s a paid actor,” according to Business Insider.

The incident highlights how quickly false information spreads on algorithmically optimized social media sites that are easy to game. What to do about it is the subject of a new report from the New York think tank Data & Society, “Dead Reckoning: Navigating Content Moderation After ‘Fake News’,” which coincidentally debuted yesterday, just as the Hogg conspiracy theory spread across the internet. Based on a “year of field-based research using stakeholder mapping, discourse and policy analysis, as well as ethnographic and qualitative research of industry groups working to solve ‘fake news’ issues,” the report sets out to define the problem before offering four strategies for addressing it.

Usefully, the report summarizes typologies of various fake news phenomena, all of which are separate from the use of the term as a slur against sources with which one might disagree:

Mark Verstraete, Derek Bambauer, and Jane R. Bambauer (2017) propose a “fake news” typology of five different types; they make the case that three of these types – hoaxes, propaganda, and trolling – are intended to deceive, while two – satire and humor – are instead intended as cultural commentary. Related to identifying “fake news” by intent, Claire Wardle and the team at First Draft News provide their own typology of seven kinds of misinformation and disinformation on a spectrum from intent to deceive (entirely “fabricated content”) to no intent to deceive (“satire or parody”). These typologies typically categorize some types as more problematic (fabricated content, hoaxes, and trolling) than others (parody/satire, clickbait, and misleading content).

One benefit of these typologies is that they show what machines cannot determine. As the report’s authors point out, it is currently beyond the capacity of automated systems to decide which category a given piece of content falls into. Some news may start as satire but then be spread as a hoax, or vice versa. To be effective, let alone excellent, content moderation systems need to understand both the context of the communication and the nature of the source. But even with all of the technological might of Google and Facebook, “currently automated technologies and artificial intelligence (AI) are not advanced enough to address this issue, which requires human-led interventions.”

“Automation is not great at context yet,” Facebook lawyer Neal Potts told “Content Moderation & Removal at Scale,” a conference hosted by Santa Clara University’s law school earlier this month. Will automation ever be great? Yann LeCun, Facebook’s head of artificial intelligence research, said in a talk at NYU on Tuesday that he hopes AI will achieve the intellect of a house cat within his lifetime. For the foreseeable future, then, entirely technical solutions may be out of reach. No wonder Facebook and Google are hiring thousands of content moderators to handle communications collated by machine learning. Supported by warning signals from users, these systems are what currently stand between us and a true avalanche of misinformation.

Why this dismal situation? In part, as the Data & Society researchers note, it persists because companies like Facebook are not allowing independent researchers and experts to help identify and solve the problems. “This asymmetry in accessing information over the platform significantly limits outside efforts to understand the scope of the problem,” the report notes. “Researchers currently have no scale to measure how much content platforms remove, censor, or de-prioritize on any given day.” The David Hogg conspiracy just happened to be especially egregious; who knows what else fell through the cracks on the same day.

The report’s authors see four strategies to address the growing problem of fake news:

Strategy 1 revolves around more debunking and fact-checking, relying on “coalitions of trusted content brokers, and expanding content moderation programs and policies.” This requires action across the media and technology ecosystem, and participation by users.

Strategy 2 includes tactics “geared towards disrupting the financial incentives for producers.” Addressing fraud in programmatic advertising (the automated buying and selling of digital ads), for instance, could go a long way toward removing the profit motive from trafficking in fake news. Volunteer campaigns like Sleeping Giants, which tries to convince brands not to support Breitbart because of its egregious content, have proved effective at moving the dial.

Strategy 3 revolves around “efforts to de-prioritize content and ban accounts” based on definitions of “fake news,” which requires the platforms to develop their own criteria.

Strategy 4 is perhaps rightfully the last resort. The report’s authors note that increasingly “governments around the world are taking steps to address ‘fake news’ and hate speech online through legislation, hearings, or establishing centers dedicated to the problem.”

On this last subject, the lawyers at the tech companies predictably urge caution. “There are overly prescriptive ways in regulation that we may not be in favor of,” Facebook’s Potts told the conference in Santa Clara. The companies are concerned that regulation will go too far or produce unintended consequences. While governments elsewhere, such as the United Kingdom’s, may be closer to taking action, those hoping for a more robust response from regulators in the United States may be disappointed, at least until certain legal and legislative questions are addressed. Because of issues around Section 230 of the Communications Decency Act and the First Amendment, “regulators within the United States currently have little recourse with which to limit the spread of misleading content,” say the authors.

In the long run, “without systemic oversight and auditing of platform companies’ security practices, information warfare will surely intensify.” How many more incidents like the David Hogg conspiracy will have to play out before we put such oversight in place? Each day brings more evidence that fake news is a national security issue, whether it threatens to subvert a national debate on military weapons and gun control or generates, time and again, hostility toward Muslims and foreigners. The title of the report, Dead Reckoning, is apt. It’s time we address growing system failure with systemic solutions. Artificial intelligence won’t get us there anytime soon, and neither will leaving it up to the tech companies to solve on their own.
