The Facebook Files dominated tech industry news in the past week, as whistleblower Frances Haugen testified before Congress, talked with 60 Minutes, and provided a consistent and credible account of her concerns about the company’s practices – involving what she describes as the company’s predilection for prioritizing profits over people. Although congressional hearings thus far have focused primarily on harm to children, the conversation is sure to turn to the other concerns raised by Haugen. And with each of those issues detailed in whistleblower complaints filed with the Securities and Exchange Commission (SEC), the company may have more to fear from SEC action than from regulatory steps Congress might take. Each of the eight complaints is described in brief below, along with a quick refresher on what this could mean in terms of company valuation, litigation risk, and how one of the world’s largest tech companies conducts its press and shareholder relations going forward.

The SEC Whistleblower Process

Following the financial crisis of 2008, Congress enacted the Dodd-Frank Act, designed to overhaul the nation’s financial system and impose regulatory protections that could help guard against future economic fallout from corporate wrongdoing. As part of that package of reforms (some of which have since been repealed), Dodd-Frank established a framework for whistleblower complaints: it encourages company insiders to come forward to the SEC with concerns about malfeasance within publicly traded companies, allows whistleblowers to receive a monetary award if their complaints result in successful SEC enforcement action, and prohibits companies from taking retaliatory action against whistleblowers.

The program has been a successful one: By 2020, whistleblower complaints had resulted in $2.7 billion in sanctions against companies, $1.5 billion in disgorgement of “ill-gotten gains,” awards of $500 million to whistleblowers, and a return of $850 million to shareholders harmed by the corporate actions that came to light through whistleblower complaints. According to the SEC, the program continues to grow, as a record-breaking number of complaints were received in 2020.

The whistleblower provisions center on the submission of original information to the SEC – that is, information that the SEC had not encountered through other means. Importantly, once the SEC has received a whistleblower complaint, it has the discretion to share that information with the Department of Justice, appropriate federal and state departments or agencies, and state attorneys general, any of which might consider further investigative or enforcement action, including criminal proceedings.

The Facebook Whistleblower Complaints

Frances Haugen has filed eight whistleblower complaints with the SEC. Each of them charges Facebook with making material misrepresentations and omissions in statements regarding various aspects of its operations – statements which may be material to the company’s valuation, and which therefore could harm shareholders and potential investors and violate the company’s disclosure obligations as a publicly traded entity.

1. Facebook’s Platforms’ Harms to Children. The general contours of this complaint have become familiar through Haugen’s congressional testimony. The complaint is a pithy five pages, alleging that “Facebook misled investors and the public about the negative impact of Instagram and Facebook on teenagers’ mental and physical health.” At its heart: Mark Zuckerberg has testified before Congress that he doesn’t believe the company’s platform harms children – all while knowing that 13 percent of teenage girls on Instagram say the platform makes thoughts of suicide and self-injury worse, that 17 percent of teen girls on Instagram say the platform makes eating issues such as anorexia and bulimia worse, and that Instagram makes body image issues worse for one in three teen girls. According to the complaint, the misstatements are material for at least three reasons: First, if teens and parents “understand the truth about harms from Instagram and Facebook Blue, that can be expected to reduce the user base, advertising revenue, and ultimately investors’ returns.” Second, some shareholders would choose not to invest in Facebook for ethical reasons. Third, investors might decline to invest in the company out of fears that Facebook’s impact on children and teens could prompt regulators to take a more stringent approach that could impact the company’s operations and bottom line.

2. Facebook’s Role in the Jan. 6 insurrection attempt. In response to the Jan. 6, 2021 attack on the U.S. Capitol, Facebook executives have made a number of statements regarding the company’s actions to detect and counter speech that incites violence. The complaint focuses on a handful of those: the company’s response to a request from the U.S. House Committee investigating the Jan. 6 attack to provide information relating to efforts to overturn the election and the potential role of domestic violent extremists and foreign malign influence in the 2020 election; Mark Zuckerberg’s congressional testimony on these and related issues; and statements made in investor calls and talking points for company advertising sales. According to the complaint, Facebook repeatedly stated that it removed language and accounts that “incite or facilitate violence” or “that proclaim a hateful and violent mission.” The company has also said, “We work to reduce the incentives for people to share misinformation” and “we take action against Pages that repeatedly share or publish content rated false, including reducing their distribution.” Internal analysis, however, was at odds with those characterizations: According to the complaint, Facebook took action against as little as 1 percent of violent speech on the platform; it knew that its algorithms were steering users to conspiracy recommendations; it allowed prominent accounts to spread disinformation and incitement under the exceptions established in its cross-check or “XCheck” program (more on that below); and it chose not to implement recommendations from its research teams that could have mitigated the spread of violent extremism on its site. According to the complaint, Facebook not only made these misleading statements; its culpability was compounded by its knowledge that its handling of these issues could affect the company’s advertising revenue and its valuation.

(For further details on Facebook’s role in the Jan. 6 attack on the Capitol, this deep-dive analysis walks through additional details not in the SEC complaint, comparing Facebook’s public comments to its internal research.)

3. Facebook’s role in ethnic violence and “global division.” According to the complaint, with some 2.8 billion users worldwide, nearly two-thirds of whom speak a language other than English, Facebook “lacks adequate systems, and facilitates polarizing misinformation and ethnic violence, across the world,” despite its many public statements that it is “committed to international issues.” The complaint points to Facebook’s widely criticized role in fomenting genocide in Myanmar and its assurances that the company was “now taking the right corrective actions,” including the use of artificial intelligence to identify inciting content for removal. Despite these assurances, in 2020 a former Facebook data scientist stated that she had identified dozens of countries where the platform was being used to enable politicians to mislead the public, and multiple “blatant” attempts by foreign national governments “to abuse our platform on vast scales,” such as the coordinated inauthentic behavior that supported politicians in Honduras and Azerbaijan. Despite these internal concerns and public assurances, Facebook had not invested sufficient resources to develop the language and cultural fluency necessary for effective content moderation in all the regions where its platforms are in use – and, at times, abused. Internal records revealed instances in which this lack of fluency meant that Facebook was unable to flag or take action on violent or inciting content.

4. Facebook’s Special Treatment for High Profile Accounts. This complaint focuses on the discrepancy between the company’s published policies and the undisclosed exceptions it made for prominent influencers, who were allowed to violate platform policies with impunity once they had been “whitelisted” under the company’s “XCheck” program. According to the complaint, Mark Zuckerberg and others consistently claimed that the company’s policies were applied fairly and consistently, that no exceptions were made for politicians, and that “when we identify or learn of content that violates our policies, we remove that content regardless of who posted it.” However, the company did not explain that under its “Cross-Check” or “XCheck” program, less than 10 percent of content tied to XCheck entities was reviewed, and many of the people, pages, and entities whitelisted in Cross-Check had been exempted from enforcement. With nearly a million entities in XCheck, the potential for serious abuse existed on a vast scale – as exemplified by one incident in which a prominent influencer posted revenge porn that amassed 56 million views before the video was taken down.

5. Facebook’s Use of “MSI” Metrics that Promote Misinformation and Hate Speech. In the wake of criticisms stemming from the 2016 U.S. election cycle, Facebook changed its algorithms to prioritize something it called “meaningful social interaction” (MSI), a metric tied to user actions such as comments, reshares, or likes of content produced by others. Facebook prioritized MSI (defined as all interactions between two users where the initiator is not the same as the receiver – e.g., a like on a friend’s reshare, or a comment reply to a user’s comment on a public post) as a way to increase content on the platform at a time when content volumes had been falling. Facebook consistently told investors, the press, and Congress that the 2018 shift to MSI-focused algorithms was intended to “put friends and family at the core” of users’ experience, strengthening relationships and improving the wellbeing and happiness of users. Facebook’s investor documents went even further, noting in 2021 that the pivot to MSI-driven metrics was done “to prioritize posts from friends and family… to try to minimize the amount of divisive content that people see. We have reduced clickbait headlines [and] reduced links to misleading and spam posts.” The documents continued, “Using ‘engagement-bait’ to goad people into commenting on posts is not a meaningful interaction, and we will continue to demote these posts in News Feed.” Despite these statements, internal documents reportedly show that MSI-driven algorithms made “outrage and misinformation” more likely to go viral, and that due to the feedback loop of engagement response, “MSI is leading [content creators] to post more divisive and sensationalist content in order to gain distribution.” Further, internal research showed that moving away from MSI would “positively impact metrics for misinformation and hate.” In addition to the warnings from internal researchers, external users of the platform warned of the issue, as political parties told Facebook of their concerns that the 2018 algorithm modification “has changed the nature of politics. For the worse…. The emphasis on ‘reshareability’ systematically rewards provocative, low-quality content.” Specifically, political parties in the European Union (EU) “feel strongly that the change to the algorithm has forced them to skew negative in their communications on Facebook, with the downstream effect of leading them into more extreme policy positions,” with one political party in Poland noting that its Facebook posts had shifted from 50/50 positive/negative in tone to 80 percent negative and 20 percent positive, simply to maintain engagement under the new algorithm. Even more damning, the company’s internal teams projected that removing the MSI model would decrease misinformation by 30-50 percent, yet the company declined to adopt this approach, or any of the other measures its research team recommended as means of combating the virality of misinformation and hate speech. Finally, internal documents showed repeated concerns that the company’s decision-making on content policy was routinely influenced by political considerations, with communications and public policy teams blocking changes that could reduce misinformation and divisiveness if they perceived those changes as having the potential to “harm powerful political actors.”
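To make the mechanics concrete, the following is a minimal sketch of how an MSI-style weighted score might operate. The event types and weights here are hypothetical placeholders – Facebook’s actual formula has not been published in full – but they illustrate the dynamic the complaint describes: weighting comments and reshares above passive likes mathematically favors content that provokes responses.

```python
# Illustrative sketch of an MSI-style ranking score, based on the complaint's
# description of MSI as interactions between two users where the initiator
# is not the receiver. The event types and weights are hypothetical
# placeholders, not Facebook's actual values.
MSI_WEIGHTS = {"like": 1, "reaction": 2, "comment": 4, "reshare": 8}

def msi_score(interactions):
    """Sum weighted interactions, counting only those where the person
    acting (initiator) differs from the content's author (receiver)."""
    return sum(
        MSI_WEIGHTS.get(event["type"], 0)
        for event in interactions
        if event["initiator"] != event["receiver"]
    )

# A divisive post drawing many comments and reshares outscores a benign post
# with the same raw engagement volume made up mostly of likes.
divisive = ([{"type": "comment", "initiator": "a", "receiver": "b"}] * 50
            + [{"type": "reshare", "initiator": "c", "receiver": "b"}] * 20)
benign = [{"type": "like", "initiator": "a", "receiver": "b"}] * 70
print(msi_score(divisive), msi_score(benign))  # 360 vs. 70
```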

6. Facebook’s Promotion of Human Trafficking, Slavery, and Servitude. Until now, the role that Facebook has played in human trafficking has received relatively little attention compared with higher-profile concerns such as the spread of conspiracy theories and extremism. Haugen’s complaint, however, alleges that Facebook’s platform “enables all three stages of the human exploitation lifecycle,” a problem so significant that, as we have recently learned, Apple threatened to remove Facebook and Instagram from its App Store. The abuse spanned platforms, as traffickers, recruiters, and facilitators used Facebook and Instagram profiles, Facebook Pages, Messenger, and WhatsApp. Following the threats from Apple, Facebook reportedly told CBS that, “Sex trafficking and child exploitation are abhorrent and we don’t allow them on Facebook. We have policies and technology to prevent these types of abuses and take down any content that violates our rules.” Despite these firm declarations – that Facebook prevents these abuses and takes down any violating content – internal research noted that “the platform is being used to coordinate and promote domestic servitude.” Further, the complaint alleges, Facebook knew that domestic servitude content was on the platform prior to Apple’s takedown threat and a 2019 BBC News exposé – but the company failed to inform the SEC or investors about these issues and was “under-enforcing on confirmed abusive activity with a nexus to the platform.”

7. Facebook’s Mischaracterization of its User Base. The heart of this complaint is that, while Facebook’s stock valuation is based almost entirely on predictions of future advertising revenue, the company has “for years” misled investors and advertisers about key data, including the amount of content produced on the platforms and the growth in the number of individual users, particularly in what are deemed high-value demographics, including American teenagers and young adults. As a result, the company has engaged in “systematically overcharging advertisers, and fraudulently collecting significant revenue.” A key component of this alleged overcharging arises from single users with multiple accounts (SUMA). Facebook had publicly estimated SUMAs to be approximately 5 percent to 10 percent of the company’s 2.8 billion worldwide monthly active users across its platforms. Yet its “reach and frequency” advertising models failed to take these SUMAs into account, effectively charging advertisers per-user rates for campaigns that reached many accounts but – due to SUMA – a smaller number of actual users. (As one internal document noted, “won’t this cause the [reach and frequency] to violate their contract? If the ad is targeted to 1M accounts with a guarantee of 90%, and we delivered to 900k accounts but only 800k users [due to SUMA], won’t this make R&F pay [a] penalty if we report 800k as coverage?”)
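The arithmetic in that internal quote is worth spelling out. Below is a rough sketch using only the numbers quoted above; it shows that whether the campaign met its 90 percent guarantee depends entirely on whether coverage is counted in accounts or in de-duplicated users.

```python
# Back-of-the-envelope version of the scenario quoted in the internal
# document: a "reach and frequency" campaign measured against accounts
# rather than unique people. All figures come from the quote above.
targeted_accounts = 1_000_000   # accounts the ad was targeted to
guarantee = 0.90                # contractual coverage guarantee
accounts_reached = 900_000      # accounts the ad was delivered to
unique_users_reached = 800_000  # actual people, after de-duplicating SUMAs

account_coverage = accounts_reached / targeted_accounts   # 0.90 - meets the guarantee
user_coverage = unique_users_reached / targeted_accounts  # 0.80 - misses it

print(f"coverage by accounts: {account_coverage:.0%}")  # 90%
print(f"coverage by users:    {user_coverage:.0%}")     # 80%
print("guarantee met?", user_coverage >= guarantee)     # False
```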

8. Facebook’s Inflated Claims Regarding Removal of Hate Speech. According to the complaint, “Facebook misled investors and the public about ‘transparency reports’ boasting proactive removal of over 90% of identified hate speech when internal records show that ‘as little as 3-5% of hate’ speech is actually removed.” It points to Mark Zuckerberg’s testimony before Congress that Facebook removed some 94 percent of hate speech and that “incitement of violence is against our policy and there are not exceptions to that including for politicians.” The complaint points as well to similar company statements in shareholder calls.

There’s a particularly interesting dimension to this complaint: In advance of the 2021 Facebook annual meeting, shareholders made a proposal to evaluate the company’s response to false and divisive information, noting that “the Facebook brand has been diminished in recent years due to the platform’s use as a tool for gross disinformation, hate speech, and to incite racial violence” – a reflection of the “Stop Hate for Profit” campaign that led some prominent companies to suspend advertising on Facebook during the summer of 2020. Facebook rejected the shareholder proposal on the basis that it was “unnecessary” due to transparency efforts the company had already undertaken, the company’s work with outside researchers, and the fact that “we have taken meaningful action over the years to fight hate on our platforms,” touting its rate of proactive identification of hate speech. The detailed metrics in the complaint are difficult to compare, chiefly because the framing used by company executives differs from the framing of the internal research, and the documents underpinning the complaint haven’t been publicly released – leaving readers to parse a less complete record than the one presumably supplied to the SEC. Nonetheless, Facebook’s protestations seem to fly in the face of internal research noting that the company had “compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform… the net result is that Facebook, taken as a whole, will be actively (if not necessarily consciously) promoting these types of activities. The mechanics of our platform are not neutral.” Further underscoring that point, internal documents noted that political parties across Europe had complained that the 2018 changes to Facebook’s algorithm were exacerbating the incentives for campaigns to take positions that heightened reactions of anger, raising “worry about the long-term effect on democracy.”
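One way to see why the executives’ figures and the internal figures are hard to compare is that they likely rest on different denominators. The sketch below uses invented totals, chosen only to match the two published percentages, to show how a roughly 94 percent “proactive” figure (the share of removed content caught by automated systems before any user report) and a 3-5 percent figure (the share of all hate speech removed at all) could both be true at once.

```python
# Illustrative reconstruction of how both figures could hold simultaneously.
# The absolute quantities are assumptions chosen to match the two public
# percentages; the actual internal figures have not been released.
total_hate_speech = 100_000  # hypothetical: all hate speech posted
removed = 4_000              # 4% of the total - within the alleged 3-5% range
removed_proactively = 3_760  # found by automated systems before a user report

# The "proactive rate" framing: share of *removed* content found proactively.
proactive_rate = removed_proactively / removed  # 0.94

# The internal framing: share of *all* hate speech actually removed.
removal_rate = removed / total_hate_speech      # 0.04

print(f"proactive rate (of removals):       {proactive_rate:.0%}")  # 94%
print(f"removal rate (of all hate speech):  {removal_rate:.0%}")    # 4%
```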

What Happens Next

Under the Dodd-Frank process, the SEC has an obligation to review the materials submitted by the whistleblower, and then may use its discretion in deciding whether to launch an investigation, which could – but need not – lead to fines or other enforcement action. The SEC has discretion in how long it takes to review the complaints, and it will likely keep the details of its review closely held while any investigation is ongoing. In the meantime, the complaints have provided a road map for plaintiffs’ counsel who may be considering whether to launch shareholder derivative litigation. The key to those claims, like the SEC complaints, won’t be the questionable morality of putting “profits over people.” Instead, the legal risk to Facebook in any investor-related actions will turn on whether plaintiffs can show, or the SEC concludes, that the mismatch between Facebook’s internal knowledge and its public-facing pronouncements rendered those statements materially false or misleading.

Image: TOM BRENNER/POOL/AFP via Getty