“I am here today because I believe Facebook’s products harm children, stoke division, weaken our democracy, and much more.”

– Frances Haugen

A credible and articulate whistleblower, a treasure trove of internal documents, and an increasingly tone-deaf corporation created a set of dynamics that has propelled the usually niche topic of tech policy reform into the national media spotlight. On Oct. 5, major national media outlets were live-tweeting team coverage of a Senate Commerce Subcommittee hearing. The evidence, the main witness, and the topic at hand: findings of internal Facebook research, as conveyed through the testimony of former company data scientist Frances Haugen, in a hearing on “Protecting Kids Online: Testimony from a Facebook Whistleblower.” The hearing was scheduled as a follow-up to the previous week’s discussion of online harms (see Just Security’s earlier coverage).

Recurring Themes

Several themes recurred throughout Tuesday’s hearing. Among them:

  • Facebook’s “Big Tobacco” Moment:
    Revelations regarding Facebook’s internal research, and its knowledge of the harmful impact of its products, amounted to a “big tobacco” moment for the online platform company. This wasn’t just Haugen’s framing; senators from both parties embraced the phrasing and analogies. Sens. Richard Blumenthal (D-CT) and Cynthia Lummis (R-WY), for example, both endorsed the parallel between the current situation and the historical conduct of tobacco companies, whose internal research showed the harmful health effects of their products even as their external advertising aimed to lure ever-younger users into dependence on them.
  • Profits Before People, Profits Before Safety, and Maximizing Profits While Ignoring Pain:
    Variations on this refrain became a recurring motif, incorporated into opening statements from Sen. Blumenthal, Sen. Marsha Blackburn (R-TN), and the hearing’s expert witness, Haugen.
  • Moral Bankruptcy:
    An essential feature of this analogy, in Haugen’s explication, is a corporate model in which success is measured by metrics such as “meaningful social interaction,” a measure that drives the virality of posts and is intrinsically tied to strong emotional responses rather than to positive social outcomes or beneficial impact on users. Once those metrics are set, there is little further introspection about the negative impacts of an emphasis on platform growth at the expense of other qualitatively important social factors, such as the risks of depression, anxiety, eating disorders, and suicidal thinking among teens; ethnic violence; and the undermining of democracy around the world.

Solutions

Throughout the hearing, senators referred to bills that they had introduced or co-sponsored, including federal privacy legislation such as the Consumer Online Privacy Rights Act, the Filter Bubble Transparency Act, the Platform Accountability and Consumer Transparency (PACT) Act, and the Kids Internet Design and Safety (KIDS) Act. Some of these proposals address algorithmic design, and others propose reforms to Section 230 of the Communications Decency Act, the provision of law that grants internet platforms immunity from liability for content posted online by their users. Sen. Amy Klobuchar (D-MN) noted that while federal privacy legislation may not be enough, it is nonetheless necessary; her denunciation of the power and role of tech industry lobbyists in blocking such legislation was particularly noteworthy in its precision-guided vehemence.

Haugen carefully – and prudently – avoided endorsing any of the specific legislative proposals currently circulating through Congress to address online harms. However, she repeatedly stressed the need for transparency and oversight: for Congress to have visibility into how the company’s algorithms function and into the results of its research; for effective oversight bodies (noting, as many others have written, that the Facebook Oversight Board is not able to meaningfully serve that function); and for Facebook to allow independent researchers access to its internal data, along the lines of the research access that other major tech platforms like Google provide. Haugen also advocated for a nuanced approach to Sec. 230 reform. In her words, Sec. 230 is “very complicated” because of the limited control that platforms have over user-generated content. However, “they have 100% control over their algorithms and Facebook should not get a free pass on the choices it makes to prioritize growth and virality and reactiveness over public safety, because they’re paying for their profits with our safety.” She also advocated for a regulatory body to which internal and external researchers could bring concerns. Right now, anything approaching that level of transparency and oversight simply doesn’t exist. As Haugen noted, “Only Facebook knows how it personalizes its feed for you.”

Why What Happens on Facebook’s Platforms Matters

Some of Haugen’s most poignant testimony came in describing the ways that online cruelty and bullying follow children home: in a pre-Instagram world, cruelty and bullying at school were things that children could escape for 16 hours a day while at home. The cruelty that children experience on Instagram, by contrast, is often the last thing they see before going to bed at night and the first thing they see in the morning – and parents, who didn’t grow up in an always-online environment, are ill-equipped to help their children navigate these challenges. When asked whether the company’s platforms are addictive for teens, Haugen explained the social science approach, which looks both at harmful impact and at self-awareness of that impact; according to Haugen, between 5 and 6% of 14-year-olds on Instagram have the self-awareness to answer yes on both counts, and in her view a greater number of teens probably also meet the benchmarks for concern. Although this particular hearing focused on harms to children, the wide-ranging discussion also touched on other concerns, such as the undermining of democracy and the platform’s role in global conflicts and genocides.

The fact that this discussion took place against the backdrop of Facebook’s 5-hour outage the day before underscored a growing global awareness of the can’t-live-with-it-can’t-live-without-it nature of the social media behemoth. While Facebook, WhatsApp, and Instagram users in the United States flocked to Twitter during the outage, cracking jokes about the uptick in vaccinations and healthy democratic discourse that was sure to result with Facebook’s properties offline, other commentators pointed out the troubling reality that, in many parts of the world, WhatsApp and Facebook have become central mechanisms for everything from the dissemination of public information by local governments to the operation of small businesses, as well as a core means of telecommunications. In other words, for at least some substantial portion of the company’s 3.5 billion users, Monday’s outage underscored the reality that there are no viable and equivalent alternatives to the core services that Facebook provides. Such dependence on the company and its platforms raises profound questions about choice and, with it, consent.

Choice is at the heart of every Western framework for information privacy and data protection. Within the U.S. and European legal systems, almost any collection and use of personal information can be construed as legally permissible, so long as there has been sufficient notice about what is being done and the data subject – the person to whom the data pertains – has given consent. There are notable exceptions to the notice-and-consent framework, and a number of conditions that impose heightened requirements for the specificity of notice and the sufficiency of consent. Exceptions include circumstances such as the employment context in the EU, where data privacy regulations presume that the bargaining power between employer and employee is sufficiently uneven that consent is disfavored as a basis for data collection. Heightened requirements include those articulated in the EU’s General Data Protection Regulation with respect to special categories of personal data, and in the notice-and-consent requirements imposed in the U.S. by the Children’s Online Privacy Protection Act (COPPA). But as a general matter, consent forms a central basis for the collection and processing of personal information under the privacy and data protection regimes of countries around the globe.

Can Consent Exist Without Transparency?

Although these systems focus on consent, some of the most striking deficiencies in the legal frameworks lie in the maintenance of a marketplace where individuals have few choices. As I and others have noted elsewhere, “consent” is less meaningful, if not essentially meaningless, when there are no effective alternative ways to obtain the same or similar important services. And Haugen’s testimony underscores longstanding questions about whether user consent can be meaningful in an online ecosystem where it’s difficult for users to understand how their data are actually used. Instagram’s terms of service inform users what data will be collected and how they will be used, and advise accountholders that their data may be used to support “relevant and useful” advertising. The information provided doesn’t, however, tell teenage girls that there’s a 1-in-3 chance that using the platform will leave them feeling worse about their bodies and themselves, that using the platform may result in content glamorizing anorexia and self-harm being promoted into their feeds, or that use of the platform presents a quantifiable risk that they might develop an eating disorder – a condition that, according to Sen. Klobuchar’s questioning, has the highest mortality rate of any mental health concern. And, as noted, teenage mental health concerns are just one example of the social harms that these corporate products help generate.

The Fallout So Far

So far, Facebook’s efforts to undermine Haugen’s testimony have fallen flat. The highest-profile casualty: Facebook communications director Andy Stone, whose tone-deaf comments pointing out that Haugen hadn’t worked directly on many of the projects she described prompted a call-out from Sen. Blackburn, who challenged him to come before the committee and testify under oath. (Perhaps appropriately for the subject matter and context, Stone was also epically ratio’ed online – that is, his tweet received only a few hundred likes while prompting thousands of negative replies – a real-time demonstration of the power of social media for instant feedback.)

More importantly, senators from both parties made clear their displeasure with what they saw as dissembling the previous week by Facebook’s Global Head of Safety, Antigone Davis, and challenged Mark Zuckerberg to appear before the committee in person to explain his actions and the decisions made within his company.

Next Steps

The hearing also touched on a few other tantalizing items that this and other committees are sure to follow up on. Haugen noted her deep concern over the national security implications of Facebook’s approach to content moderation, indicating that she was engaging with other committees on that topic. Although she didn’t provide details, it wouldn’t be a stretch to presume that these concerns could be of interest to the Homeland Security and Intelligence committees in both the Senate and the House.

The subcommittee members also made clear their interest in antitrust implications and in proposals to break up Facebook. Haugen noted that, as a data scientist, she is not in favor of that approach – on the apparent basis that breaking up Facebook would fragment the centralized data that could otherwise be studied for its impacts. This topic is sure to get more scrutiny both in the Commerce Committee, which doesn’t have primary jurisdiction over antitrust laws, and in the Judiciary Committee, which does.

Last but not least, Sen. Blackburn noted that Monday had also brought news reports that Facebook had suffered a breach of personal information relating to 1.5 billion users. Although details regarding that alleged breach have yet to emerge, it would be among the largest in online history, and – to the extent that it involves data of persons in the United States – it would underscore the insufficiency of data breach laws in most states, which provide no redress for compromises of personal data unless the breach involves information, such as credit card and Social Security numbers, that can be directly monetized. (The exception to this rule is California, where the California Consumer Privacy Act provides a private right of action for breaches of a very broadly defined range of personal data.) Given the comments from several senators about the need for federal privacy legislation, and the upcoming full committee hearing on data security online, the Facebook whistleblower revelations are sure to continue to fuel bipartisan interest in comprehensive federal privacy legislation within the committee and to enhance the possibility that a federal privacy bill might make its way through Congress this session.

More to come on Wednesday, Oct. 6, with the Senate Commerce Committee hearing on data protection online. This time, the full committee will hear from current and former officials at the Federal Trade Commission, the Identity Theft Resource Center, and Engine, a trade association and advocacy group for tech startup companies. Although the Oct. 6 hearing on “Enhancing Data Security Online” was scheduled before Haugen’s explosive revelations appeared on 60 Minutes and in subcommittee testimony, and has a somewhat different focus, it’s sure to include further discussion of the implications of the Facebook revelations – implications, that is, both for the tech giant itself and for the regulation of the online ecosystem more broadly.

Photo credit: Former Facebook employee and whistleblower Frances Haugen testifies during a Senate Committee on Commerce, Science, and Transportation hearing on Capitol Hill, October 05, 2021 in Washington, DC. (Photo by Jabin Botsford-Pool/Getty Images)