The fallout from the Facebook Files, a Wall Street Journal series of articles and podcasts describing internal company documents, continued on Capitol Hill last week with a hearing in the Senate Commerce Subcommittee focused on evidence of mental health harms to children using the company’s platforms.

In addition to highlighting Facebook’s problematic approach to minors on its platforms, the hearing was notable for how readily its insights could be applied to other concerns that have been raised about the company, such as its role in propelling violent domestic extremism and other national security problems. Congress’s demonstration at last week’s hearing that it can overcome the technological learning curve, for example, speaks to its capacity to conduct oversight and impose reforms on Facebook across a range of issue areas. What’s more, the large discrepancies visible at the hearing between Facebook’s public proclamations of ignorance of the platform’s potential harms and its internal documents showing otherwise appear to reflect a more general pattern. These insights are important to keep in mind as the Senate continues its oversight with a hearing this week addressing further revelations about Facebook that were the focus of Sunday night’s 60 Minutes broadcast.

Insights from the Senate Hearing: From Finsta to Mental Health and Beyond

How bad is the problem of teen mental health harms on Facebook’s platforms? Sen. Marsha Blackburn (R-TN), posing a question in which she said she was quoting from Facebook’s own internal research teams, noted that “aspects of Instagram exacerbate each other to create a perfect storm and that perfect storm manifests itself in the minds of teenagers in the form of intense social pressure, addiction, body image issues, eating disorders, anxiety, depression, and suicidal thoughts.” The experiment undertaken by staff for Sen. Richard Blumenthal (D-CT) makes it easy to see why: they created a fake Instagram account for a 13-year-old girl, who followed what they described as a few easily findable accounts on extreme dieting and eating disorders. It took less than 24 hours for Instagram’s algorithms to begin recommending content relating to self-harm and eating disorders.

Through blog posts, congressional testimony, and selective release of annotated documents, Facebook has attempted to put these revelations in a more positive light.  The senators at Thursday’s hearing were having none of it.

Sen. Blumenthal signaled the heart of these bipartisan concerns in his opening salvo: “We’re here today because Facebook has shown us once again that it is incapable of holding itself accountable. … [W]e now have deep insight into Facebook’s relentless campaign to recruit and exploit young users.  We now know while Facebook publicly denies that Instagram is deeply harmful for teens, privately Facebook researchers and experts have been ringing the alarm for years.” Blumenthal brought receipts. He recounted previous exchanges between Congress and Facebook chief Mark Zuckerberg, in which members had asked whether the company’s research had ever found that its platforms had a negative effect on the mental health of teens and children, and Zuckerberg had responded, “We are not aware.”  The leaked documents flatly contradict that assertion, indicating that time spent on Instagram increases body dissatisfaction among teen girls, that teens “have an addict’s narrative about their use” of the platform, and that a third of teens felt they had little control over how Instagram makes them feel.

It’s worth noting that while the Wall Street Journal has published just a handful of documents, and Facebook has formally released just two, Sen. Blumenthal indicated that “thousands” of the company’s documents had been provided to the committee by the whistleblower. If congressional investigations on other matters are any guide, we can expect to see more excerpts from the leaked documents in the future, whether as exhibits to hearings, appended to a report, or released in some other fashion.

At the hearing, Sen. Ted Cruz (R-TX) lambasted Facebook Head of Safety Antigone Davis for appearing remotely even though she was speaking from only a few blocks away in DC, while Sen. Blackburn made pointed comments about Davis’s perfectly curated Zoom backdrop, making clear that she had no intention of being distracted from the substance of the hearing by surface-level polish. Although the tone of a hearing is always worth noting, the key points here weren’t the atmospherics but the substance – and what the members’ questions reveal about the kinds of regulatory approaches Congress might take. Setting any finsta memes aside, the hearing demonstrated Congress’s growing sophistication in understanding the mechanics and impacts of the technologies most central to how prominent tech companies use personal information and influence Americans’ everyday lives.

The Committee members’ questions got to the heart of many of the key legislative and regulatory issues that intersect with current and proposed legal restrictions.  For example:

  • Blackburn asked: How do you enforce the policy against users under age 13 on the platform, and how many users under 13 does Facebook know are on its platforms?
    • Facebook did not appear to fully answer these questions, but knowingly targeting users under 13 would run afoul of the Children’s Online Privacy Protection Act (COPPA) unless done in carefully circumscribed ways – including with parental notice and consent – that comply with the law.
  • Blackburn also asked: When the company collects data on minors, has it obtained consent from parents, and if so, by what means?
    • After Facebook testified that it obtains parental consent, Blackburn requested a copy of the consent form used, which could be directly relevant to an investigative or enforcement action by the Federal Trade Commission (FTC) for violations of COPPA. It could also support broader actions by the FTC (as well as by state consumer protection authorities) charging Facebook with engaging in unfair or deceptive acts or practices, on the theory that collecting information from children under 13 is “unfair” when not carefully circumscribed, and “deceptive” to the extent it is inconsistent with the company’s stated policies.
  • Sen. Ben Ray Luján (D-NM) asked: Is the company collecting personally identifiable information specific to individual children under the age of 13 without the consent of their parents or guardians?
    • Although Facebook dodged this question, the implications were clear. If the answer is yes, the data collection would likely run afoul of COPPA, as there’s no indication Facebook was obtaining the kind of specific, detailed parental notice and consent the statute requires. Facebook’s reply doggedly focused on the policy that theoretically prevents users under 13 from having accounts on its platforms. In addition to being a non-answer on the question of data collection from under-13 users on the platforms, it also didn’t address whether the company’s routine collection of data from people without Facebook accounts includes collection from children under 13. Presumably, if the company had a good answer to give, we would have heard it.
  • Sen. Mike Lee (R-UT) asked: Do you allow advertisers to target minors?
    • Facebook admitted it allowed ads to target “young people,” but offered little detail. As with the other major themes running through these lines of questioning, the answer could implicate COPPA as well as the federal and state consumer protection laws prohibiting unfair and deceptive practices.
  • Multiple senators asked: Will Facebook release the research underlying the two slide decks that it has publicly released, or other research referred to in the Wall Street Journal’s reporting?
    • As at a number of other points in the hearing, Facebook dodged the question, stating that it was “looking at” the possibility of providing additional research information and raising vague objections about ill-defined “privacy” concerns as a potential impediment. It is worth noting here that releasing information that does not include personally identifiable information of users is unlikely to violate any existing privacy laws. The frequency with which tech company executives hide behind a fig leaf of privacy as an excuse to prevent researchers from reviewing platform-related information could easily prompt provisions in draft federal privacy legislation that go directly to the heart of this objection, making clear that spurious privacy claims cannot justify refusing to share a large swath of platform data with researchers. Companies may continue to raise other objections, but this one could be readily put to rest by appropriate clarifying statutory language.
  • Several senators asked: What steps, if any, did Facebook take to mitigate the mental health harms that can result from use of platforms like Instagram?
    • Here, the Committee members seemed to be speaking from the standpoint of general ethics concerns. But their questions point to a larger issue, discussed in more detail below as a potential line of future congressional legislation or inquiry: whether Congress could press tech companies to adopt a standard similar to the Belmont principles for human subjects research that currently apply to all federally funded programs.

Next Steps for Congressional Oversight and Legislative Reform

So what can Congress do with respect to these issues? The seeds of congressional action are scattered throughout the questions members asked – questions that also mark a trail for consumer protection authorities and advocates to follow in putting together legal action.

First, Congress can continue building out the record on these issues through hearings like this one and the ones scheduled for Tuesday, Oct. 5 (with a Facebook whistleblower) and Wednesday, Oct. 6 (on data security). Although the news highlights will come from the live-streamed and televised hearings, the real power of the investigations will unfold as this and other committees decide which documents to release publicly and what findings to capture in a detailed committee report – one likely to be written by professional committee staff who will have spent hundreds of hours combing through thousands of documents, scouring recent and past witnesses’ testimony and public statements, reviewing written responses to congressional questions for the record (QFRs), and drawing on all of the other fact-finding tools Congress has at its disposal. Given the bipartisan outrage over Facebook’s apparent pattern of deception, it’s quite likely that committee reports on these issues will be fairly comprehensive and completed relatively quickly.

Second, Congress can look at strengthening the existing restrictions and penalties under COPPA in ways that focus on the harms at issue here.  These could include heightened consequences for a range of actions, such as carrying out mental health research on unsuspecting minors, allowing advertising in certain categories to be targeted to minors, and failing to take proactive steps to mitigate mental health harms in children that are tied to online platform use.  Legislators could also impose new or increased obligations relating to the well-being of children, such as requiring research on platform-related mental health trends of minors to be released to, and reviewable by, an independent review committee; requiring companies to submit regular transparency reports to Congress regarding mental health of minors on the platforms; and requiring companies to take more proactive steps to deter, identify, and remove underage accounts and to counter algorithmic outcomes that cause certain kinds of harmful content to be suggested in the feeds of minor users.

Third, Congress can consider incorporating similar restrictions into comprehensive privacy legislation. Proposals for federal privacy legislation have been kicking around the halls of Capitol Hill for several successive legislative sessions, and – although there is more momentum now than ever before – it’s not at all clear that a comprehensive privacy bill will pass both houses this session. If such legislation advances, however, it would provide an opportunity for Congress to impose transparency and mitigation requirements addressing the impact of social media on all users, not just minors. It could also make clear that de-identified or aggregate information, properly defined, may be studied by external researchers, subject to appropriate protections, without violating users’ privacy.

Fourth, one of the most important reforms Congress could consider is creating a legal framework that requires social media platforms and other data-intensive technology companies to comply with the ethical principles applicable to federally funded human subjects research whenever they carry out research or experiments designed to influence behavior, study mental health, or take certain other specifically defined and narrowly circumscribed actions. This framework could be incorporated into a comprehensive federal privacy bill or into other consumer privacy legislation.

I’ve written elsewhere that one of the most striking aspects of the digital platform environment is that we are all effectively living in an unregulated social science experiment – that is, we are all serving as guinea pigs for tech companies that study how we react and behave when presented with various kinds of technology-driven stimuli. If a university, hospital, or government agency were to carry out this research, it would be carefully circumscribed by the ethical guidelines first established in 1979 in the Belmont Report. Under these guidelines, before any human subjects research can begin, researchers generally must assess whether there is a risk of harm, inform individuals about that risk, seek informed consent where possible, and provide treatment if harm results. Regulations in the United States and dozens of other countries also generally require advance review by an Institutional Review Board or a similar ethics committee with the power to prohibit the research or to require that it be modified to mitigate harms to participants.

Within the U.S. legal framework, the Belmont Report’s ethical principles are mandatory only for entities that receive federal funding – which, clearly, Facebook does not. However, Congress could incorporate some of the core principles of the human subjects research rules into new protections for children and vulnerable adults. Congress could make clear that companies whose behavior runs afoul of those principles are engaging in unfair (and, in appropriate cases, deceptive) practices. It could explore ways to create incentives for companies to adopt practices that mitigate those harms. And it could clearly articulate the scope of non-economic harm that gives rise to a private right of action, allowing plaintiffs to sue companies like Facebook in federal court when those companies knowingly engage in behavior that results in online harms. While each of these goals would have to be accomplished in ways consistent with the First Amendment (a subject that would make this article much longer), there’s no question that Congress could take meaningful steps in these areas that would be unlikely to raise legitimate First Amendment concerns.

Lastly, with the blueprint of information and issues identified in these hearings, other actors can step in to hold Facebook to account.  These include not just the Federal Trade Commission and state attorneys general, but also the plaintiffs’ attorneys who will surely be looking for legal theories – either under current legal frameworks or new ones – to seek redress for the thousands of Instagram users who have fallen victim to the platforms’ harmful effects.

Beyond Finsta Memes and Toward a Future Regulatory Environment

Back to the finsta memes. For the record, the lampooning of Sen. Blumenthal’s finsta question is a bit unfair when viewed in context. By the time he made his widely replayed gaffe, he had already explained how his own office had created a fake account (a finsta) for research purposes and noted that the platforms were aware of fake accounts. The hearing had already seen Facebook’s Davis repeat talking points indicating that the company’s primary philosophy on these issues is to give parents insight into, and control over, their children’s activities on Instagram. The paradox, of course, is that parents can’t monitor or control the secret, fake – finsta – accounts they don’t know about. Thus, to the extent the company is aware of widespread use of finsta accounts, its talking points about parental control and insight seem disingenuous at best. Blumenthal clearly had this right: “Facebook claims it’s giving tools to parents to help their kids navigate social media and stay safe online.” But, he continued, “Your marketers see teens with multiple accounts as … [a] ‘unique value proposition.’ We all know that means Finstas. You’re monetizing kids deceiving their parents.”

This story, and the scrutiny that the “Facebook Files” reporting has prompted, seem unlikely to go away any time soon. The Senate Commerce Subcommittee will hold another hearing on Tuesday, featuring testimony from the Facebook whistleblower whose efforts brought the documents to light – which I will address in Part 2 of this series. Part 3 will cover the Committee’s Wednesday hearing on data security.

Image: Tom Brenner-Pool/Getty