Whistleblower Frances Haugen’s allegations about research by Meta (née Facebook) raise profound concerns about risks to security and public welfare from interactions over social media platforms. The findings involving teen mental health in particular underscore a set of conclusions that many internet users intuitively suspect to be true. First, that social media, smart devices, and a host of other data-driven apps, services, and devices are reshaping our everyday behavior, opinions, and moods in ways that have a profound impact on individuals and on society as a whole. Second, that our collective understanding of those forces remains incomplete. Third, that when we’re considering whether to sign up for any particular app or service, our ability to forecast what the consequences might be is murky at best. And fourth, that the legally compliant privacy notices we’re presented with when opting into a service or product are so mechanical and opaque that they do little to help us make meaningfully informed decisions about the use of our data.

As policymakers around the world scrutinize data-intensive technologies, one way to mitigate these technologies’ harms to individuals and society is to reform the law so that companies must provide more meaningful information in their privacy notices and terms of service. Adapting the existing ethical rules for informed consent in human subjects research gives us a way to do that. Where privacy and data protection law today generally requires companies to disclose information about the mechanics of data usage (what information is collected and how it’s used and shared), this new approach would require corporations also to disclose the impact of those practices on the people whose data is being used.

To explain why this shift in approach could be so powerful, some additional context and an example are helpful. 

Currently, most privacy and data protection laws around the world rely heavily on concepts of notice-and-consent. This framework is largely policy-neutral and rests on the assumption that adult individuals can provide informed and meaningful consent to particular data uses by agreeing to the terms of an online privacy notice, so long as the notice explains some basic information about the mechanics of data collection and use, such as what types of information the entity is collecting, how it will use that information, and with whom it might share it. It’s widely acknowledged that these consent notices are seldom read; even when read, they are often difficult to understand. Their effectiveness is further limited by the dominance of a few mainstream tech platforms, which leaves users with few equally effective alternatives to the products in question. Most notably, under these notice-and-consent requirements, companies are seldom required to explain what the impact of their data practices might be.

This approach gives corporations considerable leeway not only to collect and analyze data for advertising and other revenue-generating purposes, but also to use that data to carry out what amounts to unregulated behavioral science research and experimentation.  This “research” takes multiple forms: building detailed individual profiles that characterize our demographics, personality traits, interests, inclinations, and more for the purposes of targeted digital advertising; evaluating our behavior online to increase engagement and time spent on screen; assessing our emotional state; making predictions about our potential medical conditions; influencing our political, social, and cultural views or our moods; waging information operations to gain geopolitical advantage or to advance a foreign policy agenda; and more.  In other words, companies can create detailed user profiles not only to understand individuals, but also to influence our mindset, shape our attitudes, manipulate our emotional states, change our behavior, and impact society as a whole.

The good news: the world has a half-century of experience in applying stringent ethical requirements to human subjects research, and those standards can be adapted for commercial data-driven technologies in ways that mitigate online harms.

In 1974, the U.S. Congress established a blue ribbon commission to address grave and politically fraught concerns over medical research, including debates over fetal research and the shameful history of the Tuskegee syphilis study. The resulting Belmont Report set forth ethical principles for human subjects research that eventually became both widely adopted and, for federally funded research, legally binding in regulations known as the Common Rule.

The Common Rule requires researchers to abide by three foundational ethical principles: respect for persons, beneficence, and justice. It also requires researchers to obtain meaningful informed consent from research subjects (or their authorized representatives) in nearly all cases. The Common Rule imposes more stringent requirements for informed consent than most privacy and data protection laws, requiring disclosure of risks to participants, emphasizing the holistic nature of what must be disclosed, and prohibiting researchers from requiring participants to waive any of their legal rights. Specifically, when seeking consent, researchers must inform participants about the purposes and duration of the research. They must also provide: a description of the research procedures to be followed; a description of any reasonably foreseeable risks or discomforts to the individual; a description of any reasonably expected benefits to the subject or others; a disclosure of any existing alternatives that might be advantageous to the subject; an explanation of what compensation and treatments are available to subjects for research involving more than minimal risk; a statement that participation is voluntary and may be discontinued at any time; and an explanation of how personal information will be handled. In other words, researchers must notify research subjects not only of what information will be collected and how it will be used (i.e., the mechanics) but also of what the likely impact of those uses will be.

Frances Haugen’s revelations about Instagram and teen mental health – the increased rates of anxiety, depression, suicidal ideation, and eating disorders, coupled with allegations that the platform rejected researchers’ recommendations for changes that would lessen those harms – were, in the words of Sen. Richard Blumenthal and Sen. Marsha Blackburn, a “bombshell.” According to the whistleblower complaint filed with the Securities and Exchange Commission (SEC), Facebook’s internal research concluded that 13.5 percent of teen girls on Instagram say the platform makes thoughts of “Suicide and Self-Injury” worse. According to the Wall Street Journal’s report, Facebook’s internal research slides declared, “Teens blame Instagram for increases in the rate of anxiety and depression. This reaction was unprompted and consistent across all groups.”

Despite these grave and known harms, Instagram’s existing Data Policy focuses almost entirely on describing how the app can use the extensive information it collects, emphasizing uses that “personalize and improve” the product. The only significant mention of potential harms comes under the heading of legal requests and cooperation with law enforcement.

In contrast, under an impact-oriented approach to consent, Instagram’s Data Policy and Terms of Use would state something like this:

By using our platform, you acknowledge that we will analyze your activities both on and off this platform to understand your behavior and what it indicates about your interests, activities, relationships, personality, shopping habits, and the status of your physical and mental health. We may ask you to participate in online surveys, and we may conduct experiments to see how your mood and mental health are affected by the time you spend on our platform and by changes we make to it.

Users of our platform are likely to experience increased rates of anxiety, depression, and feelings of inadequacy and social isolation. There’s a 1-in-3 chance that you’ll have increased body image issues, a 25 percent chance you’ll start doubting the strength of your friendships, and a 40 percent chance that you’ll start feeling unattractive if you spend time on this platform. Users face an increased risk of eating disorders and suicidal ideation. Despite these harms, users may find themselves unable to log off, even if they want to limit their time online or quit.

We might identify ways to reduce those negative impacts associated with platform use, but we reserve the right to determine whether those changes are consistent with the company’s best interests and to prioritize business practices that maximize the number of users on our platform, the amount of time spent on it, the virality of content, or other factors that support growth of our user base, revenue, or profits.

Much like the Surgeon General’s warning on a pack of cigarettes, those would be meaningful disclosures to include in a privacy and user consent notice.  

More broadly, given the scope of alleged harms across this one platform alone, policymakers and their constituents are right to ask: Are we satisfied with the current balance of power between individuals and the companies that operate data-intensive businesses? And if not, how could privacy law be changed to address individual and societal harms that result when corporations carry out what amounts to unregulated social science research and experimentation? 

When Congress established the commission that produced the Belmont Report in 1974, it did so in part because it recognized that the issues raised by human subjects research were scientifically complex and politically fraught. A bipartisan commission could delve deeply into those issues in a way that would be difficult for lawmakers preoccupied with a host of other pressing matters, and Congress could then review the commission’s in-depth analysis and decide whether to act on its recommendations. Today, Congress could take this opportunity to establish a new blue ribbon, bipartisan commission charged with examining the key risks and harms to individuals, communities, and democratic institutions from the commercial use of personal information. The commission could articulate a set of ethical principles to govern the use of personal data and offer a full range of recommendations. Among many other things, those recommendations could examine whether and how the ethical principles for informed consent that originated with the Belmont Report might be adapted to mitigate the harms that result when personal information about our lives online is handled in ways that amount to social science research and behavioral experimentation.

Image: Former Facebook employee and whistleblower Frances Haugen testifies during a Senate Committee on Commerce, Science, and Transportation hearing entitled ‘Protecting Kids Online: Testimony from a Facebook Whistleblower’ on Capitol Hill, October 05, 2021 in Washington, DC. Haugen left Facebook in May and provided internal company documents about Facebook to journalists and others, alleging that Facebook consistently chooses profit over safety. (Photo by Jabin Botsford-Pool/Getty Images)