“The era of self-regulation for online companies is over.”

These words, from the United Kingdom’s Digital Secretary Jeremy Wright as part of a paper released on Dec. 15th, might be the single clearest articulation of the changing winds for data-intensive companies, and of the prevailing mood of technology regulators on both sides of the Atlantic heading into 2021.

Platform providers – and major social media platforms in particular – were already facing the prospect of increased regulation as 2020 drew to a close, as detailed further below. The events at the U.S. Capitol Building on Jan. 6 have significantly ratcheted up those pressures: There is growing consensus that the algorithms and business models of social media platforms had the effect of amplifying the extremist voices calling for the nullification of the presidential election results. Much like conspiracy theories served as the accelerant that ignited violence at the Capitol Building, the national shock and outrage at witnessing those events is accelerating calls for Congress to regulate the companies that are at the heart of those information – and disinformation – ecosystems. The impacts will be wide-ranging. They include a growing chorus of voices calling for repeal of the liability protections that social media platforms enjoy under Section 230 of the Communications Decency Act, as well as calls for imposing regulatory standards for content moderation and holding companies accountable for the ways in which their algorithms prioritize content that gets recommended to users. Each of these topics deserves an article of its own. In the meantime, a number of other regulatory moves were afoot before the Capitol Building riot; we shouldn’t lose sight of those, as they are likely to play important roles in the overall landscape of technology regulation in 2021.

Over the past 20 years of technology innovation – including watershed moments like Facebook’s release in 2004 and the iPhone’s debut in 2007 – governments around the world have taken a largely laissez-faire attitude as a handful of companies have become dominant leaders in the data-driven technologies that now lie at the heart of everyday life. Consumers around the world have grown accustomed to sharing detailed personal information online, whether that sharing is intentional, as with chatty posts to friends and family on social media platforms, or inadvertent, as is all too often the case when individuals are unaware of the scope and scale of personal data that is being created, collected, and analyzed about them on the multitude of devices, apps, and platforms that serve as the inescapable undercurrent of our lives.

Wright’s comments about online companies came in the midst of what proved to be a trying month for online platforms doing business in the United States and U.K. Here, I offer a brief recap of those key events, as well as some thoughts about what we can expect to see in 2021, including the ways in which the now-likely shift to Democratic control of the Senate could impact these trends.

The Facebook Antitrust Complaint

The first big development out of the gate in December was the complaint filed by the Federal Trade Commission (FTC) against Facebook, charging it with anticompetitive practices. The complaint charged the social media behemoth with unlawfully squelching competition through its purchases of Instagram and WhatsApp and through the restrictions Facebook imposes on third-party app developers, who were only permitted to connect to the platform if they agreed not to create any online services that might compete with Facebook, and agreed not to share any information relating to Facebook users with Facebook’s competitors. The complaint came after a lengthy investigation by the FTC and dozens of state attorneys general, and heralds a bold new step in regulatory action against the mammoth platform that boasts nearly 3 billion users around the world.

The 53-page complaint details the attitude Facebook leadership reportedly has toward competition: that it’s better to buy than compete, as evidenced by the purchase of rival startups, including WhatsApp and Instagram; and that the platform could use its unprecedented reach to squelch competition by setting rules that would prevent the apps that plugged into Facebook from evolving into competitors.

One of the striking aspects of this complaint is that the FTC’s Bureau of Competition lawyers so clearly understand the mechanics of the platform, and many of the technology policy issues at stake. This is no small matter: The series of congressional hearings into social media platforms has all too often served to illustrate the extent to which lawmakers struggle to understand the social media ecosystem. (Perhaps the most famous example came in April 2018, when Facebook CEO Mark Zuckerberg testified in front of the Senate Judiciary Committee. Democratic Senator Bill Nelson of Florida asked why he sees ads for chocolate in his Facebook feed, and Republican Senator Orrin Hatch of Utah asked whether and how the platform would remain free for users, to which Zuckerberg replied, “We run ads.” Hatch was panned for seeming not to understand the mechanism that lies at the core of the online business model, and Zuckerberg’s facial expression was interpreted as a smirk after he delivered those three words.)

The FTC’s complaint presents a real threat to Facebook’s business model: The FTC is seeking a permanent injunction in federal court that could require the company to sell off WhatsApp and Instagram; prohibit Facebook from imposing conditions on developers of other apps that connect to Facebook; and require Facebook to seek prior notice and approval for future mergers and acquisitions. The FTC – and the industries it regulates – are keenly aware of the leverage that long-term consent decrees create. This mechanism is routinely used as part of the settlement agreement framework in cases relating to data privacy and security, and to great effect: A company subject to a 20-year consent decree must endure long-term scrutiny from regulators and meet specific obligations that are often more stringent than existing privacy laws. In the antitrust context, if Facebook finds itself – whether through voluntary settlement agreement or pursuant to court order – subject to an obligation to submit its future business acquisitions for FTC review, the company could find that a key component of its growth strategy has been hobbled. Being forced to divest WhatsApp or Instagram would be a short-term blow; being restrained from future acquisitions, and having to demonstrate that they will not have an anti-competitive effect, could prove an existential threat to what has, for 15 years, been Facebook’s record of meteoric growth.

The FTC’s complaint is clearly focused on harms to competition. “Facebook’s actions to entrench and maintain its monopoly deny consumers the benefits of competition. Our aim is to roll back Facebook’s anticompetitive conduct and restore competition so that innovation and free competition can thrive,” said Ian Conner, director of the FTC’s Bureau of Competition, in a video statement. Despite this focus, the complaint hints at privacy concerns, noting that Facebook’s extraordinary commercial success has been driven by its ability to use proprietary algorithms to target advertising based on the “vast quantities” of data the platform has on its users. The complaint also notes that if there were greater competition in social media, benefits to users could include expanding the “availability, quality, and variety of data protection privacy options for users, including but not limited to, options regarding data gathering and data usage practices.”

Perhaps it isn’t surprising, then, that within a few days of filing its antitrust complaint, the FTC announced that it was launching an inquiry into the privacy practices of online platforms.

Although the results of the Georgia run-off elections won’t be certified until January 15th, it currently appears likely that both Democratic candidates will win their races, shifting control of the Senate to the Democratic Party. This would give Democrats control of the legislative agenda, including the committees responsible for antitrust regulation. Given that former presidential candidates Elizabeth Warren and Amy Klobuchar highlighted concerns relating to the power and size of big tech during their campaigns, this could be a major shift. Klobuchar would be in line to chair the Senate Subcommittee on Antitrust, Competition Policy, and Consumer Rights. And Facebook founder Mark Zuckerberg famously noted in an internal company call that an Elizabeth Warren presidency would be an “existential threat” and that he would “go to the mat” to defeat any attempts she might lead to break up big tech. Warren’s response: It isn’t just Facebook that faces antitrust risk, but Google and Amazon as well, as it’s “time to #BreakUpBigTech.” Clearly Warren won’t be president – but if Democrats control the Senate, she, like Klobuchar, could have a powerful role in setting the agenda for antitrust hearings and legislative proposals in the 117th Congress.

FTC Study of Social Media and Streaming Privacy

In a Dec. 14 statement, the FTC announced that it was launching a study of the privacy practices of major social media and video-streaming services, including Facebook, WhatsApp, Snap, Twitter, YouTube, ByteDance, Twitch, Reddit, and Discord. The study is authorized under the FTC’s broad investigative powers, and the notice, issued by three of the FTC’s commissioners, notes their concern that, despite their “unavoidable” role in modern life, “the decisions that prominent online platforms make regarding consumers and consumer data remain shrouded in secrecy.” Questions abound regarding what data is collected and how, and how platforms leverage and sustain our attention – how, in the vernacular, they keep our eyeballs on-screen. “It is alarming,” the commissioners note, “that we still know so little about companies that know so much about us.”

The information required under the Order that accompanies the study notice is wide-ranging. It includes a number of concrete data points that the companies should have no difficulty in providing, such as user counts (total users, average daily users, average monthly users), usage statistics (numbers of posts, engagements, comments), advertising statistics (numbers of ads, revenue), and financial data (costs, revenue, and profit margins for various areas of operation). Although this information should, by and large, be relatively straightforward for social media and streaming companies to compile, it will – if the companies comply – likely prove illuminating nonetheless, as this study would mark the first time that a federal regulator has had such a comprehensive view of the social media and video streaming ecosystem.

The study’s information request goes further than these mere statistics, however, and a number of its questions get to the heart of – or at least nibble around the edges of – key questions relating to online platform usage. In asking about each user attribute that the company “uses, tracks, estimates, or derives,” the FTC may get a glimpse into the types of algorithms these platforms are using for behavioral prediction, personality assessment, and other approaches to understanding their users or shaping their activities. In requiring reports about fake and unauthorized accounts, bots, and inaccurate information, and advertising tied to those accounts and content, the FTC is demanding that the platforms do the hard work of providing data that will illuminate the scope of the online disinformation problem. In asking the platforms to articulate “the value of user to the Company (e.g., dollar value),” the FTC is placing responsibility squarely on the platforms for doing an assessment that has eluded privacy researchers and economists: determining (at least one measure of) what privacy is worth. The study asks for information about the types and uses of algorithms, the mechanisms for retaining user engagement and determining what content to display, the company’s approach to content moderation and content promotion, uses of demographic information, competitive pressures and strategies employed by the platforms, and a set of questions specifically relating to platform usage by children and teens.

The 53 questions, detailed across 21 pages, cover a wide range of information that goes to the heart of the online platform business model. Needless to say, not everyone is going to be a fan of this level of scrutiny. The companies targeted by this study are sure to push back, attempting to narrow the scope of the questions as far as they are able. And FTC Commissioner Noah Phillips dissented from the study notice, criticizing the study’s approach as an “undisciplined foray into a wide variety of topics.” The heart of his concerns: The companies are dissimilar; the questions seek voluminous information, and the effort to obtain, review, and assess it will drain the FTC’s limited resources and divert it from other work; and some of the questions stray too far afield from the stated purpose of “consumer privacy.”

Although Commissioner Phillips is correct that the requests are wide-ranging, the implied attempt to silo them into distinct categories – treating content moderation separately from advertising, treating disinformation as separate from behavioral prediction algorithms, treating user engagement as separate from data privacy practices – illustrates why achieving sensible online regulation has proved so difficult, and why it will continue to elude Congress, regulators, and state legislatures so long as they take a siloed approach to these inherently linked dimensions of the multifaceted social media world.

The FTC’s work will continue regardless of who controls the U.S. Senate. However, the FTC’s examination of social media platforms could be echoed by Congressional review and perhaps new legislation if Senate control passes to the Democrats. Although both parties have raised concerns about social media platforms, they have approached these issues very differently, with many Republicans alleging anti-conservative bias in social media platforms while Democrats are more focused on consumer protection issues and on the spread of disinformation. Democratic leadership of key committees like Commerce could reshape the focus of proposed privacy legislation, impacting everything from review of liability protections for social media platforms to consumer protection of user data and decisions about issues like federal pre-emption – that is, whether a federal privacy law, if passed, would set a floor for data privacy protections (allowing states to pass more stringent laws) or a ceiling (barring states from requiring companies to meet stricter standards than those set under federal law).

As Congress struggles to define federal privacy legislation that spans multiple committees’ jurisdiction and continues to grapple with whether and how to reform the liability protections that online platforms receive under Sec. 230 of the Communications Decency Act, the FTC’s broad approach to studying the full range of interrelated problems in social media and video streaming is the right one: The broad-based information gathering will put the FTC in a better position to understand the nuances of the relationships between these and other problems and to inform the public and make sensible proposals as a result.

This broad-based approach finds resonance in the U.K.’s announcement, made the same week, that it was planning to move forward with a comprehensive approach to online harms.

U.K. Legislation Regulating Online Harms

The U.K. first announced online harms legislation in the spring of 2019 with a white paper describing a bundle of measures intended to address the internet’s role in bringing about societal harms, with a focus on child sexual exploitation and abuse, online sales of illegal drugs, and the use of online platforms by terrorist groups and gangs to recruit new members and radicalize followers. In the 18 months since then, the U.K. government has undertaken a consultation period – receiving and reviewing comments and feedback – and on Dec. 15, issued its final response to the consultation and its plans for a way ahead.

The lengthy government analysis articulated the competing values: fostering innovation and expression, while curbing the deleterious effects of the technologies that are an integral part of everyday life for most U.K. residents. In this final report, the government reiterated its 2019 concerns: cyber bullying and child exploitation, online radicalization by terrorist groups, and use of the internet for illegal activity. It also highlighted another concern that had not been a focus of the 2019 release: the corrosive impact of disinformation campaigns fomented on social media. Specifically,

There is also a real danger that hostile actors use online disinformation to undermine our democratic values and principles. Social media platforms use algorithms which can lead to ‘echo chambers’ or ‘filter bubbles’, where a user is presented with only one type of content instead of seeing a range of voices and opinions. This can promote disinformation by ensuring that users do not see rebuttals or other sources that may disagree and can also mean that users perceive a story to be far more widely believed than it really is.

The paper laments that voluntary approaches within the tech sector have been inconsistent and insufficient. The solution: a multi-pronged approach of government regulation to address online harms, with a particular eye toward issues affecting national security and the welfare of children. The goal: a “coherent, proportionate and effective approach that reflects our commitment to a free, open and secure internet.” The mechanics: establishment of a government regulatory body that will promulgate codes of practice relating to online harms, able to coordinate as necessary with law enforcement agencies; requirements for online companies to provide regular transparency reports, allow independent researchers to access platform data, and improve mechanisms for addressing user complaints; and establishment of an independent review mechanism. The regulation will apply to platforms that allow sharing of user-generated content, and the government makes clear that it intends to focus on the largest platforms first. Finally, the government will also undertake an extensive online literacy program, with the goal of aiding U.K. residents in better understanding the perils of the internet and how to navigate it safely, taking full advantage of the best it offers while protecting themselves from online harms.

The release of this final consultation has gotten relatively little attention, overshadowed by the worsening pandemic, Brexit, and other urgent news. And a great deal remains to be seen: Until the regulatory body is established and begins providing substantive guidance, it will be hard to estimate the true impact of this online harms initiative or how it is likely to balance the range of competing interests. But it bears watching: partly because the U.K. appears poised to wade more deeply into the waters of online regulation than other countries have to date, and partly because the concerns expressed – and the success or failure the U.K. has in tackling them – may have real resonance for similar measures in the United States.


These three developments, all within the span of a week at the end of a particularly chaotic year, flew under the news radar for many people on both sides of the Atlantic. They’re not even the top stories in data privacy and online regulation, as Sec. 230 reform continues to make top news in the United States; as California moves forward with new regulations for the California Consumer Privacy Act and prepares for the standup of a new regulatory body under the recently enacted California Privacy Rights Act; and as organizations of all sizes continue to deal with the fallout of the Schrems II decision that invalidated the Privacy Shield framework for cross-border data transfers from the European Union to the United States.

They are, however, important harbingers of the priorities being defined by regulators, the pressures that online platforms will continue to face in 2021, and of the ways in which what were once distinct threads of legal analysis and policy focus – antitrust, data privacy, online radicalization, cyber bullying, child protection, behavioral economics, and more – are starting to converge. The major online platforms have, until now, reaped enormous financial reward from the stovepiped approaches to dealing with these issues. Going forward, their convergence may signal liability exposure and economic woes for the platforms. But it may also signal an opportunity for societal benefits and increased protection for individuals by reining in the ways in which the pursuit of profits has wrought negative consequences and enabled unintended harms.