The public’s trust in social media platforms is swiftly eroding – the 2018 Edelman Trust Barometer reports that global trust in social media stands at just 41%, with a year-on-year drop of 11% in the United States. Against this troubling backdrop, a contentious public debate has emerged over the platforms’ future, one which has increasingly turned toward calls for regulation. It is now impossible to ignore the unprecedented level of influence wielded by social media companies. They design the algorithms that curate the material billions of people consume every day, unilaterally decide which forms of legal but objectionable content are taken down or left up, and determine which sensitive user information advertisers can leverage to target advertisements. As private enterprises, their decisions are subject to minimal oversight and control by the public and its representatives. In short, they wield enormous power without accountability – and there has been a lot to account for.

The scandals and controversies faced by social media companies in 2018 reflect two major problems. First, that platforms’ aggressive collection and handling of user data often raises serious – even unprecedented – privacy and security concerns. Facebook, for example, came under fire for allowing the shadowy political consultancy Cambridge Analytica to harvest data from millions of users under the guise of academic research – a misstep which landed Facebook founder Mark Zuckerberg in front of Congress – and Google moved up the timeline for the shutdown of its Google+ social network following multiple data leaks.

Second, that platforms have failed to fully recognize the responsibilities that accompany their expansive power to determine the content to which users are exposed. Twitter – among other platforms including Spotify, YouTube, Apple, and Facebook – faced controversy for the unilateral nature of its decision to ban conspiracy theorist Alex Jones, even as many applauded the move. And governmental inquiries into foreign actors’ use of disinformation to manipulate public opinion and undercut democratic processes through social media continued, including an inaugural “International Grand Committee on Disinformation” held in London this November (this time, Zuckerberg didn’t show up, despite a request from lawmakers from nine countries).

We are starting to make meaningful progress toward addressing the first problem. 2018 saw some of the first real action toward regulating how big technology companies deal with user data – at least in Europe. In May, the European Union’s General Data Protection Regulation (GDPR) took effect, a sweeping set of reforms which have already substantially reconfigured how businesses handle, share, and sell user data. The GDPR creates new standards for reporting data breaches, binds companies to a stringent user consent framework, and gives individuals more control over how their data is represented and shared. Perhaps most importantly, the GDPR invests regulators with the power to levy enormous fines – potentially running billions of dollars for the likes of Google and Facebook – against companies which violate its mandates. India, which is home to almost 500 million internet users, has followed suit and is moving toward an expansive data protection bill of its own. No viable legislation of comparable scope has emerged in the United States, which has almost no broadly applicable data privacy regulations (though specific rules do exist in industries like banking). Even so, the size and influence of the European market means that many US businesses have had to adapt to the GDPR regardless. And California lawmakers have taken action at the state level, passing the California Consumer Privacy Act – legislation which provides some GDPR-style rights and protections to Californians.

But data protection standards alone aren’t enough to make social media companies accountable – they can’t solve our second problem. What makes social media companies so enormously influential – and, in some instances, so dangerous – is their ability to determine what users see and when they see it. The problems with this largely unrestricted power don’t dissolve in the presence of robust user consent frameworks and privacy-driven data handling architectures. And it’s tough to hold social media companies accountable when they fail to wield that power responsibly, even on a disastrous scale.

Take, for example, disinformation campaigns orchestrated on Facebook and Twitter by Russian agents hoping to manipulate the outcomes of United States elections. Russian intelligence didn’t have to rely on privacy and security flaws. Disinformation worked because it was optimized to reach vulnerable audiences, and because the platforms often treated it like any other form of content – cat videos, say, or articles from the New York Times. Two Senate Select Committee on Intelligence reports released in mid-December highlighted the scope and complexity of these operations, as well as the limited nature of technology companies’ cooperation with government agencies.

Facebook and Twitter at first simply sat back and watched their systems spread whatever content (however dangerous) triggered the right algorithms. Today they appear to be developing proactive content moderation that takes action against certain pages and publishers – even at the cost of controversy and allegations of censorship. But the policies governing content moderation and the tools with which it is carried out have been – and will likely continue to be – developed and deployed largely behind closed doors, with limited public input. And outside of bad PR, the platforms have not borne the costs of abusive user behavior – provisions like Section 230 of the United States’ Communications Decency Act insulate social media platforms from most liability relating to user-generated content. That said, as Eric Goldman writes, CDA 230 itself has faced a number of recent reductions in its scope – and more dramatic change may be on the horizon, as US and UK lawmakers including Senator Mark Warner (D-VA) call for less forgiving platform liability standards.

So if the progress we’ve seen around data privacy and security standards in 2018 – at least in Europe – isn’t enough to solve social media’s problems, what would be? The last year has provided some hints as to what a more comprehensive, accountability-focused regulatory solution might look like.

First, some social media platforms have taken steps toward greater transparency and more robust self-governance. In October, Twitter released a massive dataset listing accounts and content it had linked to Russian disinformation efforts. A month later, Mark Zuckerberg released a lengthy essay describing Facebook’s efforts to ensure better transparency and policymaking around content moderation practices, including a plan to explore the possibility of independent oversight by what Zuckerberg has called a Facebook “Supreme Court.” But while these steps show promise, they may be too little, too late – and platforms remain unwilling to hand over the reins to regulators and critics. The solutions that Facebook and its peers implement will almost certainly be incremental – carefully calibrated tweaks which protect their core business models. And with public trust in social media running low, quicker and more decisive action might prove necessary.

Second – and on the opposite side of the spectrum – some have called for aggressive antitrust action to break up Facebook, which currently holds a dominant market position. Columbia Law School’s Tim Wu recently argued that the enormous size of today’s tech companies is incompatible with healthy, competitive markets, and that this lack of competition has bred stagnation. And 2018 has seen senior political figures in Europe and the United States raise the possibility of antitrust action against social media companies, calls which have been lent further credibility by a landmark $5 billion EU antitrust fine against Google this July. Following a regulator-mandated corporate breakup, proponents argue, discontented Facebook users would have the opportunity to vote with their feet, leaving for one or more of a dramatically increased number of competing platforms. In a more competitive market, platforms might have the opportunity to differentiate themselves on the basis of responsible, pro-consumer practices – and could be swiftly punished by users for mismanagement of the kind we have recently witnessed. Of course, there is little denying that antitrust would be a dramatic solution, one which would doubtless be met with staunch resistance from industry, many policymakers, and a substantial section of the public – not to mention Facebook’s formidable legal and political apparatus.

A third and more moderate solution – originally championed by Jack Balkin and Jonathan Zittrain, and recently introduced by Senator Brian Schatz (D-HI) as the Data Care Act of 2018 – might involve assigning new, legally binding “fiduciary” duties to technology companies which handle user data. Traditional fiduciaries like doctors, lawyers, and accountants owe their clients a duty of loyalty, and are barred from furthering their own interests at the expense of those of the client. We trust social media companies to handle our sensitive data and to determine which content we see – indeed, this trust is what enables them to do business – so we might reasonably expect them to be bound by a comparable set of duties. While it might not, for example, be illegal for platforms to expose vulnerable users to disinformation or to allow advertisers to hawk whiskey to alcoholics, such practices represent obvious abuses of trust. By legally formalizing intuitive duties owed by platforms to their users, a fiduciary framework could provide leverage for individuals and regulators looking to hold platforms accountable for failures that are presently beyond redress.

2018 has been a year of escalating threats, intense debate, and, to some extent, meaningful regulatory action. The thesis of future efforts to make social media companies more accountable should be that abdication is no longer an option – that today’s platforms are responsible for the safety and wellbeing of their users to an unprecedented extent. There’s no room to be hands-off at a time when partisanship, threats to information quality, and surveillance capitalism are reaching epidemic proportions. Data protection regulations are a good first step – one we have yet to take here in the United States – but it will take something far more creative (and likely more controversial) to get us across the finish line.


Photo credit: Facebook CEO Mark Zuckerberg arrives to testify before a joint hearing of the US Senate Commerce, Science and Transportation Committee and Senate Judiciary Committee on Capitol Hill, April 10, 2018. (Jim Watson/AFP/Getty Images)