On Wednesday, Mark Zuckerberg finally ended days of silence and set out on a media tour to explain Facebook’s role in the Cambridge Analytica data scandal. CNN’s Laurie Segall asked him if he was worried about Facebook facing government regulation after what he admitted was a massive breach of trust between the platform and its users. “I actually am not sure we shouldn’t be regulated,” he said. “I think in general technology is an increasingly important trend in the world. I think the question is more what is the right regulation rather than ‘yes or no should we be regulated?’”

It is certainly time for a robust international conversation about how best to regulate social media platforms, and data privacy more generally. Major technology companies, including Facebook, Google, Twitter, Snap and others, define the information ecosystem in much of the world. Barely regulated and rarely held accountable, these companies are transforming the public sphere. While these platforms present new opportunities to connect people around the world, they also create new spaces for bad actors who seek to spread misinformation, encourage terrorism or incite violence, engage in online harassment, steal personal data, restrict free speech and suppress dissent.

As this urgent conversation gets underway, here are some factors to consider when imagining new regulations:

Scale and urgency of the problem. Things are moving quickly. As Zuckerberg noted in his CNN interview, bad actors are already plotting attacks on future elections, and new revelations about the nature and scale of past data breaches surface every day. So the typical cycles for feedback on proposed rules may not work. And yet we must allow some time for information to be gathered and for debate and discussion to unfold. The government should take steps to hasten deliberations, including bringing in diverse perspectives from across society and demanding urgent testimony from technology executives.

Diversity of platforms. Every platform is, in some ways, its own world. Even though most of them share some features, each is unique and constantly evolving. WhatsApp is different from Snapchat is different from Facebook is different from YouTube. Consequently, it would be pointless to impose a single set of rules on all of them. That is why principles matter most, especially the principles of transparency and accountability.

Opacity of the platforms. There is a black-box quality to today's technology platforms: we as a society know very little about the inner workings of companies like Facebook and Google. This information asymmetry between the companies and the public poses a significant democratic problem, because the companies can leverage it to blunt our attempts to exert control over them.

It should be obvious by now that the United States needs privacy protections modeled on the EU's General Data Protection Regulation (GDPR), which most of these companies will have to comply with anyway. We believe that, to protect citizens, social media regulation should promote three core principles: data privacy and transparency to users; transparency to government and independent auditors; and responsibility by the platforms for addressing their social costs.

From these principles we propose the following regulatory framework:

1 – Registration with the Federal Government

A registration process would give the U.S. government, our elected representatives, an opportunity to confirm that a platform has appropriate protocols in place, and to make those protocols more transparent. An application to become a registered “Social Media Platform” should include detailed descriptions of:

  • How the platform works
  • Privacy protections
  • Data acquisition, storage, and usage policies
  • Breach disclosure procedures and remedies
  • How the platform prohibits hate speech
  • How the platform manages disinformation (which we define as the intentional dissemination of false information or advertising to achieve a political purpose)

For each of these areas, the applicant should provide a high-level executive summary that is intelligible to the average social media user.
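To make such filings comparable across very different platforms, a registry could also require them in a machine-readable form alongside the plain-language summaries. Here is a minimal sketch of what that might look like; every field name is our own hypothetical invention, not a proposed legal standard:

```python
from dataclasses import dataclass


@dataclass
class RegistrationFiling:
    """Hypothetical machine-readable registration filing.

    The field names are illustrative; an actual rule would fix
    the schema and the required level of detail.
    """
    platform_name: str
    how_it_works: str           # how the platform works
    privacy_protections: str    # privacy protections offered to users
    data_policies: str          # data acquisition, storage, and usage
    breach_procedures: str      # breach disclosure procedures and remedies
    hate_speech_policy: str     # how the platform prohibits hate speech
    disinformation_policy: str  # how the platform manages disinformation
    executive_summary: str      # plain-language summary for the average user

    def is_complete(self) -> bool:
        # A registry could reject any filing with an empty section.
        return all(
            getattr(self, name).strip()
            for name in self.__dataclass_fields__
        )
```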

Of course this raises the question: What is a “Social Media Platform”? Is YouTube one? Google? Uber? This will be an important question to answer. We think a Social Media Platform should be defined as a web-based application whose primary utility is for the user to share user-generated content and make social connections, whether professional or personal. A threshold for registration, perhaps based on revenue or the scale of the platform, should be established.
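The threshold itself could be a simple bright-line test. A sketch under assumed numbers (the real figures would be for the regulator, not us, to set):

```python
def requires_registration(monthly_active_users: int,
                          annual_revenue_usd: float) -> bool:
    """Hypothetical bright-line registration test.

    Both thresholds are illustrative placeholders, not proposals
    for specific numbers.
    """
    USER_THRESHOLD = 1_000_000         # assumed scale trigger
    REVENUE_THRESHOLD = 50_000_000.0   # assumed revenue trigger
    return (monthly_active_users >= USER_THRESHOLD
            or annual_revenue_usd >= REVENUE_THRESHOLD)


# A platform below both lines would stay exempt:
print(requires_registration(200_000, 1_500_000.0))  # False
```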

And which federal agency should oversee registration? There are multiple options, including creating a new one; The Atlantic's Franklin Foer, for instance, suggests the creation of a “Data Protection Authority.” We think the Federal Trade Commission, with its core mission of protecting consumers, is a strong candidate. With a new slate of FTC commissioners on the way in, some believe the commission will be more vigorous in its oversight.

2 – Quarterly Disclosures and Certifications

Once registered, platforms should make quarterly public disclosures describing any changes to their policies, any security or usage violations, and their response to any such violations. These disclosures should also include a certification, signed by the platform's CEO, that the platform is in compliance with its legal obligations. After the Enron scandal, the Sarbanes-Oxley Act required CEOs and CFOs to personally certify financial statements in order to combat accounting fraud. The idea is that certification gives senior executives skin in the game and makes them focus on internal controls.
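In code terms, one could think of each quarterly filing as a structured record that simply does not count until the chief executive signs it. A sketch, with all field names hypothetical:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class QuarterlyDisclosure:
    """Hypothetical quarterly disclosure record; fields are illustrative."""
    quarter: str                                         # e.g., "2018-Q1"
    policy_changes: List[str] = field(default_factory=list)
    violations: List[str] = field(default_factory=list)  # security or usage violations
    responses: List[str] = field(default_factory=list)   # response to each violation
    ceo_certification: str = ""                          # signed compliance attestation

    def is_filed(self) -> bool:
        # Mirrors the Sarbanes-Oxley idea: the filing does not
        # count until the chief executive personally signs off.
        return bool(self.ceo_certification.strip())
```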

3 – Auditing and Access for Independent Researchers

A new government agency conducting on-site examinations of platforms to test compliance would likely attract little political will, and its efficacy would be dubious. Instead, platforms should make their systems available to commercial and academic researchers, funded and sanctioned by the government, to test their compliance with federal regulations. Part of the cultural DNA of the tech sector is a spirit of transparency and collaboration; this model would leverage that culture. Access to platforms for such purposes should not be blocked by intellectual property concerns, but systems will need to be in place to manage those concerns; we don't want more Aleksandr Kogans.

4 – Anti-Fraud Provisions

Just as the Securities Exchange Act of 1934 provides for civil and criminal liability for fraud in connection with the purchase or sale of securities, so should this framework assign liability to any party who uses a Social Media Platform to commit fraud, including the use of personal information without consent.

5 – Private Enforcement

There is likely also little political appetite in America today for a new fleet of government enforcement investigators and lawyers. But again, the securities laws provide a model. Platform users should be empowered under these new regulations to sue platforms directly for violations. Calculating liability will present a challenge, but we think the number of users on a platform is a relevant metric.
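To make the idea concrete, damages could scale per user, subject to a statutory cap. A back-of-the-envelope sketch; the dollar figures are our own assumptions, not numbers from any statute:

```python
def statutory_damages(users: int,
                      per_user_amount: float = 100.0,
                      cap: float = 500_000_000.0) -> float:
    """Hypothetical per-user damages formula.

    Whether `users` means total platform users or users affected
    by a violation, and what the per-user amount and cap should
    be, are choices for Congress; these figures are placeholders.
    """
    return min(users * per_user_amount, cap)


# A violation touching 50 million users would hit the assumed cap:
print(statutory_damages(50_000_000))  # 500000000.0
```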

6 – Levies for Remediation of Social Costs

Facebook's market cap isn't far from that of ExxonMobil, and an industry with companies at such scale produces some form of pollution or other negative externalities. Regulators often seek to make industry pay for them. While Google and Facebook have each invested in initiatives designed to address some of these externalities, particularly disinformation and its impact on the health of the news media, clearly much more is needed. Governments should explore appropriate levies on these technology companies to fund programs that advance media and information literacy and diversity, and the sustainability of the news media ecosystem. Consider what the EU High Level Expert Group on Fake News is proposing: the creation of R&D centers and a Center of Excellence to study these issues, programs to bolster independent media and journalism, and making media literacy part of the EU's common school curriculum. Technology companies should help pay for similar initiatives in the United States.

Regulation will not solve all of today's problems; it never has. But it can meaningfully reduce the threats that social media currently presents to personal privacy and indeed to national security. And it can help defend against future threats, as new technologies for computationally generating propaganda merge with new techniques for targeting users. We do not need to reinvent the wheel. There are proven models that work and that present minimal costs to government.

To borrow a couple of slogans from the tech sector: Done is better than perfect. Let’s ship it.
