(Editor’s Note: This article was cross-posted with Tech Policy Press.)

A collaboration between social scientists and Meta has been held up as a potential “new model for platform research” that may help explain the effects social media companies have on politics and democratic institutions. With the first results from this ongoing project – four peer-reviewed studies – released last week, now is a good time to ask whether the unusual endeavor is living up to the hype.

There are reasons for skepticism. 

The project is expected to produce at least 12 additional studies, so it is too soon for definitive conclusions. But while Meta gave outside researchers unprecedented access to platform data for a period of time around the 2020 election, the company’s self-serving distortion of the results illustrates an important weakness of any “collaborative” venture tied so closely to the subject under study. In this instance, Meta’s obfuscation appears to have succeeded: Mainstream media headlines and coverage of the research results often echoed the company’s line.

Policymakers in the United States, European Union, United Kingdom, and elsewhere are debating – and in the EU, beginning to implement – measures intended to shed more light on the systemic risks of social media and, more broadly, the effects of technology on democracy. The efficacy of the groundbreaking Meta studies ought to inform choices legislators and regulators make between voluntary corporate-academic collaborations and government-imposed disclosure requirements. The Meta studies, in our view, highlight the fallibility of corporate collaboration and the need for government transparency mandates.

A Focus on Polarization

The four initial Meta studies – three published in the journal Science, one in Nature – reflect the careful work of leading academic researchers from New York University, Princeton, Dartmouth, Stanford, and other prominent schools. The findings are numerous, nuanced, and not easy to summarize in simple or dramatic terms. Three of the four studies found, among other things, that subtle adjustments to Facebook’s algorithms or other features did not significantly reduce users’ levels of partisan polarization during a three-month period around the 2020 election. As one of the authors, Dartmouth political scientist Brendan Nyhan, told Science, such findings suggest that certain interventions proposed to address problems on social media may be less promising than some people hoped.

But in a widely quoted corporate blog post, Nick Clegg, the former British politician who serves as Meta’s president of global affairs and number-two executive, went much further. Clegg asserted that the four studies “add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or have meaningful effects on these outcomes.” (Affective polarization describes partisan animosity that transcends disagreement on specific issues.)

Clegg’s statement echoed congressional testimony by his boss, Meta founder and chief executive Mark Zuckerberg, in the wake of the January 6, 2021, attack on the U.S. Capitol. Then and now, this argument is misleading in at least three ways. 

Meta’s “Straw Man” Argument 

First, it is a classic “straw man” argument, knocking down a position embraced by no serious analyst of social media companies. No informed observer contends Meta’s main platforms alone have caused rising levels of harmful polarization. Our own analysis, and the empirical work of many others, including some of the researchers involved in the current Meta collaboration, point to multiple causes of partisan hatred in the United States. These other forces include long-term realignment of the Republican and Democratic Parties, hyper-partisan talk radio and cable television, and the uniquely divisive role that former President Donald Trump has played in public affairs. Social media is not the original or main cause of increasing divisiveness, which began decades before the invention of today’s digital platforms in the 2000s.

But contrary to Clegg and Zuckerberg’s contentions, a range of experts have concluded that widespread use of social media has exacerbated preexisting partisan animosity. In an October 2020 essay in Science, a group of 15 researchers, some of whom are co-authors of the more recent studies, wrote: “In recent years, social media companies like Facebook and Twitter have played an influential role in political discourse, intensifying political sectarianism.” Reinforcing the point, a separate quintet of researchers summed up their review of the evidence in an August 2021 article in Trends in Cognitive Sciences: “Although social media is unlikely to be the main driver of polarization,” they concluded, “we posit that it is often a key facilitator.” Nothing in the new studies contradicts these conclusions.

Oversimplifying Findings 

Apart from employing a straw man fallacy, a second way that Clegg distorted the latest findings is by vastly oversimplifying them. Illustrating this requires some specifics. 

One of the new studies looked at the effect of replacing Facebook’s existing content-ranking algorithm, which prioritizes posts likely to elicit user engagement, with one that ranked content in strict reverse-chronological order. This experiment didn’t significantly alter polarization levels, as measured by an after-the-fact survey. But the study, like the others released last week, measured changes over only a three-month period around the November 2020 election. It is altogether possible – perhaps even likely – that by the fall of 2020, after a rancorous campaign, people’s polarization levels were already well established and unlikely to vary based on a short-term algorithm adjustment. What’s more, the study found that the reverse-chronological feed had some salutary effects: It decreased the amount of “uncivil” content and slur words users encountered and increased content from moderate friends and sources featuring ideologically mixed audiences.

A separate study focused on ideological “echo chambers” found that reducing Facebook users’ exposure to “like-minded” sources didn’t affect polarization over the three-month period. But it did increase exposure to “cross-cutting” sources and reduce uncivil content. A third study removed “reshared” content of the sort that is thought to contribute to more sensational material “going viral.” A 90-day reprieve from reshares didn’t significantly affect users’ political attitudes, but it decreased content from political news sources deemed to be “untrustworthy” while it increased uncivil content.

A fourth study compared the enormous universe of political news that users could have seen on Facebook to two subsets: the narrower selection that the platform’s algorithm actually presented to them and the still-narrower category of content that users engaged with, meaning the material they shared, commented on, and/or “liked.” This study, which did not attempt to measure changes in polarization, yielded troubling findings. It found a high degree of “ideological segregation in exposure to political news” on Facebook. And the platform’s algorithm made things worse by heightening the degree of polarized content people actually encountered. Users played a role, too, exacerbating the segregation through their choices about which content to engage with.

Meta’s Spin on the Research Results

If all of this seems difficult to summarize, that’s because it is. But Meta’s claim of exoneration seems wrong no matter how it’s considered. In fact, the company itself has implicitly acknowledged that it plays a role in stoking partisanship. In May 2020, the company posted an article on its corporate blog entitled “Investments to Fight Polarization.” Written by Guy Rosen, then vice president for integrity and now chief information security officer, the post pointed to “some of the initiatives we’ve made over the past three years to address factors that can contribute to polarization.” The initiatives included hiring more moderators to remove incendiary content, combating hate speech more aggressively, and adjusting users’ News Feeds to prioritize posts by friends and family over those of news publishers.

Why were such initiatives necessary? Not because the company was concerned about hypotheticals: It already had amassed substantial internal research, both qualitative and quantitative, on its role in stoking division across the world. This research came to light in the trove of internal documents disclosed in 2021 by whistleblower Frances Haugen, a former product manager at the company who photographed thousands of pages of material. 

The academic researchers who have spent more than three years working with counterparts at Meta were restrained in expressing their discomfort with the company’s spin on their four studies published last week. The Wall Street Journal’s Jeff Horwitz, who helped lead the reporting team that published the Haugen leaks, asked two of the lead academic researchers about Clegg’s take on their work:

The leaders of the academics, New York University professor Joshua Tucker and University of Texas at Austin professor Talia Stroud, said that while the studies demonstrated that the simple algorithm tweaks didn’t make test subjects less polarized, the papers contained caveats and potential explanations for why such limited alterations conducted in the final months of the 2020 election wouldn’t have changed users’ overall outlook on politics. 

“The conclusions of these papers don’t support all of those statements [by Meta],” said Stroud. Clegg’s comment is “not the statement we would make.”

Clegg’s attempt at spin reveals Meta’s heavy hand – not in meddling with the research itself, but in setting the overall terms and interpreting the results for journalists and the public. Michael Wagner, a professor in the University of Wisconsin-Madison’s School of Journalism and Mass Communication who served as the rapporteur to observe and chronicle the collaboration between Meta and the researchers, penned a retrospective on the effort in Science that was published alongside the research, and spoke to a Science correspondent about it. While he concluded that the research itself is “a net good,” he described the ways in which the company pulled the strings:

Meta set the agenda in ways that affected the overall independence of the researchers. Meta collaborated over workflow choices with the outside academics, but framed these choices in ways that drove which studies you are reading about in this issue of Science. Moreover, the collaboration has taken several years and countless hours of time—limiting the ability of the outside academics to pursue other research projects that may have shaped important public and policy conversations.

Other mainstream media outlets appeared to be less on guard against Meta’s spin than The Wall Street Journal. For example: 

  • Politico demonstrated the danger of headline overstatement when it went with “New studies: Facebook doesn’t make people more partisan.” While no headline could fully capture the four Meta studies, this one must have made the company’s public relations people particularly happy.
  • The Washington Post embraced Meta’s dubious view: “The findings,” it reported, “are likely to bolster social media companies’ long-standing arguments that algorithms are not the cause of political polarization and upheaval. Meta has said that political polarization and support for civic institutions started declining long before the rise of social media.”
  • The New York Times did a good job of summarizing the research itself but quoted Katie Harbath, a former public policy director at Meta during the 2020 election, whose comments also pummeled Clegg’s straw man. The studies upended the “assumed impacts of social media,” Harbath told the Times. People’s political preferences are influenced by many factors, she said, and “social media alone is not to blame for all our woes.”

Assessing the Studies Fairly 

The media’s mixed – and in some instances muffed – interpretations of the initial four studies cast doubt not on the particulars of the research, but on its impact on the public’s broader understanding of these matters, given Meta’s strong influence. With its framing of the results, Meta seems to have a few key audiences in mind. Not only does the company seek to influence journalists and the general public, but top management likely also wants to calm Meta’s own staff, whom it has previously tried to reassure about the company’s contribution to polarization. Still, the primary audience is most likely policymakers, whom Meta is actively courting in the United States and around the world. 

In Europe, policymakers are beginning to implement the Digital Services Act (DSA), the EU’s ambitious attempt to regulate technology platforms. Large online platforms are required to assess “systemic risks” they may pose to society, including negative effects on civic discourse and elections. Likewise, in the U.K., the proposed Online Safety Bill contains various requirements around “risk assessment.” Influencing the bounds for such assessments is likely a goal of Meta’s lobbying efforts.

Another issue in play is how to mandate that independent researchers obtain access to platform data to study systemic risks. The EU must still lay out the specific mechanisms for such access, as required by Article 40 of the DSA. And in the United States, proposed legislation – including the Platform Accountability and Transparency Act (PATA) put forward in the Senate and the Digital Services Oversight and Safety Act (DSOSA) put forward in the House – would create a similar mandate. Meta may be keen to show that its model in this collaboration can inform such rules.

While it is far too soon to close the book on the outcome of this collaboration, it does seem clear that such mandates must be designed to avoid the types of problems identified by Wagner and to enable what he calls “opportunities for path-breaking, comprehensive scholarship that does not require a social media platform’s permission.” 

Until those mandates are in place, it will be impossible to robustly assess the long-term impact of social media on phenomena such as polarization. But as the promised remaining dozen papers emerge from this collaboration, it is clear that journalists and policymakers would do well to read them closely, and to view Meta’s spin with skepticism.

IMAGE: The Meta logo is displayed during the Viva Technology conference at Parc des Expositions Porte de Versailles on June 14, 2023, in Paris, France. (Photo by Chesnot via Getty Images)