The 2016 election was a warning, and 2020 made it clear: The monetization of personal data poses a direct threat to civil rights and democracy. The good news: the Biden-Harris administration has an opportunity to check this destructive trend, and here are three simple steps they can take along that path.
First, some background.
Digital platforms undermine civil rights. Comprehensive audits have documented a truth that many people already suspected: Social media platforms like Facebook, and user-service platforms like AirBnB, operate in ways that reinforce structural racism. In some cases, the problem stems from the detailed personal profiles and revenue models that allow employers, lenders, and others to target their advertising based on a user’s gender, age, race, or ethnicity. In other cases, the platforms’ use of personal profiles and user-directed models creates opportunities for individuals to discriminate in buying or selling services – a model that led to Black homeowners getting lower prices for their AirBnB rentals and Black renters finding their AirBnB bookings rejected. Although these and other platforms have faced multiple lawsuits and made some changes, the problems of discrimination in targeted advertising and person-to-person sales remain.
The inequities that arise at the intersection of race and data aren’t limited to big platform providers; the algorithms increasingly used to predict behavior, set pricing, and open the door to opportunities routinely rely on detailed personal profiles, with the result that those algorithms build in some of society’s most pernicious biases related to race and gender. To take just a few examples: Facial recognition algorithms are routinely found to be less accurate in identifying non-White, non-male faces. Amazon, a company with some of the world’s most advanced algorithmic design, had to stop using a job applicant screening algorithm it developed because the machine learning program consistently prioritized male applicants over female job candidates. A correctional system algorithm used by parole boards in many jurisdictions has been less effective than human parole board review in predicting recidivism rates and fraught with errors that correlated with racial bias – and yet many jurisdictions continue to use algorithms in making decisions about parole. (More recent research suggests the algorithmic accuracy may be improving.) Even everyday ride-sharing apps have been shown to apply differential pricing based on race. The growing body of evidence on racial, gender, and other biases in algorithms raises the specter of government and private sector entities being seduced by the appeal of scientific-seeming results, without understanding or addressing the ways in which biased data sets, opaque machine learning processes, and other limitations can cause algorithms based on personal data to reinforce, rather than rise above, society’s most entrenched imbalances.
Digital platforms undermine democracy. The 2016 U.S. presidential election made clear the power of social media to spread propaganda and disinformation. It’s no wonder: It’s a social media dictum that “content that enrages, engages,” and Facebook’s own former executives have testified that they designed the platform to be addictive – to increase profits by maximizing user engagement. That addictive model has proved fertile ground for foreign adversary governments, which create “inauthentic” – or fake – accounts on platforms like Facebook, Twitter, and Instagram: trolls operating in St. Petersburg, Russia sought to reignite tensions in Baltimore over the death of Freddie Gray, and Iranian government agents sought to influence the 2020 U.S. elections.
The problems aren’t limited to foreign adversaries or to electoral politics. Facebook’s internal research concluded that 64 percent of people who joined extremist groups on the platform did so because Facebook’s algorithms recommended the content. The result: Online extremism is spilling off the screen and into the streets, as QAnon believers get elected to office and “militia” groups use Facebook to plot the kidnapping of a state governor. The spread of hate speech has become so endemic that it led to a Facebook advertising boycott over the summer, but so far consumer pressure has been insufficient to bring about lasting change.
Digital platforms reinforce structural power imbalances in society, to the detriment of the less powerful. With today’s technologies, Amazon can carry out continuous surveillance of its warehouse workers, measuring their steps, their proximity to packages, and how long they stay in the bathroom on break. Companies demand that workers carry smartphones with apps that include precision geolocation, allowing employers to track their workers’ locations at all times – even during off-duty hours. The increase in remote work and remote learning during the pandemic has expanded opportunities for private sector surveillance, with workplaces and schools demanding real-time video feeds of on-duty staff at remote locations and of students taking tests at home.
Although these problems are real, data-driven technologies have also brought consumers convenient services, new opportunities, and entertaining pastimes. What can the new administration do to preserve what’s best in tech while curbing its abuses?
Three Steps to Tackle Big Tech
First, expand legal protections for the right to privacy by recognizing the harm that individuals can suffer from the collection, aggregation, and analysis of information about them. Traditionally, U.S. law has focused on protecting information that can be directly monetized – Social Security numbers, credit card information – from unauthorized disclosure. Many people, however, would say that they’re at least as concerned, or more so, about unauthorized access to their photos, location, internet searches, and other data that is used to build personal data profiles. The new administration should task the Federal Trade Commission, the nation’s privacy and antitrust watchdog, with proposing new rules expanding the scope and definition of privacy harms. In addition, the White House should propose a comprehensive federal data privacy law that addresses these more nuanced issues of structural power imbalances and personal data aggregation.
Second, roll out a comprehensive public education and awareness campaign. There’s been a growing call for digital awareness and media literacy programs in K-12 education. While important, school programs alone aren’t enough to combat online disinformation, as studies show that older Americans are far more likely than younger ones to believe and share online “fake news.” The new administration should develop a comprehensive set of public awareness resources adapted to different audiences, including educational materials that can be made available to local school districts at the K-12 level, and a widespread public service information campaign that includes traditional television spots as well as materials that can be used by local libraries, senior centers, community organizations, and more.
Third, take a whole-of-government approach to tackling private sector surveillance and algorithmic impacts. The Biden administration should develop within each Cabinet department a set of activities focused on data privacy, algorithmic impact, and their implications for civil rights and social justice. For example, the Department of Justice can promulgate guidance on the use of algorithms in probation and parole decisions – an area in which studies have shown that algorithms are less accurate than human review in predicting recidivism and can have a disparate impact based on race. The Department of Labor can promulgate guidance for employers on workplace surveillance. The Department of Health and Human Services can issue guidelines on the use of algorithms to assess health needs, whether for patient care or provision of benefits. To be clear, the focus needn’t be on regulations for the federal government’s own use of algorithms – although that’s also important, many government activities are already more regulated than the private sector. Instead, this work could set the standards that private sector entities and state and local governments must meet in order to be eligible to receive federal funding. These standards would serve a dual purpose: providing substantive guidance to entities that lack the resources to do the necessary research on their own, and harnessing the power of taxpayer funding to nudge the data economy in directions that better serve the individuals whose information underpins these digital technologies. To keep this sprawling effort coherent, the department-level work should be coordinated through a centralized lead, which could be housed in an organization such as the White House Office of Science and Technology Policy.
The incoming administration will face a host of urgent challenges, including the four priorities that the transition team has already identified: the pandemic, the economy, systemic racism, and climate change. But alongside these and other pressing concerns, this White House has an opportunity to lay the foundation for a society and an economy in which one of today’s most vital forms of currency – individuals’ personal data – is managed in a way that continues to foster technological innovation and business competition, but that better serves the American people.