This failure is inherently tied to platforms’ business models and practices that incentivize the proliferation of harmful speech. Content that generates the most engagement on social media tends to be disinformation, hate speech and conspiracy theories. Platforms have implemented business models designed to maximize user engagement and prioritize their profit over combating harmful content.
While the First Amendment limits our government from regulating speech, there are legislative and regulatory tools that can rein in social media business practices bad actors exploit to amplify speech that interferes with our democracy.
The core component of social media platforms’ business model is to collect as much user data as possible, including age, gender, location, income and political beliefs. Platforms then share data with advertisers for targeted advertising. It should come as no surprise that disinformation agents exploit these capabilities to micro-target harmful content, particularly to marginalized communities.
For example, the Trump campaign used Facebook to target millions of Black voters with deceptive information to deter them from voting.
Comprehensive privacy legislation, if passed, could require data minimization standards, which limit the collection of personal data to what is necessary to provide service to the user. Legislation could also restrict the use of personal data to engage in discriminatory practices that spread harmful content such as online voter suppression. Without the vast troves of data platforms collect on their users, bad actors will face more obstacles targeting users with disinformation.
Platforms also use algorithms that determine what content users see. Algorithms track user preferences, and platforms optimize them to maximize engagement, which can lead users down a rabbit hole of hate speech, disinformation and conspiracy theories. Algorithms can also amplify disinformation, as when conspiracy theorists used the “stop the steal” moniker across social media platforms to organize offline violence.
Unfortunately, platform algorithms are a “black box,” with little known about their inner workings. Congress should pass legislation that holds platform algorithms accountable. Platforms should be required to disclose how their algorithms process personal data, and algorithms should be subject to third-party audits to mitigate the dangers of algorithmic decision-making that spreads and amplifies harmful content.
Federal agencies with enforcement authority could apply it to limit the spread of harmful online speech that results from platform business practices. For example, the Federal Trade Commission can use its power against unfair and deceptive practices to investigate platforms for running ads with election disinformation. The Federal Election Commission could require greater disclosure of online political advertisements, providing more transparency about which entities are trying to influence our elections.
Outside of legislative and regulatory processes, the Biden administration should create a task force for the internet, consisting of representatives from federal, state and local governments, business, labor, public interest organizations, academia and journalism. The task force would identify tools to combat harmful speech online and make recommendations for an internet that better serves the public interest.
Federal lawmakers must also provide greater support for local journalism to meet the information needs of communities.
Social media companies have proved that profits are more important to them than the safety and security of our democracy.
Yosef Getachew is director of the Media and Democracy Program for Common Cause.