After-Action Report: Online Disinformation on Election Day

We’ve learned from this year’s midterms that social media companies are still not up to the task of addressing election disinformation in a timely and consistent way.

Lies, disinformation, and dirty tricks in our elections are a problem as old as our democracy – fake phone calls, false billboards, or even inaccurate and misleading mail. But today, social media lets these lies spread like wildfire – causing harms that range from deceiving someone out of their right to vote to the deadly January 6th attack on the Capitol.

That’s why, every election cycle since 2016, Common Cause’s Stopping Cyber Suppression program has vigilantly monitored social media for voting- and election-related disinformation – that is, false information that could cause a voter to miss their opportunity to vote or undermine their faith in the results of the election. Once we identify disinformation, we alert social media platforms and ask them to take action: remove the content, label it, add “friction” that makes it more difficult to share, or otherwise ensure it does not interfere with voters’ ability to participate in elections.

Then, we partner with a diverse coalition to combat these lies with accurate information from trusted sources – inoculating voters against the onslaught of disinformation narratives. At the same time, we use social media positively to answer voters’ questions, identify issues at polling places, and connect them to nonpartisan voter information from election officials.

Unfortunately, this year’s midterms showed that those companies are still falling short. That’s particularly disappointing given that they had two years to plan after 2020 election disinformation spawned hundreds of anti-voter state bills as well as a deadly insurrection at the Capitol.

We knew going into 2022 that disinformation would pose a serious challenge for voters, advocates, and election officials. That’s why, in May and October of this year, we offered platforms like Facebook and Twitter detailed recommendations on how they could protect voters from election disinformation. In November, we also alerted these platforms to the need for rapid responses to election disinformation and asked for a clear process for escalating what we found.

As Election Day grew closer, we flagged numerous posts that openly violated these companies’ own civic integrity policies. We prioritized only the most dangerous content: posts containing outright false information about elections that had gone viral or were likely to.

From Friday, November 4th through Wednesday, November 9th, we reported almost two dozen highly concerning pieces of content, including: 

  • 15 Tweets with viral reach, only one of which received a label and another of which resulted in a suspension; 
  • 7 TikToks, all found in violation and either removed or restricted from appearing in feeds; 
  • 3 Facebook posts, all of which were found to contain no violation; and
  • 1 Instagram post which resulted in a label. 

Each of these posts threatened violence, risked exposing election workers’ identifying information, or contained overt falsehoods about the election process – all in violation of these platforms’ civic integrity policies. The lax response we received left us with significant cause for worry.

Among the worst offenders was Facebook – which recently sparked controversy by announcing it would no longer fact-check Trump after he announced his presidential bid. Despite parent company Meta’s stated intent to provide additional context on key issues and its published commitment to election integrity, when Common Cause reported examples of election disinformation (like viral, unproven claims about voting machine issues), the company found these posts to be perfectly acceptable.

The afternoon before Election Day, we asked Meta to enforce its policies against a misleading Instagram video that presented footage from March 2022 as if it were current. Despite our providing Meta with a detailed fact-check of the video, we were told that the video did not violate its policies. But the next day, once the Washington Post reported on the false video, Meta informed us that it would now be labeled with a “Missing Context” warning and a link to fact-check articles. This is a positive result – but it shouldn’t take repeated follow-up from advocates and a major newspaper’s attention for Meta to apply its own policies as written.

Twitter, in open turmoil since its acquisition by Elon Musk, has also failed to respond in a timely manner when alerted to harmful disinformation spreading on its platform. Viral posts we escalated to Twitter spent hours “in review” – free to spread in the meantime – in an environment where harmful disinformation about election administration was already rapidly gaining reach and engagement.

TikTok has been far more responsive and quicker to apply its policies, removing 7 videos that contained calls for violence, claims of election fraud, and attempts to target poll workers. However, recent research from the nonprofit media lab Accelerate Change has shown that TikTok videos containing neutral political messages focused on getting out the vote appear to be suppressed – receiving far less reach than similar videos without pro-voter content.

The 2022 midterms made clear that platforms still have a long way to go in consistently and proactively applying their civic integrity policies – especially against election disinformation. The public statements and PR these companies put out remain at odds with the real-life challenges voters face on their platforms.

We will continue to press platforms to implement stronger civic integrity policies. These policies, if adopted and enforced, would help stop the spread of disinformation that harms voters. They would also provide greater transparency to researchers and civil society advocates studying the harms of disinformation.

A thriving democracy requires an informed and engaged public. But increasingly, partisan bad actors have weaponized online disinformation to delegitimize our elections, intimidate voters, and fuel a surge of anti-voter legislation in states across the country. Our Stopping Cyber Suppression team is on the case, and we’ll continue exposing the spread of online disinformation so we can generate the public pressure required to make platforms like Facebook and Twitter enforce their own policies.
