How generative AI poses a threat to democracy

Disinformation created using generative AI technology has become a significant threat to our elections. Massachusetts’ leaders must act.

Our democracy can only thrive when voters have access to accurate information. But deepfakes produced with artificial intelligence (AI) are being weaponized to spread disinformation and suppress votes. Consider the deepfake video of Ron DeSantis declaring he was dropping out of the 2024 presidential race, or the GOP deepfake video depicting what America would look like if President Biden were reelected. 

A new Massachusetts law would regulate the use of artificial intelligence in political advertising, increasing transparency and accountability. 

What Are Deepfakes?

Deepfakes are digitally altered videos, audio recordings, or images that depict events or statements that never actually occurred, and they can be used to mislead voters. With this AI technology, you can literally put words in other people’s mouths and expressions on their faces. It’s troubling. 

In 2018, filmmaker Jordan Peele produced a short video demonstrating the dangers of deepfakes.

Unfortunately, in the six years since that video was released, deepfakes have only become cheaper and easier to produce — and troublingly, far more convincing. 

Why Do We Need to Act Now?

AI technology is progressing quickly, and it is becoming more difficult to distinguish deepfakes from reality. A video that might have taken a large budget and full production team to create a few years ago can now be put together by everyday users with just a few clicks. 

Deepfakes have already played a role in the 2024 presidential election. During New Hampshire’s primary, voters received a robocall impersonating President Joe Biden and telling recipients not to vote in the presidential primary. 

Why Aren’t Political Deepfakes Illegal Yet?

AI-generated content blurs the lines between fraud and free speech. On social media, people are free to express their ideas and views within the parameters of a platform’s policies. 

Under Section 230 of the 1996 Communications Decency Act, internet service providers are immune from liability for user content and may set their own standards for how they want to moderate and remove content. This makes users responsible for their own content, sparking debates over the balance between fostering online freedom and mitigating harmful content.

What Can Massachusetts State Lawmakers Do?

Legislation is pending before the Massachusetts General Court right now to stop deepfakes from spreading disinformation in our elections. The House of Representatives recently passed a bill to regulate deepfakes in election-related materials, and a second bill, S.2730, would:

  • Require disclosure of deepfakes in political advertisements published within 90 days of an election, and
  • Give candidates who are victims of deepfakes the right to sue the publisher.

How Can I Help?

If you live in Massachusetts, contact your state lawmakers and urge them to pass S.2730 to protect the future of our democracy. 

No matter where you live, you can talk to your friends and family about deepfakes and encourage them to check the accuracy of the information they see online. You can also report disinformation at