Opinion
State leaders must actually hold social media, AI companies accountable
The verdicts of two recent landmark lawsuits — one in Los Angeles and another in New Mexico — confirm what millions of families have known for far too long: Social media companies have built a business model that is fundamentally exploitative. These tech giants hook users while they’re young to create lifelong consumers, no matter the cost to their health or the damage to their lives.
The scope of what juries have now confirmed is staggering: In New Mexico, the jury found Meta liable for misleading consumers about the safety of its platforms and endangering children, and ordered the company to pay $375 million in civil penalties. The civil trial centered on allegations that Meta violated state consumer protection laws, concealed what it knew about the dangers of child sexual exploitation on its platforms and misled residents about the safety of Facebook and Instagram.
Just a day later, in Los Angeles, a jury found Meta and YouTube negligent in their platform designs in the first bellwether personal injury trial — awarding damages and, critically, forcing executives to answer questions under oath about the harm their products cause. Whistleblowers and internal documents unearthed during trial revealed the full extent to which Big Tech knew what it was doing to young people, and kept doing it anyway.
This is just the beginning, and we could very well soon see a wave of further court rulings that snowballs into the Big Tobacco litigation of our time.
These verdicts have already achieved something historic: the internal documents Big Tech tried to hide are now public, exposing the lies and giving lawmakers even more momentum to act.
Now comes the critical question: What do policymakers do with this moment?
California has enacted some protections for people online, but far too many reform efforts have been blocked, watered down or stopped short of becoming law due to the outsized influence of tech industry lobbying. That must change, and the lessons of the social media era must not be forgotten as we confront the next threat of artificial intelligence.
Adam Raine, a 16-year-old living in Orange County, died by suicide in April 2025. His parents discovered more than 3,000 pages of chat logs showing that ChatGPT had spent months coaching him toward his death. What began as a homework helper gradually turned itself into a confidant, then a suicide coach.
Raine’s father said it plainly: “He would be here but for ChatGPT. I 100% believe that.”
Nor is his case isolated. It’s a preview of what happens when we allow a dangerous new technology to reach people — both children and adults — before the laws catch up.
California has taken steps on both social media and AI regulation, but the pattern remains troubling: Meaningful accountability measures get softened or vetoed while tech companies — many headquartered in our own backyard — escape real consequences.
These verdicts should serve as a clarion call to elected officials at every level of government. The time for half-measures and delays has passed.
A number of measures currently pending in the Legislature would take meaningful steps toward a better and safer online experience. Among these are Assembly Bill 2023, authored by Assemblymembers Rebecca Bauer-Kahan, D-San Ramon, and Buffy Wicks, D-Oakland, and Senate Bill 1119, authored by Sen. Steve Padilla, D-Chula Vista, which would mandate safety standards for kids using AI companion chatbots. Assembly Bill 2, authored by Assemblymembers Josh Lowenthal, D-Long Beach, and Joe Patterson, R-Rocklin, would hold large social media companies financially liable when their platforms are proven to harm kids. And Assembly Bill 1700, also authored by Lowenthal, would create a state-level e-Safety Commission solely dedicated to enforcing laws on youth online protections, with the ability to adapt rules as technology evolves.
The lesson of the social media era is that voluntary self-regulation does not work. Real reform means strong design standards, clear industry-wide rules and legal tools for families to seek justice when those standards are violated.
The same logic applies to AI: If a company sells a product it knows can coach someone toward suicide, it must face immediate consequences.
Gov. Gavin Newsom and the California Legislature must enact policies that hold social media companies legally accountable for the harm they cause their users and apply those same hard lessons to AI before another generation pays the price.
John Bennett is director of the California Initiative for Technology and Democracy (CITED), a project of California Common Cause.