Letter

Pete Hegseth vs. Anthropic: Read Our Letter On AI Surveillance

The Constitution doesn’t bend just because technology evolves. AI makes it easier than ever for the government to surveil Americans and automate the use of force. That’s why Congress must step in to ensure these tools are used lawfully and that this administration commits to respecting our constitutional rights.

Common Cause and partners are urging Congress to use its oversight authority to investigate the Department of Defense’s (DOD) overreach in issuing an ultimatum to AI company Anthropic.

Secretary Hegseth is demanding that Anthropic remove restrictions on the use of its AI models for mass domestic surveillance and for powering fully autonomous weapons, two red lines the company has refused to abandon.

The full text of our coalition letter is below:


Dear Chairmen, Ranking Members, and Members of the Committees:

On behalf of Common Cause, The Alliance for Secure AI, and Young Americans for Liberty, and our members and supporters nationwide, we write to urge your committees to examine the Pentagon’s current procurement dispute with Anthropic for what it actually represents: whether the Department of Defense can expressly reserve the right to violate the law and the constitutional rights of Americans.

The dispute is seemingly narrow. The Department of Defense and Anthropic, an AI company, are in a public fight over a $200 million contract. At issue are two red lines Anthropic has drawn in its standard usage policy: that its model will not be used for mass domestic surveillance, and that it will not be used to power fully autonomous weapons — systems that target, fire, or kill without a human in the decision loop. Secretary Hegseth is pressuring the company to drop these boundaries and comply with his new policy to use AI models for “all lawful purposes,” in line with his January memo seeking to be “free from usage policy constraints that may limit lawful military applications.”

Secretary Hegseth has given Anthropic an ultimatum to comply with his new terms by February 28 or “face consequences.” Those consequences include designation as a “supply chain risk,” a label reserved for foreign adversaries, or being forced to tailor its model through the Defense Production Act, a law designed for national emergencies.

In doing so, Secretary Hegseth is implying that Anthropic’s red lines are inconsistent with his interpretation of the law. The real question is: why won’t he commit not to use AI for mass surveillance and fully autonomous weapons?

The decision to take a human life is the most consequential act a government can perform. The Constitution does not leave that decision to executive discretion alone. The laws of war and decades of military doctrine impose accountability at the moment of lethal decision precisely because no government, however well-intentioned, can be trusted to police that boundary itself. Department of Defense Directive 3000.09 has long required “meaningful human control” over the use of lethal force. Anthropic is not inventing a new standard. It is asking the Pentagon to honor one it is already required to follow, and the American people have a right to expect their elected representatives to ask why it won’t.

The surveillance question rests on the same foundation. The Fourth Amendment’s protections against unreasonable search apply regardless of the technology used. Surveillance that once required enormous resources can, with advanced AI, happen automatically, continuously, and at scales that should alarm us all. If existing law needs to adapt to account for new technology, that is Congress’s job, not a decision to be made in a contract negotiation.

The Pentagon has not limited this pressure to Anthropic. OpenAI, Google, and xAI were each awarded contracts after agreeing to lift their standard safeguards for the military’s unclassified systems. This week, xAI formally agreed to the Pentagon’s “all lawful purposes” standard to deploy its Grok model in classified military systems with no conditions attached.

The Pentagon has been explicit: this is not just about Anthropic. This dispute is designed to “set the tone” for every AI company negotiating with the military. The message has been received. Every other frontier AI company has already complied. Anthropic is now the only holdout, and the Pentagon has given it until Friday to fall in line. The example has been made.

The practical stakes are significant. A “supply chain risk” designation would force every defense contractor to certify it has no connection to Anthropic, whose technology is embedded across eight of the ten largest American companies. Dean Ball, a former Trump AI adviser who helped shape the administration’s AI Action Plan, said it was “hard to think of a more strategically unwise move for the U.S. military to make.”

What is being decided here is not which vendor the Pentagon prefers. It is whether the federal government can use frontier AI to conduct mass surveillance and apply lethal force in violation of what existing law and the Constitution allow. The answer to this question must be a resounding no.

These issues demand Congressional oversight. We respectfully request that the Committees take the following actions:

  1. Summon Secretary Hegseth and senior officials to testify about the Department’s requirements of AI companies under “all lawful purposes,” at both unclassified and classified levels, with particular focus on domestic surveillance capabilities and autonomous weapons development.
  2. Request documents and communications from the Department of Defense and from Anthropic, OpenAI, Google, and xAI related to AI use for domestic surveillance and autonomous weapons. This should include: negotiating terms and usage policy agreements with AI contractors; internal assessments of the capabilities being requested; and any legal analysis supporting the “all lawful purposes” standard.
  3. Establish a reporting requirement directing the Department to report to Congress, on a recurring basis, the AI capabilities deployed in classified systems, the usage policies governing those deployments, and the mechanisms in place to ensure compliance with the Fourth Amendment and DoD Directive 3000.09. Congress cannot exercise oversight over what it cannot see or fully understand.

The issues raised by this dispute are not simple vendor negotiations. They are constitutional and legal issues that belong to the American people and their elected representatives. The American people should not have to rely on a private company to be the last line of defense for their constitutional rights and the rule of law. That is Congress’s job.

We urge the Committees to act accordingly.

Sincerely,

Brendan Steinhauser
CEO
The Alliance for Secure AI

Carol Evans
Vice President, Policy
Common Cause

Sean Themea
Chief Operating Officer
Young Americans for Liberty

