Anthropic says won't give US military unconditional AI use
AI company Anthropic said Thursday it would not give the US Defense Department unrestricted use of its technology despite being pressured to comply by the Pentagon.
"These threats do not change our position: we cannot in good conscience accede to their request," Anthropic chief executive Dario Amodei said in a statement.
Washington had given the artificial intelligence startup until Friday to agree to unconditional military use of its technology, even where such use would violate the company's ethical standards, or face being forced to comply under emergency federal powers.
Amodei said Anthropic models have been deployed by the Pentagon and intelligence agencies to defend the country, but that the company draws an ethical line at their use for mass surveillance of US citizens and fully autonomous weapons.
"Using these systems for mass domestic surveillance is incompatible with democratic values," Amodei said.
And leading AI systems are not yet reliable enough to be trusted to power deadly weapons without a human in ultimate control, he added.
"We will not knowingly provide a product that puts America's warfighters and civilians at risk."
After meeting with Anthropic early this week, the Pentagon delivered a stark ultimatum: agree to unrestricted military use of its technology by 5:01 pm (22:01 GMT) Friday or face being forced to comply under the Defense Production Act.
The Cold War-era law, last used during the Covid pandemic, grants the federal government sweeping powers to compel private industry to prioritize national security needs.
The Pentagon also threatened to label Anthropic a supply chain risk, a designation usually reserved for firms from adversary countries, which could severely damage the company's reputation and its ability to work with the US government.
A senior Pentagon official at the time pushed back on the company's concerns, insisting the Defense Department had always operated within the law.
"Legality is the Pentagon's responsibility as the end user," the official said, adding that the department "has only given out lawful orders."
Officials also confirmed that an exchange regarding intercontinental ballistic missiles had taken place between Anthropic and the Pentagon, underscoring the sensitivity of the applications at the heart of the dispute.
The Pentagon confirmed that Elon Musk's Grok system had been cleared for use in a classified setting, while other contracted companies -- OpenAI and Google -- were described as close to similar clearances, piling competitive pressure on Anthropic to fall in line.
Anthropic was contracted alongside those companies last year to supply AI models for a range of military applications under a $200 million agreement.
Former OpenAI employees founded Anthropic in 2021 on the premise that AI development should prioritize safety -- a philosophy that now puts it on a collision course with the Pentagon and the White House.
"Anthropic understands that the Department of War, not private companies, makes military decisions," Amodei said.
"However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."
L.Costa--GdR