
Dario Amodei faces Pentagon pressure over AI military use
- The Department of Defense has threatened to declare Anthropic a supply chain risk over restrictions on the use of its AI model, Claude.
- Dario Amodei has maintained that Anthropic will not allow its AI technology to be used for mass surveillance or lethal applications.
- The escalating tensions illustrate a broader conflict over ethical limits on AI use in military operations.
Story
In recent weeks, tensions have escalated between Anthropic, led by CEO Dario Amodei, and the U.S. Department of Defense (DOD) over the use of the company's AI model, Claude, in military operations. After Claude was allegedly employed in a controversial operation in Venezuela, the DOD voiced dissatisfaction with Anthropic's restrictions on how its technology can be used in warfare. Defense Secretary Pete Hegseth has since summoned Amodei to the Pentagon, warning that failure to comply with military requests could jeopardize Anthropic's standing as a contractor. The company maintains strict guidelines prohibiting the use of its AI for lethal applications and mass surveillance.

These discussions have unfolded against a backdrop of broader governmental scrutiny of the role AI technologies should play in national security and military operations. Since signing a $200 million contract last summer, Anthropic has navigated a complex relationship with military authorities, seeking to uphold its ethical standards while expanding its role in the defense sector. Recent communications indicate that the DOD could label Anthropic a 'supply chain risk' if negotiations remain contentious, echoing previous government actions against companies deemed threats to national security.

The ongoing dispute reflects not only the stakes for Anthropic's business operations but also a critical conversation about the ethical boundaries of AI use by governmental entities.