Claude AI will process secret government data through a new Palantir deal

An ethical minefield

Since its founders launched Anthropic in 2021, the company has marketed itself as taking an ethical and safety-oriented approach to AI development. The company differentiates itself from competitors like OpenAI by adopting what it calls responsible development practices and self-imposed ethical restrictions on its models, such as its “Constitutional AI” system.

As Futurism notes, this new defense partnership appears to run counter to Anthropic’s public “good guy” persona, and pro-AI commentators on social media are taking notice. Frequent AI commentator Nabeel S. Qureshi wrote on X: “Imagine telling Anthropic’s safety-focused, effective altruist founders in 2021 that just three years after the company’s founding, they would be signing partnerships to deploy their ~AGI model directly to the military front lines.”

Anthropic’s “Constitutional AI” logo. Credit: Anthropic / Benj Edwards

Aside from the implications of collaborating with defense and intelligence agencies, the deal ties Anthropic to Palantir, a controversial company that recently won a $480 million contract to develop an AI-powered target identification system called Maven Smart System for the US military. Project Maven has drawn criticism within the technology sector over military applications of AI.

It’s worth noting that Anthropic’s terms of service outline specific rules and restrictions for government use. These terms allow activities such as foreign intelligence analysis and identifying covert influence campaigns, while prohibiting applications such as disinformation, weapons development, censorship, and domestic surveillance. Government agencies that regularly communicate with Anthropic about their use of Claude may receive broader permission to use the AI models.

Even if Claude is never used to target a human or as part of a weapons system, other problems remain. Although the Claude models are highly regarded in the AI community, they (like all LLMs) have a tendency to confabulate, potentially creating incorrect information in a way that is difficult to detect.

That’s a huge potential problem that could affect Claude’s effectiveness with secret government data, and that fact, along with the other associations, worries Futurism’s Victor Tangermann. As he puts it: “It’s a troubling partnership that perpetuates the AI industry’s growing ties to the US military-industrial complex, a worrying trend that should set off alarm bells given the technology’s many inherent flaws, and even more so when lives could be at stake.”