Claude enlisted for American defense and intelligence services' AI efforts

Palantir has announced a partnership with Anthropic and Amazon Web Services to build a cloudy Claude platform suitable for the US government’s most secure defense and intelligence applications.

In an announcement today, the three companies said the partnership would integrate the Claude 3 and 3.5 models with Palantir’s Artificial Intelligence Platform, hosted on AWS. Both Palantir and AWS have received Impact Level 6 (IL6) accreditation from the Department of Defense, which allows the processing and storage of classified data up to the Secret level.

Claude was first made available to the defense and intelligence communities in early October, an Anthropic spokesperson told The Register. The US government will use Claude to reduce data processing times, identify patterns and trends, streamline document reviews, and help officials “make informed decisions in time-sensitive situations while maintaining their decision-making authority,” according to the press release.
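The announcement doesn’t describe the technical plumbing, but Claude models on AWS are ordinarily reached through Amazon Bedrock. Purely as an illustration of the kind of document-review task described above, here is a minimal sketch of a summarization request to a Claude 3.5 model via Bedrock’s Converse API, in a standard commercial region rather than an IL6 enclave; the region, model ID, and prompt are assumptions for the example, not details from the announcement.

```python
# Illustrative only: a plain Amazon Bedrock call to a Claude 3.5 model,
# not the Palantir AIP / IL6 integration described in the announcement.
import boto3

# Assumed region and model ID, chosen purely for this example.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

document_text = "...contents of a report to be reviewed..."

response = client.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    system=[{"text": "You are assisting with document review. Summarize key findings."}],
    messages=[
        {
            "role": "user",
            "content": [{"text": f"Summarize this document:\n\n{document_text}"}],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.2},
)

# The assistant's reply text is in the first content block of the output message.
print(response["output"]["message"]["content"][0]["text"])
```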

“Palantir is proud to be the first industry partner to bring Claude models to classified environments,” said Shyam Sankar, CTO of Palantir.

“Our partnership with Anthropic and AWS provides America’s defense and intelligence communities with the toolchain they need to securely leverage and deploy AI models, delivering next-generation decision advantage for their most critical missions.”

Acceptable use carve-outs

It is interesting to compare Anthropic’s AI usage policy with that of Meta, which announced yesterday that it is opening up its Llama neural networks to the US government for defense and national security applications.

Meta’s usage policy specifically prohibits the use of Llama for military, warfare, espionage, and other critical applications, though Meta has granted some exceptions for the Feds.

In our opinion, Anthropic’s acceptable use policy does not include such clear limitations. Even its high-risk use cases, which Anthropic defines as uses of Claude that “involve an increased risk of harm” and require additional safety measures, leave out defense and intelligence applications, listing only legal, healthcare, insurance, finance, employment, housing, academic, and media uses of Claude as “domains vital to public welfare and social equity.”

Instead, Anthropic’s AUP lists a number of specific ways in which the model must not be used to cause harm, directly or indirectly, which would cover at least some military work. We had, meanwhile, expected a blanket ban on military use, a la Meta, which would then require exceptions to accommodate the Palantir-Amazon deal.

When asked about its AUP and how it might apply to government applications, particularly the defense and intelligence work described in today’s announcement, Anthropic only referred us to a June blog post about its plans to expand government access to Claude.

“Anthropic’s mission is to build reliable, interpretable, and controllable AI systems,” the blog said. “We are eager to make these tools available to a wide range of government users.”

Anthropic’s post notes that it has developed a way to grant exceptions to its acceptable use policy for government users, and that these exceptions are “carefully tailored to enable beneficial use by carefully selected government agencies.” We are not told what exceptions are allowed, and Anthropic did not directly answer our questions about that.

The existing “carve-out” structure, Anthropic noted, “allows Claude to be used for legally authorized analysis of foreign intelligence … and to provide advance warning of potential military activities, opening a door for diplomacy to prevent or deter them.”

“All other restrictions in our general usage policy, including those related to disinformation campaigns, the design or use of weapons, censorship, and malicious cyber operations, remain in place,” the AI house said.

One could argue that Anthropic’s AUP already covers the most dangerous and critical uses of Claude by the defense and intelligence communities, and thus doesn’t need the kind of blanket ban on use by such government agencies that Meta has in its policy. In other words, every individually harmful use is prohibited outright, without exceptions, while Meta’s broader approach seems to us the more efficient one.

For example, Anthropic’s policies include a ban on using Claude to interfere with the operation of military facilities, a ban on “battlefield management applications,” and a ban on using Claude to “facilitate the exchange of illegal or highly regulated weapons or goods.”

Ultimately, we’ll just have to hope that no one decides to emotionally blackmail Claude into breaking whichever of Anthropic’s rules the US government still has to follow. ®

Editor’s note: This article was updated on November 8 to expand our comments on the acceptable use policy.