close
close

first Drop

Com TW NOw News 2024

Slack fixes AI bug that exposed private channels
news

Slack fixes AI bug that exposed private channels

from Salesforce Slack Technologies has patched a flaw in Slack AI that could allow attackers to steal credentials from private Slack channels or perform secondary phishing within the collaboration platform by manipulating the large language model (LLM) on which it is based.

Researchers from security firm PromptArmor discovered a prompt injection flaw in the AI-based feature of the popular Slack workforce collaboration platform that adds generative AI capabilities. The feature allows users to query Slack messages using natural language; the issue exists because the LLM may not recognize that a statement is malicious and treat it as legitimate, according to a blog post reveal the defect.

“Prompt injection occurs because an LLM cannot distinguish between the ‘system prompt’ created by a developer and the rest of the context added to the query,” the PromptArmor team wrote in the post. “So if Slack AI injects a directive via a message, and that directive is malicious, then there is a high probability that Slack AI will follow that directive instead of, or in addition to, the user query.”

The researchers described two scenarios in which this issue could be maliciously exploited by malicious actors: one scenario in which an attacker with an account in a Slack workspace could steal data or files from a private Slack channel in that space, and another scenario in which an actor could phish users in the workspace.

Because Slack is widely used by organizations for collaboration and therefore often contains messages and files that reference sensitive corporate data and secrets, the leak significant exposure to dataaccording to the research team.

Broadening the attack surface

The problem is exacerbated by a change made to Slack AI on August 14 that extends not only messages, but also uploaded documents and Google Drive files to the system. “This broadens the risk footprint,” the PromptArmor team said, as these documents or files could be used as a medium for malicious instructions.

“The problem here is that the attack surface is fundamentally becoming extremely broad,” the post reads. “Now an attacker no longer has to post a malicious instruction in a Slack message, they may not even have to be in Slack.”

PromptArmor disclosed the flaw to Slack on August 14 and worked with the company for about a week to clarify the issue. According to PromptArmor, Slack eventually responded that the issue disclosed by researchers was “intended behavior.” The researchers noted that the Slack team “demonstrated a commitment to security and attempted to understand the issue.”

A short blog post Messages posted by Slack this week appeared to reflect a change of heart about the breach: The company said it had deployed a patch to address a scenario in which “under very limited and specific circumstances” a threat actor with an existing account in the same Slack workspace “could phish users for certain data.” The message did not mention the data exfiltration issue, but noted that there is currently no evidence of unauthorized access to customer data.

Two evil scenarios

In Slack, user queries pull data from both public and private channels, which the platform also pulls from public channels that the user isn’t a member of. This potentially exposes API keys or other sensitive data that a developer or user posts in a private channel to malicious exfiltration and misuse, PromptArmor said.

In this scenario, an attacker must go through a series of steps to post malicious instructions into a public channel that the AI ​​system believes to be legitimate, for example, requesting an API that a developer they place them in a private channel that only they can see, which ultimately leads to the system executing the malicious instructions to steal that sensitive data.

The second attack scenario follows a similar sequence of steps and involves malicious prompts. However, instead of exfiltrating data, Slack AI can send a phishing link to a user requesting a re-login. A malicious user can then hijack their Slack credentials.

How safe are AI tools?

The shortcoming raises the question of the security of current AI tools, which undoubtedly contribute to workforce productivity, but offer too many ways so that attackers can manipulate them for malicious purposes, said Akhil Mittal, senior manager of cybersecurity strategy and solutions for Synopsys Software Integrity Group.

“This vulnerability shows how a flaw in the system can expose data that is not supposed to be exposed to unauthorized users,” he says. “It really makes me question how secure our AI tools are. It’s not just about fixing the issues, it’s about making sure that these tools are managing our data properly.”

There are indeed numerous scenarios where attackers poison AI models with malicious code or data has already surfaced, which reinforces Mittal’s point. As these tools become more widely used in enterprise organizations, it becomes increasingly important for them to “keep both security and ethics in mind to protect our information and maintain trust,” he says.

One way organizations using Slack can do this is by using Slack AI settings to limit the feature’s ability to process documents, preventing potential attackers from accessing sensitive data, PromptArmor advises.