CORS is stupid – Kevin Cox
CORS and the browser’s same-origin policy are often misunderstood. I’m going to explain what they are and what you need to do to stop worrying about them.

First of all, CORS is a massive hack to mitigate legacy bugs. It offers both opt-out protections that attempt to shield unaware or unmodified sites from cross-site attacks, and opt-in protections for sites that actively defend themselves. But neither of these protections is actually sufficient to solve the intended problem. If your site uses cookies, you must take action to be safe. (Okay, not every site, but you shouldn’t rely on the defaults. Check your site carefully or follow these simple steps, because very reasonable patterns can leave you exposed to cross-site request forgery.)

The main problem is how implicit credentials are handled on the web. In the past, browsers made the disastrous decision that these credentials could be included in cross-origin requests. This opened up the following attack vector:

  1. Log in to https://your-bank.example.
  2. Visit https://fun-games.example.
  3. https://fun-games.example runs fetch("https://your-bank.example/profile") to read sensitive information about you, such as your address and current balance.

This worked because when you logged into your bank, it gave you a cookie to access your account information. While fun-games.example can’t just steal that cookie, it can make its own requests to your bank’s API, and your browser would helpfully add the cookie to authenticate you.

This is where CORS comes in. It describes a policy for how cross-origin requests can be made and used. It is both incredibly flexible and completely inadequate.

The default policy allows you to make requests, but not to read the results. fun-games.example is blocked from reading your address from https://your-bank.example/profile. (It can still use side channels, such as latency and whether the request succeeded or failed, to learn some things.)

But despite being incredibly annoying, this doesn’t really solve the problem! While fun-games.example can’t read the result, the request is still being sent. This means it can execute POST https://your-bank.example/transfer?to=fungames&amount=1000000000 to transfer a billion dollars to its account.

This has to be one of the biggest security compromises ever made in the name of backwards compatibility. The TL;DR is that the automatically provided cross-origin protections are completely broken. Any site that uses cookies will have to explicitly handle this.

Yes, every individual site.

The primary defense against these cross-site attacks is to ensure that implicit credentials are not used inappropriately. It is best to start by ignoring all implicit credentials on cross-site requests, then you can add specific exceptions as needed.

The best solution is to set up server-wide middleware that ignores implicit credentials on all cross-origin requests. The example below removes cookies; if you are using HTTP authentication or TLS client certificates, make sure to ignore those as well. Fortunately, the Sec-Fetch-* headers are now available in all modern browsers, which makes cross-site requests easy to identify.

def no_cross_origin_cookies(req):
	"""Ignore cookies on cross-origin requests."""
	if req.headers["sec-fetch-site"] == "same-origin":
		# Same-origin request, OK.
		return

	if req.headers["sec-fetch-mode"] == "navigate" and req.method == "GET":
		# GET requests shouldn't mutate state, so top-level navigations are safe.
		return

	req.headers.delete("cookie")

This provides a secure baseline. If needed, you can add specific exceptions for endpoints that are prepared to handle implicitly authenticated requests from different origins. I would strongly discourage broad exceptions.
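As a concrete sketch, this baseline can be written as a tiny WSGI middleware in plain Python. The function name and the shape of the wrapper are my own illustration, not code from the article or from any particular framework:

```python
def strip_cross_origin_cookies(app):
    """Wrap a WSGI app, dropping cookies from cross-origin requests."""
    def wrapper(environ, start_response):
        site = environ.get("HTTP_SEC_FETCH_SITE", "")
        mode = environ.get("HTTP_SEC_FETCH_MODE", "")
        same_origin = site == "same-origin"
        safe_navigation = mode == "navigate" and environ.get("REQUEST_METHOD") == "GET"
        if not (same_origin or safe_navigation):
            # Ignore implicit credentials on cross-origin requests.
            environ.pop("HTTP_COOKIE", None)
        return app(environ, start_response)
    return wrapper
```

Note that a request missing the Sec-Fetch-* headers entirely falls through to the delete branch, so the sketch fails safe: when in doubt, the cookie is ignored.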

Explicit credentials

One of the best ways to avoid this whole problem is to not use implicit credentials at all. If all authentication is done via explicit credentials, you don’t have to worry about the browser adding a cookie you weren’t expecting. Explicit credentials can be obtained by signing up for an API token or via an OAuth flow. But either way, the important thing is that logging in to one site doesn’t allow other sites to use those credentials.

The best way to do this is to have an authentication token in the Authorization header.

Authorization: Bearer chiik5TieeDoh0af

Using the Authorization header is standardized behavior and is handled well by many tools. For example, this header is commonly redacted from logs by default.

But most importantly, it must be explicitly set by all clients. This not only solves the cross-site request forgery problem, but also makes multi-account support a breeze.
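To illustrate, here is what an explicitly credentialed request looks like using Python’s standard library. The token value is the placeholder from the example above, and the URL is the article’s fictional bank:

```python
import urllib.request

# The client must attach the credential itself; nothing adds it
# implicitly, so other origins can't ride along on your login.
token = "chiik5TieeDoh0af"  # placeholder token
req = urllib.request.Request(
    "https://your-bank.example/profile",
    headers={"Authorization": f"Bearer {token}"},
)
```

The request object now carries the bearer token explicitly, and only code that actually holds the token can construct it.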

The biggest drawback is that explicit credentials are not suitable for server-rendered sites, because they are not included in top-level navigations. Server-side rendering is great for performance, so this technique is often not an option.

SameSite Cookies

Although our server should ignore cookies on cross-origin requests, it is good practice to avoid sending them in the first place. You can set the SameSite=Lax attribute on all of your cookies, causing the browser to omit them from cross-origin requests.

It is important to remember that SameSite=Lax cookies are still included in top-level navigations, such as following a link or submitting a GET form. SameSite=Strict prevents this, but the user will appear to be logged out on the first page loaded after following a cross-origin link (because that request does not contain cookies).
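A quick sketch of setting these attributes with Python’s standard library (the cookie name and value are made up for illustration):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Lax"  # omit cookie on cross-origin requests
cookie["session"]["secure"] = True     # HTTPS only
cookie["session"]["httponly"] = True   # not readable from JavaScript

# The value to send in a Set-Cookie response header.
header_value = cookie["session"].OutputString()
```

Swap "Lax" for "Strict" if you are willing to accept the logged-out-looking first page load described above.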

Once your server ignores implicit credentials on cross-origin requests, you can safely open your API up to other sites. A simple policy that you can copy and paste is as follows:

Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: *

That’s it, you’re done.

The effect of this policy is that other sites can only make anonymous requests, meaning that you are just as safe as if these requests were made through a CORS Proxy.
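As a sketch, applying this policy to every response can be as simple as the following (the function is my own illustration; the header values come from the policy above):

```python
def add_open_cors_headers(headers):
    """Append the wide-open CORS policy to a list of (name, value) pairs.

    Note: browsers never allow credentialed responses to be read through a
    wildcard origin, which is exactly the point of this policy.
    """
    headers.append(("Access-Control-Allow-Origin", "*"))
    headers.append(("Access-Control-Allow-Methods", "*"))
    return headers
```

Hook this into whatever response path your server uses so that the two headers appear on every response, including preflights.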

Shouldn’t I be more specific?

Probably not. There are a few reasons for this:

  1. You can create a false sense of security. Just because another web page running in a “properly functioning” browser can’t make these requests doesn’t mean they can’t be made. CORS proxies, for example, are very common.
  2. It prevents read-only access to your site. This can be useful for URL previews, feed fetching, and other tools. Blocking it just results in more CORS proxies being deployed, which only hurts performance and user privacy.

Note that CORS is not intended to block access, but to prevent implicit credentials from being unintentionally reused.

Why do I need to know all this stuff, why isn’t the web secure by default? Why do I have to deal with an ineffective policy that makes everything annoying by default without actually solving anything?

IDK, it’s pretty annoying. I think the reason mostly comes down to backwards compatibility. Sites built features around these vulnerabilities, so browsers tried to patch them as much as possible without breaking existing sites.

Fortunately, there may be some sanity on the horizon, with browsers finally willing to break sites for the good of the user. Major browsers are moving toward top-level-domain isolation. They all call it different things: Firefox calls it State Partitioning, Safari calls it Tracking Prevention, and Google likes cross-site tracking cookies, so they’ve implemented an opt-in system called CHIPS.

The biggest problem is that these approaches are implemented as privacy features, not security features. This means they cannot be relied upon, because they use heuristics that sometimes still allow cross-origin implicit credentials. CHIPS is actually better in this regard, because it behaves reliably in supporting browsers, but it only covers cookies.

So it seems that browsers are moving away from cookies that span top-level contexts, but it is an uncoordinated mess. It is also not yet clear whether this will be done by blocking third-party cookies (Safari) or by partitioning them (Firefox, CHIPS).

