[personal profile] mjg59
One of the huge benefits of WebAuthn is that it makes traditional phishing attacks impossible. An attacker sends you a link to a site that looks legitimate but isn't, and you type in your credentials. With SMS or TOTP-based 2FA, you type in your second factor as well, and the attacker now has both your credentials and a legitimate (if time-limited) second factor token to log in with. WebAuthn prevents this by verifying that the site it's sending the secret to is the one that issued it in the first place - visit an attacker-controlled site and said attacker may get your username and password, but they won't be able to obtain a valid WebAuthn response.

But what if there was a mechanism for an attacker to direct a user to a legitimate login page, resulting in a happy WebAuthn flow, and obtain valid credentials for that user anyway? This seems like the lead-in to someone saying "The Aristocrats", but unfortunately it's (a) real, (b) RFC-defined, and (c) implemented in a whole bunch of places that handle sensitive credentials. The villain of this piece is RFC 8628, and while it exists for good reasons it can be used in a whole bunch of ways that have unfortunate security consequences.

What is the RFC 8628-defined Device Authorization Grant, and why does it exist? Imagine a device that you don't want to type a password into - either it has no input devices at all (eg, some IoT thing) or it's awkward to type a complicated password (eg, a TV with an on-screen keyboard). You want that device to be able to access resources on behalf of a user, so you want to ensure that that user authenticates the device. RFC 8628 describes an approach where the device requests the credentials, and then presents a code to the user (either on screen or over Bluetooth or something), and starts polling an endpoint for a result. The user visits a URL and types in that code (or is given a URL that has the code pre-populated) and is then guided through a standard auth process. The key distinction is that if the user authenticates correctly, the issued credentials are passed back to the device rather than the user - on successful auth, the endpoint the device is polling will return an oauth token.
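The three-legged exchange described above can be sketched as an in-process simulation. Everything here is illustrative — the class, method names, and token formats are stand-ins, not any real provider's API:

```python
# Minimal in-process simulation of the RFC 8628 Device Authorization Grant.
# All names and endpoint shapes here are illustrative, not a real library's API.
import secrets

class AuthServer:
    def __init__(self):
        # device_code -> {"user_code": ..., "approved": bool}
        self.pending = {}

    def device_authorization(self):
        # Step 1: the device asks for a device_code/user_code pair.
        device_code = secrets.token_urlsafe(16)
        user_code = secrets.token_hex(4).upper()
        self.pending[device_code] = {"user_code": user_code, "approved": False}
        return {"device_code": device_code, "user_code": user_code,
                "verification_uri": "https://idp.example/device"}

    def approve(self, user_code):
        # Step 2: the user visits verification_uri, authenticates
        # (possibly with WebAuthn), and enters the user_code.
        for entry in self.pending.values():
            if entry["user_code"] == user_code:
                entry["approved"] = True
                return True
        return False

    def token(self, device_code):
        # Step 3: the device polls this endpoint with its device_code.
        entry = self.pending.get(device_code)
        if entry is None:
            return {"error": "invalid_grant"}
        if not entry["approved"]:
            return {"error": "authorization_pending"}
        return {"access_token": secrets.token_urlsafe(24),
                "token_type": "Bearer"}
```

Note that nothing in step 3 ties the polling party to the user who approved in step 2 — whoever holds the device_code gets the token, which is exactly what makes the flow abusable.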

But what happens if it's not a device that requests the credentials, but an attacker? What if said attacker obfuscates the URL in some way and tricks a user into clicking it? The user will be presented with their legitimate ID provider login screen, and if they're using a WebAuthn token for second factor it'll work correctly (because it's genuinely talking to the real ID provider!). The user will then typically be prompted to approve the request, but in every example I've seen the language used here is very generic and doesn't describe what's going on or ask the user to verify anything meaningful. AWS simply says "An application or device requested authorization using your AWS sign-in" and has a big "Allow" button, giving the user no indication at all that hitting "Allow" may give a third party their credentials.

This isn't novel! Christoph Tafani-Dereeper has an excellent writeup on this topic from last year, which builds on Nestori Syynimaa's earlier work. But whenever I've talked about this, people seem surprised at the consequences. WebAuthn is supposed to protect against phishing attacks, but this approach subverts that protection by presenting the user with a legitimate login page and then handing their credentials to someone else.

RFC 8628 actually recognises this vector and presents a set of mitigations. Unfortunately nobody actually seems to implement these, and most of the mitigations are based around the idea that this flow will only be used for physical devices. Sadly, AWS uses this for initial authentication for the aws-cli tool, so there's no device in that scenario. Another mitigation is that there's a relatively short window where the code is valid, and so sending a link via email is likely to result in it expiring before the user clicks it. An attacker could avoid this by directing the user to a domain under their control that triggers the flow and then redirects the user to the login page, ensuring that the code is only generated after the user has clicked the link.

Can this be avoided? The best way to do so is to ensure that you don't support this token issuance flow anywhere, or if you do then ensure that any tokens issued that way are extremely narrowly scoped. Unfortunately if you're an AWS user, that's probably not viable - this flow is required for the cli tool to perform SSO login, and users are going to end up with broadly scoped tokens as a result. The logs are also not terribly useful.

The infuriating thing is that this isn't necessary for CLI tooling. The reason this approach is taken is that you need a way to get the token to a local process even if the user is doing authentication in a browser. This can be avoided by having the process listen on localhost, and then have the login flow redirect to localhost (including the token) on successful completion. In this scenario the attacker can't get access to the token without having access to the user's machine, and if they have that they probably have access to the token anyway.
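A minimal sketch of that localhost approach, assuming the identity provider supports a loopback redirect URI (the endpoint names here are illustrative):

```python
# Sketch of the localhost-redirect alternative: the CLI starts a throwaway
# listener on 127.0.0.1, sends the user's browser to the IdP with
# redirect_uri=http://127.0.0.1:<port>/callback, and the IdP delivers the
# result straight back to the local process on successful login.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def wait_for_token(port=0):
    result = {}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Capture the code/token from the redirect's query string.
            qs = parse_qs(urlparse(self.path).query)
            result["code"] = qs.get("code", [None])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"You may close this window.")

        def log_message(self, *args):   # keep the CLI quiet
            pass

    server = HTTPServer(("127.0.0.1", port), Handler)
    # A real CLI would now open the browser at the IdP's authorization URL
    # with redirect_uri=http://127.0.0.1:{server.server_port}/callback.
    thread = threading.Thread(target=server.handle_request)
    thread.start()
    return server, thread, result
```

Because only a process on the same machine can receive the redirect, a remote attacker who tricks someone into clicking a link gets nothing.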

There's no real moral here other than "Security is hard". Sorry.

Date: 2022-12-01 02:33 am (UTC)
From: (Anonymous)
Doesn't this implicitly redefine phishing halfway through? You start with "can't steal your creds" and end with "well, you can oauth phish it".

You don't need the device code flow to send someone to a real login page and request consent for a bunch of scopes.

The device code flow does not end up with the attacker having your webauthn credential, but instead having the output of the oauth flow - access tokens.

Date: 2022-12-01 10:19 am (UTC)
From: (Anonymous)
Call it whatever you want, blorping for instance. The end result is that the attacker now has a highly-privileged credential, often a long lived one.

> You don't need the device code flow to send someone to a real login page and request consent for a bunch of scopes.
Correct, this is also a problem.

Date: 2022-12-06 03:23 pm (UTC)
From: (Anonymous)
The challenges of the Device Authorization Grant flow have been a topic of discussion in the IETF OAuth working group. The protocol is being used outside the initial application areas, where the original risk profile does not apply.

The exploits we are seeing target the unauthenticated channel between the initiating device and the authenticating device, changing the context to trick end-users into illicit consent grants. Although the technique borrows from social engineering techniques like phishing, it is not technically phishing but a different class of attack (the primary credential is never obtained; instead the user is tricked into granting consent, leaving the attacker in possession of access and refresh tokens).

These "Illicit Consent Grant" attacks are described in a new Cross-Device Security Best Current Practice document (see https://datatracker.ietf.org/doc/draft-kasselman-cross-device-security/) that was adopted by the IETF OAuth working group in November 2022, along with additional mitigations and a protocol selection guide to steer implementors toward better alternatives for cross-device authentication and authorization scenarios.
From: (Anonymous)
The on-screen TV keyboard can be driven on Apple TV by an iPhone or iPad. I haven't tried it enough to guess at its flaws, but I hope it's safe for everyone involved.

I'd like some way to have an encrypted tunnel between my Android phone and Android TV for exactly this reason, where I could conceivably build an app that is a TV-type keyboard input as well as a touchscreen text-box and tunnel between the two over WireGuard/mDNS ... but I was ignorant of RFC8628 before this post and would want PKI via certs on my devices (which is to say that security is hard).

K3n.

localhost does not solve aws-cli

Date: 2022-12-12 11:10 am (UTC)
From: (Anonymous)
Redirect to localhost will not work very well if you have the SSO web browser on your local desktop but the aws-cli is running on a remote host you are accessing via SSH.

Profile

Matthew Garrett

About Matthew

Power management, mobile and firmware developer on Linux. Security developer at Aurora. Ex-biologist. mjg59 on Twitter. Content here should not be interpreted as the opinion of my employer. Also on Mastodon.
