Onion service related changes to HTTPS handling #15560
Conversation
Looks like tests are failing :( this needs a bit more work, it seems.
Gemfile.lock (Outdated)

@@ -808,3 +812,9 @@ DEPENDENCIES
   webpacker (~> 5.2)
   webpush
   xorcist (~> 1.1)
+
+RUBY VERSION
+   ruby 2.7.2p137
Not required.
Force-pushed from 60d0c91 to f2b8f29
Thanks, I cleaned it up. I understand if the reviewers have some misgivings about this. Another option is to add another Rails environment called "onion" that would be identical to the "production" environment, except it would have the forced redirects in the code disabled. It would be used only by instance operators that want to set up their instance as both an onion and a clearnet service. Thoughts/preferences?
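For illustration, a minimal sketch of that "onion environment" idea, assuming a hypothetical config/environments/onion.rb that inherits the production settings (the file name and inheritance approach are assumptions, not code from this PR):

```ruby
# config/environments/onion.rb (hypothetical)
# Start from the full production configuration...
require_relative 'production'

Rails.application.configure do
  # ...then drop the forced HTTP -> HTTPS redirect for onion deployments.
  config.force_ssl = false
end
```

An onion-facing deployment would then boot with RAILS_ENV=onion while clearnet deployments keep running under RAILS_ENV=production.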
Hey, just checking in to see if there's something I can do to move this forward. @shleeable @Jenn650 do you have any opinions on the strategy taken here?
Usage of the secure_headers gem in here is superior to #15666, because in that PR your server can only run either through Tor or not, while here both access methods would be supported. Ideally one of you would combine them into one.
@Gargron do you want me to rebase this on the latest master?
Yeah, there's a conflict in Gemfile.lock that prevents merging.
Tested .onion federation and it works.
I think the use of …
EDIT: Updated comment wrt. cookies
Thanks for looking into this.
Can we solve this by adding …?
I already did that for the content security policy, but I was a bit hasty in not checking the other values.
Yeah, TBH this solution isn't ideal. My understanding of secure headers is that this should be a browser-side thing, where browsers decide whether or not to accept secure cookies depending on whether the connection is secure (which usually means HTTPS, but in the case of Tor Browser, HTTP connections to onion sites are treated by the browser as secure). But from what I can tell, Ruby on Rails is doing some extra server-side work and not sending secure cookies over plaintext HTTP connections, causing this to not work properly. In an ideal world, we should be able to always set these as secure and it would just work for HTTP onion sites. I do think it's important to keep support for HTTPS onion services, but am willing to defer to your judgement on the best way to proceed here (maybe going with something closer to @dr-bonez's solution for now and filing an issue with Ruby on Rails upstream?).
It will? I have to admit I don't know much about how Tor Browser handles that, but if it indeed works for onion sites even on HTTP, why not revert that part and use secure cookies unconditionally?
It doesn't work right now because of the server-side issue with Ruby on Rails. The Mastodon server won't send secure cookies over an HTTP link at all, causing things to break. What I meant was that in an ideal world, the secure cookie header would all be handled browser-side, and if it were, Tor Browser would already accept them because of this patch: https://bugzilla.mozilla.org/show_bug.cgi?id=1633015
So in order to get this to work, we might have to file an upstream issue with Ruby on Rails to get them to send secure cookies regardless of the connection, or make an exception for .onion sites. |
As far as I understand, that isn't an issue with Rails, but with the secure_headers gem. Since Tor Browser seems to consider .onion sites secure, …
I tried that before on a production deployment, and it didn't work, even without the secure_headers gem. My guess is it's an issue with both the gem and with Rails.
Even @dr-bonez had to disable secure headers for .onion sites, for the same reason. So, if you're concerned about the effects this has for non-.onion sites, we could go with their solution of making the call on whether or not to set the secure flag based on whether the domain is a .onion or not. It makes HTTPS .onions a little less secure, but is perhaps a good compromise until Rails is fixed.
I don't think it does make HTTPS onions less secure. With SameSite set to lax, as long as the main page is HTTPS, the cert is being validated, so your browser is verifying the authenticity of the cert. At that point, the domain has been verified, and anything that comes from that domain is trusted, regardless of transport protocol, since the URL is self-authenticating.
I will also note that the next version of nginx has a directive to modify the cookie flags, so this may become a moot point. We can just remove the secure flag from Rails and set it in the nginx conf, after that's stable.
@dr-bonez Authenticity is only meaningful if it's visible to the user; if a user is browsing an .onion domain of a site with an openweb version, they should be able to see that they correspond. If the user is redirected to the .onion site from the openweb site, or if there is ever a single resource loaded from the https version of the onion site, then yes, the browser could tell that the sites correspond even once https is gone -- but the user wouldn't, so that doesn't give you much. Basically, persistent https grants you the ability to mitigate someone mirroring your dual openweb/onion instance as an onion service (potentially similarly named ;)) and surreptitiously linking it to victims as authentic. A user can click the onion button at any time while browsing https://www.facebookcorewwwi.onion/ and be able to tell if they accidentally clicked a link to https://www.facebookmaincomi.onion/ or whatever, without having to memorize the full legit Facebook onion address. (Also, my experience accords with the above comments: Rails refuses to send secure cookies over http, and this can be fixed with bleeding-edge nginx in lieu of that portion of this PR, but who knows how long it'll be before that's in distros like Debian Stable.)
I understand the point about why https is useful for onion sites. My point is that the secure flag on cookies isn't really necessary here. The point of that flag is to prevent someone from letting you access the real site so your cookie gets issued, then spoofing the http version and redirecting you to it, so that your browser automatically sends your cookie to the attacker. In this case, the SameSite flag would prevent the cookie from being sent to any domain that's different, regardless of whether it looks similar. The user doesn't decide when cookies get sent, so they can't be tricked into sending cookies to a bad domain.
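For reference, a sketch of how the flags under discussion map onto Rails' cookie API, assuming Rails 6+ where the same_site option is available (the cookie name and value are purely illustrative):

```ruby
# Illustrative only: the attributes discussed above, set on a single cookie.
cookies[:example_session] = {
  value: 'opaque-token', # hypothetical value
  secure: true,          # browser sends it only over connections it deems secure
  same_site: :lax,       # never sent on cross-site subrequests, however similar
                         # the other domain's name looks
  httponly: true         # not readable from JavaScript
}
```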
Ahh, my mistake, yes that sounds right to me.
Cool, going with @dr-bonez's approach to only disable secure cookies for .onion sites sounds like the best solution here.
Oh, ok, I misunderstood that Rails dropped the secure cookies. … It's probably possible to monkey-patch …
If the only thing we use from secure_headers is the cookie thing and we're going to monkey-patch it, why not just write our own Rack middleware?
Good point. We'd have avoided the cookie parsing stuff, I guess.
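A minimal sketch of what such a middleware could look like, assuming Rack 2 semantics where multiple Set-Cookie values are stored as one newline-joined string; the class name and insertion point are hypothetical, not code from this PR:

```ruby
# Hypothetical middleware: strip the "secure" flag from outgoing cookies
# when the request was made to a .onion host, so they survive plain HTTP.
class OnionCookieFlags
  def initialize(app)
    @app = app
  end

  def call(env)
    status, headers, body = @app.call(env)
    host = Rack::Request.new(env).host.to_s
    if host.end_with?('.onion') && headers['Set-Cookie']
      # Rack 2 joins multiple Set-Cookie headers with newlines.
      headers['Set-Cookie'] = headers['Set-Cookie']
                                .split("\n")
                                .map { |cookie| cookie.gsub(/;\s*secure/i, '') }
                                .join("\n")
    end
    [status, headers, body]
  end
end
```

It would have to sit above the cookie-writing middleware in the stack (e.g. config.middleware.insert_before ActionDispatch::Cookies, OnionCookieFlags) so it sees the Set-Cookie headers on the way out.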
* Enable secure cookie flag for https only
* Disable force_ssl for .onion hosts only

Co-authored-by: Aiden McClelland <[email protected]>
This fixes the way Mastodon is loaded over onion services.

This change will allow the onion service version of a Mastodon instance to load everything over http while keeping clearnet access of the site https-only.

Shoutout to @psjbeisler for finding this guide for onion services and Rails by riseup.net: https://riseup.net/security/network-security/tor/onionservices-best-practices#onion-services-and-rails-4

The changes in this pull request are in line with the advice listed there. Importantly, these changes won't make access to clearnet (non-onion-service) versions of the site less secure, for the following reasons:

* Disabling the force_ssl directive merely removes an additional layer of redirects; the redirect from http to https and the setting of HSTS headers are already taken care of in the nginx config.

See #15501 for more discussion.
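For context, Rails exposes a hook for exactly this kind of exception: ActionDispatch::SSL accepts an exclude proc via config.ssl_options, so force_ssl can stay on for clearnet traffic while the redirect is skipped for onion hosts. A hedged sketch of that mechanism (not necessarily the exact code in this PR):

```ruby
# config/environments/production.rb (sketch)
Rails.application.configure do
  config.force_ssl = true

  # Skip only the HTTP -> HTTPS redirect for .onion hosts; the HSTS header
  # and secure-cookie flagging done by ActionDispatch::SSL are unaffected
  # by this exclusion.
  config.ssl_options = {
    redirect: {
      exclude: ->(request) { request.host.to_s.end_with?('.onion') }
    }
  }
end
```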