classical HTTPS certificates to authenticate your SSH3 server. This mechanism is more secure than the classical SSHv2 host key mechanism
I like the idea of trying something new, but between making claims like this without linking to a full explanation, and calling the thing SSH3 as if it's officially a new version… I'm not happy with how they're approaching things.
It makes me wonder, though: who has the right to name something SSH-3? Not legally, but rather, who can we agree is the right set of people to assign that name? At least HTTP/3 existed under a different name and was adopted by a standards body later.
I mean, the name SSH is not really worthy of protection; in the end, the original implementation, SSHv1, apparently eventually became proprietary itself.
Though when it comes to the process, that's good feedback nevertheless. Perhaps someone should open an issue on their GitHub.
The IETF's initial working name for SSH-2 was "Secsh"; it only later became SSH-2, so one might expect the same process for anyone attempting to create a new SSH version.
In the end, I'm surprised there aren't more attempts to lift SSH to a newer standard; a lot has moved on. Now with SSH3, but also with mosh, the rise of the web, networked microcontrollers and so on, there is definitely a great opportunity to do so.
At the same time, OpenSSH is like a cathedral on top of a mountain on an island: carefully engineered, slowly crafted, with a lot of thought spent on very important aspects (private keys encrypted in memory comes to mind). Whatever comes next should hopefully end up on par with that.
QUIC is the first time it was really worthwhile. The SSH protocol itself is fairly modular. You can plug in new encryption schemes, different hash schemes, and even different key types pretty trivially. Most of the evolution has been in that direction. We’ve moved from RSA to ECDH fairly smoothly over the last few years and you can store keys in TPMs or U2F tokens easily. SSH supports multiple channels and different out-of-band message types, so it’s easy to extend.
About the only thing that would require changes would be replacing TCP. SCTP never took off, so QUIC is the first thing that looks like a plausible replacement. QUIC bakes in a lot of the security at a lower level, so just running SSHv2 over QUIC is probably not useful and you need different encapsulation.
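That pluggability is visible in plain OpenSSH configuration. A sketch (the host name is hypothetical; the options and algorithm names are real OpenSSH ones, with the post-quantum key exchange available in recent releases):

```
# ~/.ssh/config: algorithm negotiation is configurable per host
Host example-host
    # Hybrid post-quantum kex with a classical fallback
    KexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com
    # A FIDO/U2F-backed key, generated with: ssh-keygen -t ed25519-sk
    IdentityFile ~/.ssh/id_ed25519_sk
```

Swapping any of these in or out requires no change to the protocol itself, which is the modularity being described.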
It also doesn't support the newer versions of the various protocols. I recently implemented an SFTP server and was quite surprised to discover that OpenSSH doesn't support anything newer than v4 (the latest RFCs are for v7).
That said, it has an impressive security track record. There are a lot of CVEs, but most of them are either in niche features or possible sandbox escapes that are only a problem if you couple them with an arbitrary-code-execution vulnerability in the rest of the code. Few other things have the same track record.
I'm not convinced they're equivalent. There's a lot of work suggesting QUIC is weak against things TLS 1.3 is strong against, e.g. https://link.springer.com/article/10.1007/s00145-021-09389-w and https://link.springer.com/chapter/10.1007/978-981-15-4474-3_51, and it seems possible that this sort of change actually creates a difficult-to-understand vulnerability.
I'd recommend checking back on this project in a few years to see whether issues continue to be uncovered and become better understood, or whether this turns out to be another dead end.
I think it is worthy of protection in the sense that any word with a commonly understood meaning and connotations should be protected so that it can continue to be a useful word.
People understand SSH to be a standardized protocol that has been gradually developed over decades and is now ubiquitous on Unix machines. The transition from SSH-1 to SSH-2 respected people’s expectations in a way that SSH3 does not.
If a project’s name subverts expectations and creates the possibility for confusion, it’s not grounds for a lawsuit, but it reflects poorly on the project and will turn off a lot of people.
For example, this recently happened when blockchain bros tried to run off with the name “web 3”. That was the first thing I thought of when I saw that this new project was trying to land-grab the “ssh 3” name. It would have been a better first impression to pick a new name and let the SSH community figure it out.
Old news is old, but “ssh” is a registered trademark: https://www.linux.com/news/ylonen-we-own-ssh-trademark-heres-proposal/
While SSHv2 defines its own protocols for user authentication and secure channel establishment, SSH3 relies on the robust and time-tested mechanisms of TLS 1.3, QUIC and HTTP.

SSHv2 is more 'time-tested' than TLS 1.3 and QUIC, which are very recent developments in comparison, no?
I think what they mean is that if you want to implement SSH3, you can do so with relatively widely available building blocks, rather than SSHv2 bringing its own building blocks; this is why many programs simply shell out to the ssh executable rather than use a native library for their programming language. That could make writing an SSH3-compatible, Paramiko-like library potentially less of an involved project. (Disclaimer: I don't know SSH well enough to judge how complicated that would actually be.)
Calling this “SSH3” seems awfully bold.
This really doesn't seem more secure, in my opinion. The underlying systems, such as OIDC, are much more complex and more likely to fail. They haven't demonstrated a huge number of advantages other than that it fits their mental model better. Plus, their code has not been audited with the same rigor that SSH has.
should be merged into https://lobste.rs/s/svptcn/ssh3_ssh_using_http_3_quic
If the difference between 3 and 7 network round trips is perceivable to the user, the session will be near-unusable due to latency.
The web tech auth aspect looks pretty cool though.
In Australia I regularly use SSH interactively on hosts with 250+ ms RTT. Just another day on the internet.
Mosh is your friend, I imagine; it's certainly mine.
This. North America<->Europe connections routinely have pings of 100-150 ms. Far from unusable, but the difference between 0.3 and 0.6 seconds is pretty palpable, especially when you’re in a hurry.
I’ve had times when I’ve used SSH (and more recently mosh) over connections with over a second of latency. Does it suck? Yes. Do I have a better option? No.
It depends on the use case. A close datacenter for me has a 12 ms RTT (the Azure region closest to me), so the difference is 48 ms. I won't notice that at all. Some systems I log into have 100 ms RTTs. That's just at the edge of where SSH latency is annoying, and there the difference is going to be 400 ms. That's the difference between instant and a perceptible pause.

As @insantybit points out, a bunch of tooling does 'connect to a load of different machines over SSH and run this command'. If this is being done sequentially, for 100 machines at that latency that's a 40 s difference, which is pretty noticeable. For 1000 machines, you're probably not using tools that connect to each machine sequentially.
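As a quick sanity check of the arithmetic (assuming roughly 7 round trips to open an SSHv2 session versus the 3 claimed for SSH3, so 4 RTTs saved per connection):

```python
# Back-of-envelope: extra latency SSHv2's handshake costs over SSH3's,
# assuming ~7 round trips for SSHv2 session setup vs ~3 for SSH3.
SSHV2_RTTS = 7
SSH3_RTTS = 3

def saved_ms(rtt_ms):
    """Milliseconds saved per connection at a given round-trip time."""
    return (SSHV2_RTTS - SSH3_RTTS) * rtt_ms

print(saved_ms(12))                # nearby datacenter, 12 ms RTT
print(saved_ms(100))               # ~100 ms RTT link
print(saved_ms(100) * 100 / 1000)  # 100 sequential connections, in seconds
```

The round-trip counts are the claimed figures, not measurements; real setup time also depends on auth method and server load.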
I think the big win for QUIC in this use case is SSH streams. With conventional SSH, these are subject to head-of-line blocking, so if you're doing remote X11 and cat a large file in your terminal, a dropped packet in the file stream can cause X11 to freeze until the lost segment is retransmitted. With QUIC, these are independent streams at the protocol level, so the terminal output may pause but the X session remains responsive.
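A toy model of that difference (illustrative only; nothing here is real TCP or QUIC, and the packet tuples and stream names are made up):

```python
# Two logical streams ("file" and "x11") multiplexed over one connection.
# A packet tagged "LOST" is dropped; compare what each delivery model can
# hand to the application before the retransmission arrives.
packets = [("file", 1), ("x11", 1), ("file", 2, "LOST"), ("x11", 2), ("x11", 3)]

def delivered_single_ordered_stream(pkts):
    # TCP-style: one ordered byte stream, so delivery stops at the first
    # loss; packets for the other stream are stuck behind it too.
    out = []
    for p in pkts:
        if "LOST" in p:
            break  # head-of-line blocking
        out.append(p[:2])
    return out

def delivered_independent_streams(pkts):
    # QUIC-style: a loss only stalls the stream it belongs to.
    blocked, out = set(), []
    for p in pkts:
        if "LOST" in p:
            blocked.add(p[0])
        elif p[0] not in blocked:
            out.append(p[:2])
    return out

print(delivered_single_ordered_stream(packets))  # X11 frozen too
print(delivered_independent_streams(packets))    # X11 keeps flowing
```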
I've seen a lot of ops tools that work by basically saying "run this command on boxes that match these properties" and using SSH under the hood. This can mean initiating hundreds or thousands of SSH connections. In those instances the improvement would be welcome.
Ah, yeah I can see that as a place where this would matter. I’m usually SSHing to one box at a time, didn’t think about this use case.
OpenSSH supports connection sharing to avoid this overhead.
Can you elaborate? How would connection sharing help with opening connections from host A to hosts {B, C, D, …} ?
Open the connection once and keep it open. Then reuse it when you need to run a new command.
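For reference, connection sharing in OpenSSH is the ControlMaster feature; a typical ~/.ssh/config stanza looks like this (the options are real, the 10-minute persistence value is just an example):

```
Host *
    # First connection becomes the master; later ones reuse its socket
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    # Keep the master open 10 minutes after the last session exits
    ControlPersist 10m
```

With this in place, repeated `ssh host command` invocations skip the handshake and authentication entirely as long as the master connection is alive.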
I don't think that's a one-to-one comparison to just being faster. For one thing, it doesn't work the first time, and it relies on holding lots of open connections to various servers. For another, it likely has a really negative impact on your security, since it would make SSH hijacking much easier (you're bypassing repeated authentication).
3 round trips actually is a bit sad. I would have expected 0-RTT from QUIC; that's the whole point of it, after all.
I don’t think I’d want the compromises of 0RTT for something like SSH.
If this could be turned into a Caddy module, that may make it even more compelling. Caddy already automates certificates via Let’s Encrypt. But maybe there’s a tight coupling with HTTP/3 that makes this impossible, I don’t know.
There's already an SSH server based on Caddy: https://github.com/kadeessh/kadeessh. I think it would be interesting to see what it would take for them to add SSH3 support.
As I posted on the other link, I don't really see a huge advantage to this. There are lots of ways of decreasing latency with SSH, everything from mosh to other approaches. This seems like a solution in search of a problem.
The docs suggested you could have a secret URL for your SSH3 endpoint, but isn't that going to be immediately visible to all the network infrastructure the request passes over?
Since it's encrypted, nope, that's not the case. It could (and probably would) be problematic if it were part of the domain name, since SNI is usually sent unencrypted.