QUIC’s best trick is to allow a client and server to send data, even if they have never connected.
Just like X.25's "fast select" from the 1970s?
Quick UDP Internet Connections (QUIC) has graduated to the Internet Engineering Task Force's standards track. The QUIC spec, aka RFC 9000, appeared on May 27th, marking the end of the beginning for a story that started in 2013, when Google revealed it was playing with QUIC, which it then described as "an early-stage network …
Several years ago I wrote a short piece on an observed feature in the industry I was working in (oil&gas exploration). It was prompted by a presentation at a big conference showing a new way of improving project management - a technique I’d been using some 15 years earlier.
I noticed that, in many of the major project management organisations, few people stayed in a post for more than 5 years, many no more than 3. I had also been able to observe staff handovers in a hospital ward for a couple of years (a family member with a long-term stay - now out and about). There were 3 shifts in the ward and I noticed a number of times when the handover meetings missed something (fortunately not life-threatening, but still a concern). I compared this to the offshore regime where there are only 2 shifts. In the latter, you are usually handing over to the person who handed over to you; this didn't happen in the former, and the missed items (fortunately very few) were in the second handover. I put this down to the fact that the second shift, whilst aware of something, didn't own the issue and so wouldn't put the same emphasis on it. The saving grace was that most points didn't become significant until they got to the third shift (who were already aware of them).
Now apply this to the corporate situation where a problem arises and is solved. The situation (problem and solution) is passed on to the next generation but, when it comes to the one after that, the reason for the solution is lost and the problem comes back. After all, the problem often came about because it was the easier way, and the solution needed a bit of extra effort.
Taking an average of 4 years in post, we shouldn’t be surprised when wheels start getting reinvented after 12 years…. I called it by a made-up word: hystery.
Lifers not being kept is something I see more in metropolitan-based companies, or at least among staff based at metropolitan offices, as they often have a bigger choice of employers and better advancement options. Those based out in the sticks are less inclined to move on, as there are often only one or two companies that pay the rates required or have estates large enough to be interesting, and moving companies can often mean moving house or looking at a long commute. In my own company, my own 20 years there is small fry compared to some; 30-35 is not unknown, and none of those people are based in the major city offices.
Cities are a great place to locate if you want a large pool of employees to choose from.
Cities are a great place to locate if you want a large pool of employers to choose from.
It works both ways.
Like many oldies in the industry, now in my 4th decade, I sit at my desk, look at all the new buzzwords and 'new' concepts flying round, and think......
WTF are they talking about? It's not new, it's a reinvention of the wheel from before that is different in one of the following ways.....
0. It's round, has 5 spokes....
1. It's round, it has ten spokes...
2. It's square, has 8 spokes....
No such thing as new in this game... it's a constant rehash, new shiny coat, s**t underneath, with the utopian promise that it's faster and cheaper, but after 6 months of real-world usage....... it isn't
You should read the actual RFCs. It's clear you haven't understood any of it.
TCP does exactly nothing to prevent packet injection. Anyone on the path can swap a few packets if they feel like it, and there's no way for the other end to detect it. An application can only defend against those attacks by layering security on top of the stream - usually TLS.
QUIC requires TLS.
The only way to spoof QUIC packets is to break the encryption or poison the certificate chain. Not impossible of course, but no less difficult than breaking HTTPS.
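For what it's worth, here's a toy illustration of that point in Python, nothing QUIC-specific and with made-up payloads and key: raw TCP's only built-in check is a 16-bit checksum that any on-path attacker can recompute for a rewritten segment, whereas keyed authentication of the kind TLS (and hence QUIC) applies makes the tampering detectable.

```python
import hashlib
import hmac

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum - the only integrity check raw TCP carries."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(int.from_bytes(data[i:i + 2], "big") for i in range(0, len(data), 2))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

original = b"PAY 10.00 GBP TO ALICE"
tampered = b"PAY 99.99 GBP TO MALLET"

# An on-path attacker just recomputes the checksum for the rewritten segment and
# drops it into the header; the receiver has no way to tell anything changed.
print(hex(inet_checksum(original)), hex(inet_checksum(tampered)))

# With a keyed MAC (conceptually what TLS/QUIC packet protection provides),
# the forgery fails unless the attacker also has the session key.
key = b"session key known only to the endpoints"
tag = hmac.new(key, original, hashlib.sha256).digest()
print(hmac.compare_digest(tag, hmac.new(key, original, hashlib.sha256).digest()))  # True
print(hmac.compare_digest(tag, hmac.new(key, tampered, hashlib.sha256).digest()))  # False
```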
...the pieces can be fingerprinted by address and size, which encryption cannot conceal.
There's a solution for that, i.e. salting traffic to make traffic analysis harder. But then the point is to minimise competitors' fingerprinting efforts. It also kinda misses the point of UDP vs TCP. One offers some form of reliable networking, the other doesn't. But if you're billing by network usage, converting TCP sessions into QUIC increases traffic, especially if there's congestion, packet loss, and data has to be retransmitted.
Whether that makes a session 'faster' is debatable, especially if it depends on applications rather than network devices to notice packet loss. There are also a few other potential inverse efficiency snags if sessions are broken into multiple streams and 'goodput' is reduced with more headers than payload. Plus extra fun for silicon and buffer tuning, if 1 session becomes 10 small-packet 'spray & pray' communications.
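On the headers-vs-payload point, a back-of-envelope sketch (the header sizes are illustrative assumptions, not gospel, and TCP would normally carry its own TLS record overhead on top):

```python
# Rough header arithmetic only - assumed sizes: IPv4 20 B, TCP 20 B, UDP 8 B,
# a ~11 B QUIC short header, and a 16 B AEAD tag on every QUIC packet.
IP, TCP, UDP, QUIC_HDR, AEAD_TAG = 20, 20, 8, 11, 16

def goodput_fraction(payload_bytes: int, per_packet_overhead: int) -> float:
    """Share of bytes on the wire that are actual payload."""
    return payload_bytes / (payload_bytes + per_packet_overhead)

for payload in (1200, 100):  # a near-full packet vs a small 'spray & pray' one
    tcp = goodput_fraction(payload, IP + TCP)
    quic = goodput_fraction(payload, IP + UDP + QUIC_HDR + AEAD_TAG)
    print(f"{payload:>5} B payload: TCP ~{tcp:.0%}, QUIC ~{quic:.0%}")
```

The gap is negligible for full-sized packets and grows as the payloads shrink, which is the small-packet snag in a nutshell.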
"It also kinda misses the point of UDP vs TCP. One offers some form of reliable networking, the other doesn't. "
Counterpoint: TCP is layer 4 and can't ensure reliability if layer 1 (the physical layer) is unreliable, and wireless networks are usually less reliable. Plus a single connection usually means a single thread, reducing parallelism potential.
"Whether that makes a session 'faster' is debateable, especially if it's depending on applications to notice packet loss than network devices."
Consider that back when the protocols were first deployed, local parallelism wasn't exactly en vogue. Now, most applications are expected to be multithreaded and multitasking, able to prioritize and know what's important and what's not. Otherwise, you're going to get into a debate over which has a better idea of how to prioritize: the application layer or the protocol layer?
Yup. I can kinda see the point to QUIC inside datacentres or in supercomputer environments where parallelism can be handy. On congested public networks, far less so. Or even in congested private/virtually public. But that's always been one of those challenges with the IP suite compared to alternative protocols.
Right. As best it can, TCP creates reliability over an unreliable path, although there is always a residual probability of failure. But if you want transactional integrity, you have to add a 2-phase commit on top of TCP (or whatever other transport protocol you use).
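A minimal sketch of what that extra layer looks like (in-memory Python, class and method names invented for illustration): the transport gets the bytes there, but agreement on whether the transaction happened still needs a prepare/commit round on top.

```python
# Toy two-phase commit, in memory rather than over sockets, just to show the
# extra round of agreement layered above a reliable byte stream.
class Participant:
    def __init__(self, name: str, will_commit: bool = True):
        self.name, self.will_commit, self.state = name, will_commit, "init"

    def prepare(self) -> bool:
        """Phase 1: vote yes/no and hold resources if voting yes."""
        self.state = "prepared" if self.will_commit else "aborted"
        return self.will_commit

    def finish(self, commit: bool) -> None:
        """Phase 2: apply the coordinator's global decision."""
        self.state = "committed" if commit else "aborted"

def two_phase_commit(participants) -> bool:
    votes = [p.prepare() for p in participants]   # phase 1: collect the votes
    decision = all(votes)
    for p in participants:                        # phase 2: broadcast the outcome
        p.finish(decision)
    return decision

nodes = [Participant("db1"), Participant("db2", will_commit=False)]
print(two_phase_commit(nodes), [p.state for p in nodes])
# False ['aborted', 'aborted'] - a single 'no' vote aborts the lot
```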
>but background images? Many other images? A lot of it is nice but not essential, so UDP should be OK for them; if they get lost, oh well.
Those are the most important parts that must not be lost - think of the lost ad revenue!
I hope with QUIC webpages will be able to load without having to wait for all the bloat that turns 3k of content into 70MB from 50+ sources.
QUIC, like TCP and many other protocols, takes an unreliable way of sending packets (ethernet frames, UDP packets) and creates reliability over the top.
There are a lot of different ways to do that.
TCP has a lot of known problems when running over congested links: latency rises and throughput falls off sharply. That can easily end up with everything waiting for a favicon to get downloaded before any of the actual useful content, mainly due to the current insane way web pages are assembled...
QUIC is supposed to be "better", though I'm not yet sure what its pitfalls are.
I've used several other reliable-over-UDP protocols that fix TCP's major problems and replace them with their own new and exciting major problems.
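On the congested-link point above, the usual back-of-envelope is the Mathis et al. approximation, throughput ≈ (MSS/RTT)·(C/√p). A quick sketch with made-up numbers shows why even modest loss hammers a single TCP flow:

```python
# Mathis et al. rule of thumb for steady-state Reno-style TCP throughput under
# random loss: rate ~= (MSS / RTT) * (C / sqrt(p)), with C ~= 1.22. The MSS,
# RTT and loss rates below are invented, purely to show the shape of the curve.
from math import sqrt

MSS, RTT, C = 1460, 0.05, 1.22              # bytes, seconds (50 ms), constant
for p in (0.0001, 0.001, 0.01, 0.05):       # packet loss probability
    rate_bps = (MSS / RTT) * (C / sqrt(p)) * 8
    print(f"loss {p:.2%}: roughly {rate_bps / 1e6:5.1f} Mbit/s for a single flow")
```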
This is shocking news. Such a thing has never been heard of before. Who do I complain to? Oh, I can't because Google's support is non-existent and the W3C are powerless against Google.
Still, the good news is they'll get bored after a year and drop it after messing everyone else around.
If it gets rid of TCP, I will be pleased.
And if Microsoft's open source version, MsQuic, can supplant Google's version, I will applaud.
Giving kudos to Microsoft hurts my brain but lately they seem to be earning them. At least their naming committee is still stupid. I suppose we should be glad that they didn't name it Quicky.
Get rid of javascript and all 'load resource from a totally different domain' rubbish.
I remember when we used to fine tune the raw html to get page load times down. Now it seems nobody cares and they just load routines at random without caring what performance/security holes they create, just to make it 'cool-looking'.
QUIC is a protocol highly optimised for web use, and that's fine.
TCP is a general purpose 'reliable' protocol that has been around for about 45 years and still works pretty well.
Can you make a new protocol better in some circumstances? Absolutely. Will QUIC become the backbone of the internet for the next 45 years? No. And personally I'm not a big fan of designing a data exchange protocol around a set of user-space application requirements, because we'll all have shortly moved on.
Moved on to what? I think it's fair to say that delivering content like webpages to browsers will be around for quite some time yet. Yes the internet is changing at a crazy pace but I think this is dealing with a problem that will be relevant for ages.
Other options: TOR? BitTorrent?
I don't think so
I agree with your point about TCP being general purpose - although it has been heavily optimised for two (conflicting) use cases (one way transfer of bulk data as quickly and efficiently as possible, and interactive "telnet/ssh" use - at least in the days when I was involved).
But given the importance of web traffic it is reasonable to think that TCP can't really be optimised for it, and that a new transport protocol could be more efficient.
There are other transport layer protocols designed to optimise niche applications (such as SCTP for SS7). One for the web seems likely to have quite a long lifetime.
One way to break fingerprinting would be using standardized libraries that obscure packet rhythms. These could assign some small random amount of buffering latency to each end-to-end route, with the buffering used to maximize packet size. That buffering would make the traffic between different sites more similar, hopefully with a smaller variance than the random buffering latency assigned to the route.
No doubt that Google's solution will be to flood apps and servers with their own free QUIC implementation that maximizes fingerprinting. If it really is a layer on UDP, apps could implement QUIC themselves in a snooping-friendly manner rather than rely on a more secure implementation in the OS.
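A minimal sketch of the padding-and-buffering idea above, not any real library's API: the bucket sizes, the delay bounds and the example address are all assumptions, and the "buffer to coalesce data" part is reduced to a plain sleep.

```python
import os
import random
import socket
import time
from functools import lru_cache

BUCKETS = (256, 512, 1024, 1232)            # illustrative padded sizes, in bytes

@lru_cache(maxsize=None)
def route_delay(addr) -> float:
    """Pick one small random buffering latency per destination and stick to it."""
    return random.uniform(0.002, 0.020)     # 2-20 ms, made-up bounds

def pad_to_bucket(payload: bytes) -> bytes:
    # A real scheme would also need length framing so the receiver can strip padding.
    size = next((b for b in BUCKETS if b >= len(payload)), len(payload))
    return payload + os.urandom(size - len(payload))

def send_obscured(sock: socket.socket, payload: bytes, addr) -> None:
    time.sleep(route_delay(addr))               # timing normalisation for this route
    sock.sendto(pad_to_bucket(payload), addr)   # size normalisation

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_obscured(sock, b"example payload", ("192.0.2.1", 4433))   # TEST-NET address
```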
TLS has a lot of baggage. TCP was designed in the days when buffers were small, serial links were slow and memory was limited. Furthermore, congestion management didn't take time into account, the way CoDel does. The real shame is that there is no version based on the Noise protocol.
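For anyone who hasn't met CoDel, a stripped-down sketch of the time-based part of it (the constants are CoDel's usual defaults; the square-root control law and other refinements are left out, so treat it as an illustration rather than the algorithm):

```python
# CoDel's core idea: the drop decision keys off how long packets have been
# sitting in the queue (sojourn time), not how long the queue is.
TARGET = 0.005      # 5 ms acceptable standing sojourn time
INTERVAL = 0.100    # 100 ms window over which it must persist

class CodelLite:
    def __init__(self):
        self.first_above = None    # when sojourn time first crossed TARGET

    def should_drop(self, enqueue_time: float, now: float) -> bool:
        sojourn = now - enqueue_time
        if sojourn < TARGET:
            self.first_above = None            # queue drained enough, reset
            return False
        if self.first_above is None:
            self.first_above = now             # start the persistence timer
            return False
        return now - self.first_above >= INTERVAL   # persistently above target

q = CodelLite()
print(q.should_drop(enqueue_time=0.00, now=0.02))   # above target, timer starts -> False
print(q.should_drop(enqueue_time=0.05, now=0.13))   # still above 100 ms later -> True
```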
I’m sorry, I just don’t get this.
For years all I’ve seen is how a tweak to the connection part of the data is the holy grail for speed and amount of data used; it will solve all issues… this is just smoke and mirrors.
In reality, we have to wait for multiple DNS lookups for ad networks, then for all the auctions to complete before the advertiser decides which ad to serve, then of course for the data transfer of the ad itself, often before the page will render fully.
When you only have 1 bar of signal and trying to get an address or so because you are lost, then what would really speed it up is dropping all the crap.
Firefox's implementation of HTTP/3 with QUIC is going live this week too, so that's another point that'll drive adoption. I've been using it for a year solid, and sporadically before that, and when it works, it works great. (When it doesn't, it takes extra refreshing and it's really annoying. Twitter, for instance, has a terrible HTTP/3 server.)