Today I learned about HTTP trailers. What the fuck. It looks like they exist for pretty sensible reasons, but still, what the fuck.
A lot of load balancers, proxies, WAFs, and other application-level devices don’t support trailers. It’s a persistent issue trying to get gRPC to work across larger networks.
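For context, this is roughly what emitting a trailer looks like from a Go net/http handler (a minimal sketch; gRPC carries its status code in a Grpc-Status trailer, which is exactly the kind of thing these middleboxes tend to drop):

    package main

    import "net/http"

    func handler(w http.ResponseWriter, r *http.Request) {
        // Trailers must be announced before the body is written...
        w.Header().Set("Trailer", "Grpc-Status")
        w.Write([]byte("response body"))
        // ...and assigned afterwards; net/http emits them after the
        // final chunk of the chunked response body.
        w.Header().Set("Grpc-Status", "0")
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }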
For non-sensible reasons, see the differences between curl ip.wtf and curl -i ip.wtf ;-)

Wait how the hell does that work? Even with -v it seems to be making exactly the same request.
EDIT: Looking at the source code, it’s putting escape codes on the trailer to delete the previous text and redraw it. This feels like it could potentially be exploited for some more nefarious uses
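The trick is roughly this (a sketch of the ANSI escapes involved, not ip.wtf’s actual code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        fmt.Println("some plausible-looking output")
        time.Sleep(time.Second)
        // ANSI escapes: move the cursor up one line, then erase it,
        // letting later output overwrite what the user already saw.
        fmt.Print("\x1b[1A\x1b[2K")
        fmt.Println("something else entirely")
    }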
Yes, curl output isn’t escaped, you have to be aware of this, it allows fun like curl parrot.live too (see https://github.com/curl/curl/pull/6231#pullrequestreview-535533021). I do think -v probably should escape output, but too many people have tried to make curl have some better options here and not succeeded.

It’s really fun seeing the hints that trailers were an afterthought. Many languages reuse the same types created for headers for trailers. From Go’s http client:
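Presumably something like this field on net/http’s Response, where trailers literally reuse the Header map type (abridged from the Go standard library):

    // From net/http (abridged); Header is map[string][]string.
    type Response struct {
        // Header maps header keys to values.
        Header Header

        // Trailer maps trailer keys to values in the same
        // format as Header.
        Trailer Header
    }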
And there are gems like this in the HTTP/3 spec:
I’d love to see people exploring capnproto more too if they really want to optimize.
Cap’n Proto is quite nice. My only complaint is that the core C++ API is really unpleasant to use, viz:

- it tosses out the entire C++ standard library and makes you learn & use its own bespoke one, including a rather complex implementation of promises/coroutines, and

- the APIs for packing and unpacking messages are IMO unnecessarily complicated and awkward.
But those are probably not issues when using the other language bindings.
Flatbuffers is a decent alternative. It doesn’t have nearly as many features as Cap’n Proto, but it has a much smaller API surface as a result, and it still has the same core advantage: binary serialization with fields that can be read directly from memory without decoding.
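A toy illustration of that shared advantage (not real FlatBuffers or Cap’n Proto code, just the underlying idea): with a fixed wire layout, reading a field is an offset into the buffer rather than a decode pass.

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // With a fixed layout, a field access is just arithmetic on the
    // received bytes; nothing is parsed or allocated up front.
    func score(buf []byte) uint32 {
        return binary.LittleEndian.Uint32(buf[4:8]) // field at a known offset
    }

    func main() {
        wire := []byte{1, 0, 0, 0, 42, 0, 0, 0} // id=1, score=42
        fmt.Println(score(wire))                // prints 42; no decode step
    }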
Promise pipelining is probably my favorite feature of capnproto, not sure if flatbuffers has that
I’ve looked at FlatBuffers, but it doesn’t include an RPC system. There’s some integration with gRPC, but then I’d have to deal with gRPC…
The rust crate is pretty arcane IMO
I still hold out hope for better FOSS ASN.1 tooling…
Can’t help feeling that that 30% is likely largely achieved through Google implementations. Even if it performs better and even if it has positive security profile implications, I still feel quite a strong resistance to having the entire web architecture move over to Google’s design. Especially with so many people using Go on the server side and Chrome on the client. Especially when Go was explicitly designed as a means for Google juniors to churn out more code to help them be more dominant. Especially when it’s not just the application layer any more, nope, we’re going to redesign the transport layer for you too, and we’re going to push it out there using our dominant software shares and you’re going to like it so much you’ll write blog posts about it. Just as I’m inclined to accept the reality of some amount of crime in the world as a compromise if everyone can have proper privacy, I’m tempted to forego the performance improvements and maybe even security enhancements if we as a society can maintain some vague sense of software plurality and even independence in the face of ever-increasing technological hegemony.
Woah, what an angle. I’m generally inclined to agree, but there are some facts that dampen your critique quite a bit.
HTTP/3 is supported by every major browser (with a few caveats): https://caniuse.com/http3
I’ve heard many times that Go isn’t actually used that much within Google and the vast majority of code is written in C++.
From the load balancer side, most of the CDN and cloud providers support HTTP/3: Cloudflare, Fastly, Akamai, Amazon CloudFront, GCP Load Balancer, Azure Load Balancer.
Because of these facts, I feel like characterizing this 30% number as “google just forcing this on us all” is just inaccurate from the deployment side of things.
You probably have a point that Google has had an incredibly large amount of input into what the HTTP/2 and now HTTP/3 specs look like. However, who else was pushing to make HTTP faster that could have actually succeeded? If no one wanted to implement HTTP/3, none of the things I talk about above would be true. It would really just be Chrome and Google services, but it’s not. It would just be a weird Google project that no one besides Google employees cares about. There are certainly many of those.
All good and fair points! None of which change the fact that Google do push things through by dint of their size and punching weight, especially in standards, and certainly don’t change the fact that I feel uncomfortable about a company who does that dominating the tech landscape so much, especially when it comes to core infra and architecture. The question of whether it’s good enough for other people & companies to want to adopt (especially when it’s handed to them for “free”, hey look we’ve done all the work for you) is almost an aside to my feeling/point, which is a social and political-cultural one rather than a technological one.
Actually, you know what, on re-reading your post I think your last point is a particularly good one. I guess it comes down to how much the third parties contributed to the tech & specs and how far they implemented something broadly complete that was handed to them. The further towards the latter, the more justified my feeling, and vice versa. Thanks for a good counter-argument.
Especially when Go was explicitly designed as a means for Google juniors to churn out more code to help them be more dominant.

This reminds me of when people used to claim that Go was only successful because Google’s entire marketing team prioritized it. Like Go is not some big project within Google; it was something a few engineers thought about and pitched to the company enough to get a team working on it. Google probably devotes comparable resources to Dart, Python, C++, etc. Google leadership, marketing team, etc. probably don’t even know Go exists.
And the bit about it being designed for Google juniors has only ever been marketing to sell Go to egotistical C++/etc developers who believe that simpler tools are for inferior engineers (that they are Real Men™️ who can safely use a “power tool” without a “safety guard”). In other words, “Go isn’t for you, you brilliant person, it’s for those ‘junior’ engineers who keep messing up your beautiful C++ code.”
Your first para seems to broadly say “it wasn’t about the marketing” (which I didn’t say anything about, I just said that however it got out in the world, it was originally designed by Google for Google to make them more able to churn out more stuff), while the second says “oh that was all marketing”. Obviously it’s more nuanced but there does seem a tension there.
Even so, this is a quote about the rationale for designing Go from Rob Pike:

It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical.

The very next point in the article says:

It must be modern. C, C++, and to some extent Java are quite old, designed before the advent of multicore machines, networking, and web application development. There are features of the modern world that are better met by newer approaches, such as built-in concurrency.

So not exactly lionizing those who use it. There’s also another quote attributed to Pike about it, but the link commonly quoted has changed and the Wayback Machine seems down right now:

The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
Put all that together and it does seem that, regardless of the size of the effort inside Google, the intention was to create a straightforward, easy-to-learn language that allowed junior programmers to get productive quickly. Which in itself isn’t a bad thing, but it’s the specific point you picked up on & so the one that I’m responding to - and even if it’s not a bad thing, I don’t have to like it or want everyone to use it, and it’s not that weird or out-there for me or anyone to put it in the context of all the other big technological waves pushed through by Google and feel like maybe we as an industry (and perhaps beyond) might want to ensure that we don’t end up down a single track.
There’s no contradiction between “Go’s success isn’t attributable to Google’s marketing team” and “Rob Pike said a thing a couple times”. Also, I tried to make it clear that I was neither claiming nor implying you said anything about marketing, only that your comment reminded me of similar claims about Google marketing (in both cases, there is an implication that Go is part of some high level Google strategy as opposed to other tools that they use internally).
Even so, this is a quote about the rationale for designing Go from Rob Pike: … The very next point in the article says … So not exactly lionizing those who use it. There’s also another quote attributed to Pike …

Right, the first and third quotes are appeasement for developers of other languages. The second quote is irrelevant to this discussion (it’s about older languages lacking multicore support; that’s clearly not a remark about Go’s target audience one way or the other).
Put all that together and it does seem that, regardless of the size of the effort inside Google, the intention was to create a straightforward, easy-to-learn language that allowed junior programmers to get productive quickly.

No one disputes that this was among the goals; the debate is about (1) whether that was the sole or overriding goal for Go’s development (especially whether a simple language might also benefit more senior developers) and (2) the extent to which Go was part of some larger, nefarious Google dominance strategy in a way that its other tools (e.g., Python, etc.) were not.
I understand the rationale, but requiring TLS for services running locally/laptop/same VM can be a PITA at times. h2c was great for this.
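For that local, no-TLS case, h2c is still easy to wire up in Go via golang.org/x/net (a minimal sketch):

    package main

    import (
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello over " + r.Proto))
        })
        // h2c.NewHandler accepts cleartext HTTP/2 (with or without the
        // Upgrade dance), so no certificates are needed on localhost.
        http.ListenAndServe(":8080", h2c.NewHandler(mux, &http2.Server{}))
    }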
I still don’t get why one should invest their time into this. The article seems to be implying some performance improvements, but no benchmarks or numbers were provided. Google themselves have yet to implement HTTP/3 support in the Go stdlib. They often claim that they are the biggest Go and gRPC adopters, and they often prioritize performance improvements with clear % wins.
I also don’t understand the rationale behind using ConnectRPC for gRPC. The entire point of gRPC is to define handlers as native functions, using structs as request/response objects. So why would one want to go back to HTTP handlers for gRPC? I think the selling point of ConnectRPC is to be able to use protobuf to define HTTP APIs rather than gRPC services?
The article seems to be implying some performance improvements, but no benchmarks or numbers were provided.

Yeah, agreed that this should go further. I do need to do some benchmarks, but quite literally the first step is making it work. My goals for this article were to provide an example of doing this and to talk about the general ideas behind why you’d want to use HTTP/3. Most of the things that make HTTP/3 “more performant” are related to the number of round trips required, which I feel like I explained decently.
HTTP/3 support in the Go stdlib is on its way, in time. Maybe you shouldn’t spend time on this if you don’t care about the benefits? I just thought that this stuff is interesting and others might also be interested in this topic.
The entire point of gRPC is to define handlers as native functions, using structs as request/response objects. So why would one want to go back to HTTP handlers for gRPC?

Yep, ConnectRPC works similarly to this. I think you’re confusing something returning an http.Handler with requiring a user to implement an http.Handler. With ConnectRPC, users implement RPC methods with typed input/output, the same as grpc-go. The difference is that ConnectRPC converts this to an http.Handler that you can mount using your favorite mux/router library. This lets you use the same tooling as the standard library instead of gRPC-specific tooling. For example, you can use “normal” http middleware with ConnectRPC.
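A minimal sketch of the shape (the GreetService schema is hypothetical; greetv1 and greetv1connect stand in for connect-go’s generated packages):

    package main

    import (
        "context"
        "net/http"

        "connectrpc.com/connect"

        greetv1 "example.com/gen/greet/v1"        // hypothetical generated types
        "example.com/gen/greet/v1/greetv1connect" // hypothetical generated glue
    )

    type GreetServer struct{}

    // Typed request and response, just like grpc-go.
    func (s *GreetServer) Greet(
        ctx context.Context,
        req *connect.Request[greetv1.GreetRequest],
    ) (*connect.Response[greetv1.GreetResponse], error) {
        return connect.NewResponse(&greetv1.GreetResponse{
            Greeting: "Hello, " + req.Msg.Name,
        }), nil
    }

    func main() {
        mux := http.NewServeMux()
        // The generated constructor returns a mount path plus a plain
        // http.Handler, so it fits any router or middleware chain.
        path, handler := greetv1connect.NewGreetServiceHandler(&GreetServer{})
        mux.Handle(path, handler)
        // Ordinary handlers can live alongside the RPC service.
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok"))
        })
        http.ListenAndServe(":8080", mux)
    }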
In this article, I used quic-go’s HTTP/3 server and client along with ConnectRPC. This was only trivial because ConnectRPC works nicely with net/http, so I was easily able to work with quic-go’s http3.Server and http3.RoundTripper. This is absolutely not possible without a lot of effort with grpc-go.
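For reference, the wiring looks roughly like this (a sketch against the quic-go http3 API as used in the article; certificate paths are placeholders, and QUIC is always encrypted so certificates are mandatory):

    package main

    import (
        "log"
        "net/http"

        "github.com/quic-go/quic-go/http3"
    )

    func main() {
        mux := http.NewServeMux() // mount the ConnectRPC handlers here

        // Client side: http3.RoundTripper slots into a standard
        // http.Client, which is all a ConnectRPC client needs.
        client := &http.Client{Transport: &http3.RoundTripper{}}
        _ = client

        // Server side: http3.Server serves any http.Handler over QUIC.
        srv := &http3.Server{Addr: ":443", Handler: mux}
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }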
Further, you can mount handlers alongside it (as in the sketch above) without making a new http.Server instance on a different port. grpc-go does support this, but it’s experimental and much slower. Also, ConnectRPC exposes gRPC-Web without the need for an additional load balancer deployment and network hop.
I think the selling point of ConnectRPC is to be able to use protobuf to define HTTP APIs rather than gRPC services?

I think you should re-evaluate what ConnectRPC actually is. It’s a complete replacement for gRPC. “Connect” is the protocol that ConnectRPC exposes alongside gRPC/gRPC-Web; it’s more compatible and looks like a normal HTTP+JSON or HTTP+protobuf API for unary calls. Streaming calls still require some special framing. These three protocols (Connect, gRPC, gRPC-Web) all come “built in” with ConnectRPC, so it gives you access to tooling for all three.
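To illustrate the unary case (reusing the hypothetical GreetService from the sketch above): under the Connect protocol the call is a plain HTTP POST with a JSON body, so even bare net/http can make it.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func main() {
        // Connect unary call: POST to /<package>.<Service>/<Method>
        // with ordinary JSON; no gRPC framing, no trailers needed.
        resp, err := http.Post(
            "http://localhost:8080/greet.v1.GreetService/Greet",
            "application/json",
            strings.NewReader(`{"name": "World"}`),
        )
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // {"greeting": "Hello, World"}
    }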
Personal opinion: all of this is only relevant if you’re operating at Google-ish scale, both in terms of load and staff size. Or, perhaps, if you’re running a very large JavaScript Web app that is comparable to a native app in terms of UX complexity.
Otherwise it’s just cargo-culting. If you’re not at the scale where HTTP/3 and gRPC make a difference, if you don’t have a horde of newly-hired junior engineers every year, if you don’t have a Web app you’re trying to optimise for size and load time: this seems to me like a complete waste of effort. Oh, and: if your Website isn’t monstrously bloated, these optimizations make no sense except again at Google-ish scale.
I disagree with this take simply because enabling HTTP/3 should be extremely trivial.
For Go, it is planned to be built into the stdlib http library. For web-exposed endpoints, many CDNs and load balancers already support HTTP/3: Cloudflare, Fastly, Akamai, Amazon CloudFront, GCP Load Balancer, Azure Load Balancer, etc. On the client side, all major web clients (AKA browsers) support HTTP/3 already.
I’ve experienced issues related to head-of-line blocking in the wild, specifically with gRPC. I’ve had to mitigate this by opening more connections, which is exactly the kind of thing HTTP/2 was hoping to fix.
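The mitigation I mean looks roughly like this (a sketch, not production code: spreading RPCs over several grpc-go ClientConns so one lost packet only stalls one TCP connection):

    package main

    import (
        "sync/atomic"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
    )

    // connPool round-robins across several ClientConns; each one is a
    // single TCP connection and thus one head-of-line-blocking domain.
    type connPool struct {
        conns []*grpc.ClientConn
        next  atomic.Uint64
    }

    func (p *connPool) pick() *grpc.ClientConn {
        return p.conns[p.next.Add(1)%uint64(len(p.conns))]
    }

    func newConnPool(addr string, n int) (*connPool, error) {
        p := &connPool{}
        for i := 0; i < n; i++ {
            cc, err := grpc.NewClient(addr,
                grpc.WithTransportCredentials(insecure.NewCredentials()))
            if err != nil {
                return nil, err
            }
            p.conns = append(p.conns, cc)
        }
        return p, nil
    }

    func main() {
        pool, err := newConnPool("localhost:50051", 4)
        if err != nil {
            panic(err)
        }
        _ = pool.pick() // hand this ClientConn to a generated stub per call
    }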
these optimizations make no sense except again at Google-ish scale

Again, if adding support for HTTP/3 is trivial (which it absolutely is in some contexts), then any optimizations you get out of it can be pretty small and still be worth it.
Trivial and simple are not the same thing, though I notice I didn’t clearly make that distinction in my (grumpy) reply.
Yes, implementing HTTP/3 in your code is trivial if it’s well supported by libraries (especially your standard library). The author does a good job of demonstrating that.
What I’m railing against is the complexity, which is best avoided if you’re not experiencing scale issues (and if those issues aren’t the fault of other over-complexity).