Weird take. Obviously you could do everything in untyped lambda calculus, but that sucks. Likewise, generics significantly reduce the need to use the any type, eliminating most remaining un-type-checked code. Adoption is low because they're used with restraint.
Likewise, generics significantly reduce the need to use the any type, eliminating most remaining un-type-checked code
There’s absolutely zero doubt that generics are useful. Literally everyone agrees with that!
What is also true is that generics have costs, across many dimensions. I think it’s also true that literally everyone agrees that generics are non-free.
What is curious though, is that it seems that there’s a sizable population who don’t acknowledge that cost. Which looks like:
Person A: generics are useful for X but they have costs C.
Person B: no, generics are useful for Y.
Another observation here is that overall type-system complexity is a spectrum. And this tradeoff-vs-zero-cost dialog tends to repeat at any point on this spectrum during a language's evolution.
Taking Rust as an example, I think the complexity of the trait system (type classes with type projections, including type function projections) is necessary complexity to make Rust's attempt at safe, "zero-cost" concurrency viable. On the other hand, I can't deny that some of the way I see the trait system used in the ecosystem (and even, unfortunately, in certain recent, dubious, as-yet-unstable additions to the standard library) is awful library design and obviously reflects the sophomoric obsession with "expressive type level programming" that Don Syme alludes to. I try to avoid depending on crates like this as much as I can, and my own code rarely ever involves defining new traits.
But how can we tame this jungle of expressiveness without losing the ability to prevent classes of bugs via automated, formal program analysis? One option perhaps is to have a clear distinction between the “application language” and the “systems language,” allowing you to omit features in each and create a simpler language in each category, whose features together combine via sum (application language + systems language) instead of product (application features * systems features).
Barring that, I would say that there is always the informal technique of culture development so as to avoid the use of terrible type programming. In other words, I want a language which can express typenum, with a community of practitioners who are wise enough never to actually use typenum. I don't see why that should be beyond us.
But how can we tame this jungle of expressiveness without losing the ability to prevent classes of bugs via automated, formal program analysis?
My gut feeling is that there's still room for improvement language-design-wise, and that we are still pretty far from the expressiveness-complexity Pareto frontier.
That's a very unsubstantiated feeling, but two specific observations make me feel that way:
the modules-vs-type-classes conflict. Modules feel simultaneously more composable and simpler, while traits win on ergonomics. It feels like there should be some way to have both.
the way that Zig does generics, in the sense of convincing the compiler to generate the code you want, is refreshingly simple. But of course it doesn’t allow for automated modular reasoning. It feels like there should be some way to regain the modularity without adding a Turing tarpit type-level language.
One possible mitigation is culture and idiom. Maintainers have a lot of soft power here. They write the reference documentation that people read when onboarding to the language. Some slogan about "concrete is better than abstract" could do a lot of work (it would probably be invoked legalistically in many cases, but that seems like the lesser evil to me). I think culture does a lot of work to keep Go code simple, even now that generics have been released.
I can somewhat easily envision someone writing Rust closer to how they would write C. I could even see the Linux project implementing guidelines for Rust that aim to keep it simple, and maybe other projects adopt those guidelines. Maybe they get enough traction to influence Rust culture more broadly?
One possible mitigation is culture and idiom. Maintainers have a lot of soft power here.
We could probably do a better job with the language we use to describe some features. For example, calling a feature advanced implies that its users are not beginners. If you instead call a feature niche, you imply that (1) it is not meant to be commonly used and (2) people do not need to understand it to be productive.
I think it’s also true that literally everyone agrees that generics are non-free. What is curious though, is that it seems that there’s a sizable population who don’t acknowledge that cost.
I’m confused about the distinction between “everyone agrees that generics are non-free” and “there’s a sizable population who don’t acknowledge that cost”. If there are people who don’t acknowledge the cost, doesn’t that imply disagreement that generics are non-free?
Aside from that confusion, I agree. In many of the debates I’ve witnessed about Go and generics over the years, the rabidly anti-Go people (who have always been extremely numerous, i.e., I’m not exactly nut-picking) have typically refused to concede any downside to generics whatsoever. Similarly, they’ve refused to acknowledge any utility in standardization (“most Go code is written in a very similar way”). And in fairness, I didn’t really think much of the standardization property before I used Go in earnest either, but then again I didn’t go around arguing decisively against it as though I had some kind of expertise with or perfect knowledge about it.
By contrast, within the Go community there has usually been a pretty robust debate about costs and benefits. Vanishingly few people would argue that generics confer no benefit whatsoever, for example.
I might be using the wrong words here, so let me make this more specific. Let's take a simple statement: "generics make the compiler harder to implement". I can see people expressing the following positions:
no, generics don't actually make the compiler harder to implement (disagreement that there's a cost)
generics make the compiler just a tiny bit more complex (disagreement about the magnitude of the cost)
generics make the compiler more complicated, but you need to write the compiler once and you use generics daily (statement about tradeoff)
generics make the compiler more complicated, but that’s irrelevant so I won’t be even addressing that (knowing, but not acknowledging the cost).
From my observation of such discussions, I think position 4 is puzzlingly prevalent. Well, maybe I am misunderstanding things and that's actually position 1, but I find that unlikely. But also that's squishy human theory-of-mind stuff, so I might be very wrong here.
I think part of the disconnect here is that apparently you’re very concerned about complexity in the compiler. I didn’t realise that was your concern until this comment.
Obviously a slow, buggy compiler is bad, but I generally view the role of the compiler as eating complexity off the plate of the programmer; instead the costs I’m thinking about are those that fall on the users of compilers.
No, I am not very concerned about implementation complexity. I use this as the first example because:
this is something I am unusually familiar with
this is the most clear cut and objective cost, which is important for the meta discussion of ignoring costs
As I've written in my other comment (and, before that, at https://matklad.github.io/2021/02/24/another-generic-dilemma.html), my central concern is the ecosystem cost in the form of more-complex-than-it-needs-to-be patterns and libraries. And, to repeat, Go is an interesting case here because it is uniquely positioned to push back exactly against this kind of pressure.
One cost of "generic" generics I am very well aware of. I was at JetBrains when support for Rust and Go was added to the platform, in what later became GoLand and RustRover.
For Go, the story was that the language semantics was just done at one point, and devs continued to hack on actual IDE features. For Rust, well, we are still trying to cover the entire language! I don’t think either rust-analyzer or RustRover support all type system features at the moment?
Now, current Go generics are much simpler than what Rust had circa 2015, so the situation is by far not that bad. But this is still a big increase in implementation complexity relative to what’s been there before.
What I think is the biggest cost is the effect on the ecosystem. You can't actually stick to first-order code, because the libraries you use won't! There's definitely a tendency to push the ecosystem towards complexity, which plays out in Rust for example. See, e.g., boats' comment. But here I can confidently say you don't need to go all the way up to Rust to get into trouble — I've definitely lost some time to unnecessarily generic Java 7 code. I am very curious how this plays out in Go though! Go has a very strong focus on simplicity, so perhaps they'll rein this social tendency in?
Another cost of generics is compile time or runtime. Generics are either slow to compile, or slow to run. This is also often a cost imposed on you by your dependencies.
Finally, there's this whole expressivity treadmill: languages start with simple generics, but then the support tends to grow until past the breaking point. Two prominent examples here are Java's type system becoming unsound without anyone noticing, and Swift's type system growing until it could encode the (undecidable) word problem for semigroups.
Now, current Go generics are much simpler than what Rust had circa 2015, so the situation is by far not that bad. But this is still a big increase in implementation complexity relative to what’s been there before.
Per your other comment, I'm in camp 3: it's a one-time cost, and unless it takes literal years to pay it, we can mostly discount it. The compiler team is tiny compared to the size of the whole community, so the only relevant cost I see here is opportunity cost: what else could the compiler team do instead of implementing generics that would make life even better for everyone else?
Another cost of generics is compile time or runtime. Generics are either slow to compile, or slow to run.
I believe OCaml disproved that a number of decades ago.
Its solution required a small compromise (to avoid heap-allocating integers, their range is cut in half), but the result is dead simple: generic code can copy integers/pointers around, and that's about it. Compile once, run on any type. And when the GC comes in, discriminating between integers and pointers is easy: integers are odd, pointers are even. The result is fast to compile and run, as well as fairly simple to implement. And if we need natively sized integers, we can still heap-allocate them (the standard library has explicit support for such).
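To make the tagging trick concrete, here is a rough model of it written in Go (purely illustrative; this is not how OCaml's runtime is actually written, and the names are made up):

package main

import "fmt"

// word is an OCaml-style uniform value: the low bit distinguishes
// immediate integers (tag bit 1) from heap pointers (tag bit 0, since
// real allocations are word-aligned). Shifting the integer in by one
// bit is what cuts its range in half.
type word uintptr

func fromInt(n int) word { return word(n)<<1 | 1 } // shift in the tag bit
func isInt(w word) bool  { return w&1 == 1 }
func toInt(w word) int   { return int(w) >> 1 } // drop the tag bit

func main() {
    w := fromInt(21)
    fmt.Println(isInt(w), toInt(w)) // true 21
}

Generic code never needs to look at the tag; only the GC (and polymorphic primitives like structural equality) ever inspects it.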
Oh but you'd have to think of that up front, and it's pretty clear to me the original Go devs didn't. I mean, I have implemented a statically typed scripting language myself, and I can tell from experience: generics aren't that hard to implement. The Go team would have known that if they had tried.
Finally, there's this whole expressivity treadmill: languages start with simple generics, but then the support tends to grow until past the breaking point.
This slippery slope argument also applies to languages that don’t have generics to begin with. See Go itself for instance. The problem doesn’t come from generics, it comes from an unclear or expanding scope. If you want to avoid such bloat, you need to address a specific niche, make sure you address it well, and then promote your language for that niche only.
I’m pretty sure Go acquired generics for one of two reasons: either it wasn’t addressing the niche it was supposed to address well enough, and the compiler team had to say “oops” and add them; or people started using it outside of its intended niche, and the compiler team felt compelled to support those new use cases.
I believe OCaml disproved that a number of decades ago.
Not really; OCaml uses the standard solution in the "slow code" corner of the design space: everything is a pointer (with an important carve-out for ints, floats, and other primitive types).
This is the case where generics make even non-generic code slower.
That’s fair, but I still have a trick up my sleeve: there’s an intermediate solution between full monomorphisation and dispatch/everything-is-a-pointer. In C++ for instance, most template instantiations differ by one thing and one thing alone: sizeof().
Which narrows the choice quite a bit. Now we’re choosing between compiling once per type size, and passing an additional parameter to the generic functions (the type size). Sure you won’t get the fancy stuff like copy constructors, but the compilation & runtime costs involved here are much, much tamer than the classic solutions.
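To sketch what passing the size as a hidden parameter could look like (a hypothetical illustration in Go, not how any particular compiler does it):

package main

import "fmt"

// reverse permutes elements in place knowing only their byte size, so
// one compiled body serves every element type of that size. A real
// compiler would also pass alignment and copy hooks for types that
// need them (the "fancy stuff like copy constructors" caveat above).
func reverse(buf []byte, elemSize int) {
    n := len(buf) / elemSize
    tmp := make([]byte, elemSize)
    for i, j := 0, n-1; i < j; i, j = i+1, j-1 {
        a := buf[i*elemSize : (i+1)*elemSize]
        b := buf[j*elemSize : (j+1)*elemSize]
        copy(tmp, a)
        copy(a, b)
        copy(b, tmp)
    }
}

func main() {
    xs := []byte{1, 0, 2, 0, 3, 0} // three 2-byte elements
    reverse(xs, 2)
    fmt.Println(xs) // [3 0 2 0 1 0]
}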
It actually does, thanks for the link. And from the look of it, it does sound like it’s more complicated than just sizeof(), which I would expect if you want to support stuff like non-shallow generic copies. I’ll look it up.
I don’t think that’s true. That’s true of much more complex higher order type systems than go’s generics.
I think it is true of any possible implementation of generics. The two implementation choices are dynamic dispatch or monomorphisation. Dynamic dispatch includes a run-time cost, which may be small but also adds a bit more in terms of impeding inlining. Monomorphisation incurs a compile-time cost because you have to create multiple copies of the function.
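For concreteness, here are the two shapes of the same function in Go (a sketch; how the generic one compiles differs per language):

package main

import "fmt"

type Stringer interface{ String() string }

type ID int

func (i ID) String() string { return fmt.Sprintf("#%d", i) }

// Dynamic dispatch: one compiled body; every x.String() is a virtual
// call through the interface's method table, which also impedes inlining.
func JoinDyn(xs []Stringer) string {
    var s string
    for _, x := range xs {
        s += x.String()
    }
    return s
}

// Generic: a compiler may stamp out a copy per instantiation
// (monomorphisation, the Rust/C++ route, costing compile time) or share
// one body with a hidden dictionary argument (roughly Go's route,
// costing run time).
func Join[T Stringer](xs []T) string {
    var s string
    for _, x := range xs {
        s += x.String()
    }
    return s
}

func main() {
    fmt.Println(JoinDyn([]Stringer{ID(1), ID(2)})) // #1#2
    fmt.Println(Join([]ID{1, 2}))                  // #1#2
}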
Can you expand on that? I’m guessing that you mean that a program expressed without generics but solving the same problem will require more parsing and may end up being slower to compile?
Sorry, I just meant that even though monomorphisation takes non-zero time it doesn’t follow that overall compilation will be even perceptibly slower. Not free but maybe still cheap.
Monomorphisation incurs a compile-time cost because you have to create multiple copies of the function.
That, and some optimizations (at compile time) become possible when the compiler is smart about generics, rather than not knowing that similar things are actually the same.
The problem with C++ / Rust style generics is that monomorphized functions are generated at the use site, where the concrete types are known. They create lots of duplicate copies of monomorphized functions that must subsequently be deduplicated.
This is more of a problem for C++ than Rust, because C++ does this per compilation unit and so the linker typically throws away around 90% of the generated object code in the final link step. Rust doesn’t have the same separate compilation model, but still suffers from some of the same cases where the generated functions are identical and you need to merge them.
Or you do dynamic dispatch and have one copy of the function that can do the same work, just without any specialisation.
The tricky thing for a compiler (and I don’t know of any that are good at this) is working out when monomorphisation is a good idea. Both Rust and C# (and some ML dialects) have attributes that let you control whether something should be monomorphised. Go uses an approach where they monomorphise based on the size of types but do dynamic dispatch for method calls.
If I recall correctly, OCaml does neither monomorphisation nor dynamic dispatch. With the possible exception of special cases like structural equality, generic code simply treats generic data as opaque pointers, and as such runs exactly as fast as a monomorphic version would have.
The GC does some dynamic dispatch to distinguish integers from pointers, and integer arithmetic does do some gymnastic to ignore the least significant bit (set to 1 to distinguish them from pointers), so the cost is not nil. As high as actual dynamic dispatch though? I have my doubts.
Time taken directly: 50.722233ms
Time taken using interface: 57.640016ms
Time taken using generics: 143.882627ms
There’s just more indirection with generics, relative to interface passing. And the alternative to generics is not always an interface — often it is “this code doesn’t actually need to be generic”. And at that point, the difference might be stark: either you don’t monomorphise, in which case the non-generic code, by virtue of inlining, becomes massively faster, or you monomorphise, and now your thing compiles many times slower.
You’re talking about a language that added generics after the fact. This adds constraints they likely wouldn’t have had if they did it from the start, and is much more likely to bias heavily against generics. Try the same with OCaml, I bet the results would be very different.
If you don’t have generics, most of the time, rather than using any, you refactor the code to not be generic. In any case, any is not slow:
func count_to_million_any(c interface{}) {
    for i := 0; i < 100000000; i++ {
        cc, _ := c.(*CounterImpl)
        cc.increment()
    }
}
Time taken directly: 42.200609ms
Time taken using interface: 34.227707ms
Time taken using generics: 143.833287ms
Time taken using any: 49.59427ms
This makes sense — it’s still static dispatch after the downcast, and the downcast should be speculated right through.
You do have to be careful about inlining and devirtualization. In this case, the interface version is being inlined and devirtualized, and so it ends up executing the exact same code as the direct version. Adding //go:noinline annotations to the four functions changes the results from (on my PC, on Go 1.22.4)
Time taken directly: 21.522505ms
Time taken using interface: 21.692153ms
Time taken using generics: 129.865654ms
Time taken using any: 21.737464ms
to
Time taken directly: 21.559384ms
Time taken using interface: 128.452964ms
Time taken using generics: 128.384825ms
Time taken using any: 42.816321ms
which matches what I expected: generics are implemented using dictionary passing, so it's a virtual dispatch call just like interfaces, and the any version is going to do slightly more work than the increment, checking the type every loop iteration.
People like to claim that Go's compiler isn't particularly smart, but I find that it does do quite a few useful optimizations and instead chooses to skip those that offer less bang for the buck. For example, in this case, it actually executes every loop and increments instead of just loading a constant like many C compilers would, to (IMO) dubious benefit.
Typically the any type isn’t what we’d use in the absence of generics. I’ve written a lot of Go since 2012 and only a vanishingly small percentage of it uses any for generic use cases. Far more frequently, I just write type FooLinkedList struct { Foo Foo; Next *FooLinkedList } because it’s both more performant and more ergonomic than an any-based linked list with type assertions at the boundary.
I was using linked lists a lot when I was writing code in C. It is interesting that I never used linked lists in Go during the last 12 years. Standard slices work exceptionally well in places where C would require linked lists. As a bonus, slices induce less overhead on the Go garbage collector, since they are free from Next and Prev pointer chasing.
IMHO, slices are the best concept in Go. They are fast, they are universal, and they allow re-using memory and saving allocations via the a = a[:0] trick.
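For anyone who hasn't seen the trick, a minimal sketch of what's meant here: truncating with a = a[:0] resets the length but keeps the backing array, so the appends stop allocating once capacity has grown enough. (frame and the data are made up.)

package main

import (
    "bytes"
    "fmt"
)

func frame(lines [][]byte) string {
    var out bytes.Buffer
    var buf []byte
    for _, line := range lines {
        buf = buf[:0] // reuse the previous iteration's allocation
        buf = append(buf, '>')
        buf = append(buf, line...)
        buf = append(buf, '\n')
        out.Write(buf)
    }
    return out.String()
}

func main() {
    fmt.Print(frame([][]byte{[]byte("a"), []byte("b")})) // ">a" and ">b" lines
}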
I agree. I don’t use linked lists very often, and my point wasn’t that linked lists are a particularly good container; only that in the rare instances when I need to do something generic, I’m only very rarely reaching for any. There’s almost always a simpler solution. Usually it’s the builtin generic types, but even when it’s not, there’s almost always a better solution than any.
In English, “you may have dumb coworkers” does not mean “all of your coworkers are dumb”. Moreover, Go doesn’t have any design principle with respect to “dumb coworkers”, but it does care about keeping the cognitive burden small (you don’t have to be “dumb” to make mistakes or to want to spend your cognitive budget on business rather than language problems). I don’t think that’s uniquely a concern that Go has expressed–every language which exists because C or C++ or JavaScript or etc are too error prone is implicitly expressing concern for cognitive burden. Everyone who argues that static type systems provide guard rails which make it more difficult for people to write certain kinds of bad code is making essentially the same argument.
The cost is that your dumb coworkers can have more ways to write horrible code.
You aren't taking into account what the alternative to generics is. People aren't just going to stand idly by and write the same function twelve times; they are going to bolt macro systems (m4, cpp, fuck even jinja) on top of the source code, implement custom code preprocessors or parsers (pretty much every big C++ project of the 90s), use magic comments, funky build system extensions (you don't know how far you can get with CMake and some regexes), etc. One way or another the written code is going to be generic… and I tend to think it's much better if it's in a language-sanctioned way rather than every project reinventing its own take on it. Today in C++ people do so many more things in-language than in weird external tools compared to 20 years ago, and that's thanks to the continuous increase in the expressive power of its type system.
Alternatively, people may think more and come up with a simpler solution which doesn't require generics or external code generation. From my experience, well-thought-out interface-based solutions in Go are easier to read and reason about than generics-based solutions.
Update: when talking about interfaces in Go, people frequently think about empty interfaces and forget about non-empty interfaces exposing the minimal set of functions needed for solving the given generic task.
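The pre-generics standard library is the textbook example of this: sort.Sort needs only three methods, so any concrete slice type can opt in to the generic sorting task without any or type parameters. A quick sketch (Person is a made-up type):

package main

import (
    "fmt"
    "sort"
)

type Person struct {
    Name string
    Age  int
}

// byAge implements sort.Interface, a minimal non-empty interface.
type byAge []Person

func (a byAge) Len() int           { return len(a) }
func (a byAge) Swap(i, j int)      { a[i], a[j] = a[j], a[i] }
func (a byAge) Less(i, j int) bool { return a[i].Age < a[j].Age }

func main() {
    people := []Person{{"Ann", 40}, {"Bob", 25}}
    sort.Sort(byAge(people))
    fmt.Println(people) // [{Bob 25} {Ann 40}]
}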
My concern was (and sort of still is) that people would start writing Go libraries with gratuitous abstraction, like they do with other languages that embrace generics. “Just don’t use those libraries” isn’t much of a consolation if that’s the only available library for a given domain nor if the gratuitous abstraction becomes a property of a vast chunk of the ecosystem as with other languages. I’m happy to report that my concern has so far been misplaced–the Go community has done a fantastic job about exercising restraint.
Is that true? IIRC, if so, wouldn't they have been added a long time ago, with less resistance both from the PL creators and the community? I've (unfortunately) only recently had to delve back into Go, after previously using it professionally a few years prior.
Yes. This is true. Everyone agrees that generics are beneficial. Luckily, Go's language designers, besides knowing quite a lot about the benefits of generics, are also keenly aware of the costs. The costs are the reason why this wasn't a no-brainer feature to add in version 0.1 of the language.
As my old hardware prof told me, even the crappiest CPU features have uses! "It is useful" can't be a reason to add something; you need to compare it to the costs.
I’m glad to see the “used with restraint”. I was worried (and still am concerned) that people were going to write gratuitously abstract libraries like we see in other languages. But so far I’ve been happy with the degree to which people have used them appropriately in the ecosystem.
While these features may simplify writing code for specific domains […] But we don’t need additional mental load when dealing with production code, since we are already busy solving business tasks.
You mean, you’re already busy implementing by hand something the language could have helped you express better.
it becomes harder to debug such code, since you need to jump over dozens of non-trivial abstractions before reaching the business logic
Yes, because it’s far better when you traverse your own hand-made ad-hoc abstractions than standardized ones.
Why add useless abstractions in the first place? The code must be as simple as possible. Abstractions must be added to the code only when they are really needed. From my experience, the need for abstractions in Go is very rare.
Why add useless abstractions in the first place?
A leading question if I ever saw one. With a premise that X is useless, of course a language is better off without it. But it’s silly to say that all the language features listed in the article and present in many languages but not Go are simply useless. They are all various levels of useful, with pros and cons, and interactions with the language’s other features.
For example, take Go's simple (ok, err) tuple vs Rust's Result<T, E> generic enum. Rust has tuples too, but they decided to use a higher-end abstraction, with many utility methods and more complicated codegen. But it's IMHO a very useful abstraction that reduces the mental load compared to the Go solution: no need to worry about APIs that can return both ok and err non-nil, and much better ergonomics, including the beloved ? operator. Sum types (generic or not) are great; once you've used them in Rust, you'll wish most languages had them.
I’m not arguing whether Go should add support for enums, just trying to show that good abstractions decrease the user’s mental load, not the other way around.
But it’s IMHO a very useful abstraction that reduces the mental load compared to the Go solution
And furthermore, it provides additional signalling for free: in Go, most erroring functions return either a datum or an error. But not all of them; IO functions notably may often return both.
From time to time a gopher will think it's a ding against Result that you can't do that, but it's actually an advantage: in Go only the documentation tells you the difference, if things are even clearly documented, and it's easy to be lured into a bad habit. In Rust, if you need something like that you'll get an odd return type, maybe a tuple of a result and a value, maybe a bespoke enum, maybe a Result<(Int, Err), Err>, etc… and that tells you that something strange is indeed going on.
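io.Reader is the canonical Go case of this: Read is documented as possibly returning n > 0 and a non-nil err (typically io.EOF) from the same call, and only that documentation tells you to consume the data before inspecting the error. A sketch of the obligatory calling pattern (drain and handle are made-up names):

package main

import (
    "fmt"
    "io"
    "strings"
)

func drain(r io.Reader, handle func([]byte)) error {
    buf := make([]byte, 8)
    for {
        n, err := r.Read(buf)
        if n > 0 {
            handle(buf[:n]) // data first, even when err != nil
        }
        if err == io.EOF {
            return nil // EOF is the normal end, not a failure
        }
        if err != nil {
            return err
        }
    }
}

func main() {
    _ = drain(strings.NewReader("hello, reader"), func(b []byte) {
        fmt.Printf("%q\n", b)
    })
}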
For example, take Go’s simple (ok, err) tuple vs Rust’s Result<T, E> generic enum.
Even more important, IMO, is the difference in correctness here: Go will allow you to just forget to handle an error. It’s the worst of both worlds: more verbose and tedious to use, and less safe.
The reason is to grow the community. Adoption of Go is hindered by the absence of certain expected abstractions. Your definition of simplicity encompasses understanding what will happen under the hood with every line of code. Python refugees will have a different metric of simplicity, which will look a bit more like “how uniform is my codebase?” There is a huge amount of performance headroom in Go to a Python refugee; things that might cost 10% of the performance are probably not even going to be noticed. If you are writing a simple REST server or client, going from Python to Go is a huge improvement, the only cost is that now you have to do the error handling in this irritating way, there are no generics, inconsistent iteration, etc.
Ultimately, the real question isn’t “why do we need these abstractions” but “what is Go trying to be?” Because if Go is about performance and simplicity primarily, you’re absolutely right and these features are motion in the wrong direction. If Go wants to grow and expand its userbase, it will have to be more inviting to people who are not primarily concerned about performance, and those people will be asking for quality of life improvements like they see in other languages, which to you are going to be unnecessary performance-reducing complexities.
While these features may simplify writing code for specific domains […] But we don’t need additional mental load when dealing with production code, since we are already busy solving business tasks.
You mean, you’re already busy implementing by hand something the language could have helped you express better.
I have some sympathy for their position here. And it’s a similar philosophy to e.g. lua. There’s trade offs in both directions, and neither extreme is perfect.
Every piece of software expands until it finally supports sending email, and every programming language grows until it eventually supports template metaprogramming (or something of equivalent expressive power). It's unavoidable if the language grows to mainstream levels of adoption. At some point someone will come along and make "Yup", marketed as "Go but simpler", and the cycle will repeat.
For example, Rust recently started taking over Go's share in the performance-critical space. I believe this trend can be reversed if the core Go team focuses on hot-loop optimizations such as loop unrolling and SIMD usage.
At work I would always hear PMs talk about how software engineers are often “too close” to the problem to really understand it effectively, but I don’t think I’ve run across a good example in the wild until right this moment.
Realizing with a night of sleep that the above was quite a bit meaner than I really intended; I do think I actually agree with the thesis of the article. Go is “the simple but serious industry programming language” and I appreciate what it’s doing as an actual experiment in software engineering processes. I personally don’t enjoy writing Go and am more attracted to languages with fun type capabilities like Rust, but I do think the basic thesis of Go as a language is worth preserving without trying to turn it into Rust-flavored sparkling water.
Some software engineers call Go "boring" and "outdated", since it lacks advanced features from other programming languages, such as monads, option types, LINQ, borrow checkers, zero-cost abstractions, aspect-oriented programming, inheritance, function and operator overloading, etc.
Haskell programmers saw that the language is not Haskell, so I never read about anyone asking for monads in Go.
Option types. They got asked for a lot.
LINQ. Maybe it got asked? I didn’t find it online but I may be following the wrong sources here.
borrow checker. Obviously if you want a borrow checker you won’t ask it from Go since you can get it from Rust.
zero-cost abstraction. Same.
aspect-oriented programming. This one surprised me the most. Who is still using that? And who would ask for that?
inheritance. Maybe it got asked? But at this point most programmers with experience agree it’s a mistake to have it.
function and operator overloading. It got asked for sometimes. It always gets asked for :-)
What surprised me with Go was that it was static but not fully safe. And the excellent quality of a lot of software produced with it.
Also it's interesting to see that fans of Lisp promoted the language as superior for years*. Lisp is one of the most extensible languages, but Go, the antithesis of Lisp, has made A LOT of programmers productive. It's good to challenge one's own prejudices.
*: I love the language too but would not qualify it generically as superior. Superior in some contexts, okay, yes.
aspect-oriented programming. This one surprised me the most. Who is still using that? And who would ask for that?
INTERCAL has great support for AOP. I consider the ease with which a feature can be implemented in INTERCAL as a fairly good benchmark of how bad an idea it is.
inheritance. Maybe it got asked? But at this point most programmers with experience agree it’s a mistake to have it.
Go kinda has weak inheritance in that you can embed structs in other structs and get the embedded struct’s methods as top-level methods on the holding struct. A simple example: https://go.dev/play/p/5y1jztBjApj. And I’m sure that a lot of people (probably correctly) would argue that isn’t true inheritance. I’d mostly agree. But it gets you most of the way there, which is basically the Go ethos for language features. I don’t need AbstractBeanFactorys, and if for some reason I did there’s always Java.
The main property of classical inheritance in programming languages is the ability to access the base class's fields and methods from the derived class's methods, i.e. the derived class may change the base class's behaviour. This is impossible in Go: an embedded anonymous struct does not have access to the struct that embeds it. In other words, embedded structs do not know anything about the embedding struct and cannot modify its behaviour in any way.
This eliminates a whole class of issues with classical OOP, where you cannot say anything about what's going on in the running code by just reading the code of the base class.
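A minimal sketch of what embedding does and doesn't give you (names made up):

package main

import "fmt"

type Logger struct{ prefix string }

func (l Logger) Log(msg string) { fmt.Println(l.prefix, msg) }

type Server struct {
    Logger // embedded: Server gets Log as a promoted method
    addr   string
}

func main() {
    s := Server{Logger{"[srv]"}, ":8080"}
    // Promoted call: really Logger.Log. The Logger has no back-reference
    // to Server, so it can never read s.addr or override Server behaviour.
    s.Log("listening on " + s.addr)
}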
The main reason for using Go over other languages is that it was set up with a specific mindset, which now seems to be chipping away because it's not hip or fancy.
It's also a bit sad that your point, which seems to be "things have trade-offs", is largely ignored, and it sounds like everyone thinks you hate the idea of iterators or generics in general.
But it's a bit expected. I hoped Go would not go down that road and would offer an exception to the rule that languages become bloated over time and end up with very mixed styles of software/libraries depending on when each was created, making it harder and harder to read through other people's code.
It seems like programming languages most of the time end up there, or do a big breaking change, like Python 3, etc.
The typical “solution” is to jump on a new language every couple of years so one has a more minimal language again, and not a new style of programming for every decade or so. Really unsatisfying.
I think it’s also a problem in making decisions. For example take the Go project’s surveys. It’s usually ranking what you miss the most, what’s the greatest burden, etc. But maybe I chose Go, because that’s the trade-off I agree with the most. It feels like people add features to languages until nobody likes it anymore, then everyone jumps to the new language until the same happens. So we end up with dozens, well, hundreds of languages with usually the same bloated feature set.
It’s usually ranking what you miss the most, what’s the greatest burden, etc. But maybe I chose Go, because that’s the trade-off I agree with the most.
If every language gets “bloated” (a word that is criminally overused and I really dislike), then maybe there is a good reason for that? Like, I’m sure you like using libraries that solve a/the problem for you, and they make really good use of a more expressive language.
I hope you don’t mind the longer response too much. I did put things into more words in the hopes of not being misunderstood.
If every language gets “bloated” (a word that is criminally overused and I really dislike), then maybe there is a good reason for that?
Yes, people add features until people abandon the language and switch to something that is still consistent, because it didn't (yet) have a history of added features bringing inconsistencies, ten ways of doing the same thing, etc.
Sometimes then the decision is made to break with the old, which is effectively creating a new language (because your code won’t run anymore). See Python 2 -> Python 3.
But it's not just languages. There are projects that created new major versions removing features intentionally. Even web frameworks, like Express and I think Django, did that too. It's not that nobody used those features anyway; it's accepting that just having more features isn't a good enough reason to keep them. And if you are careful about adding them, be it in the language or through libraries, you invest in the future. Sometimes that's not the goal though. Sometimes Ruby + Ruby on Rails is exactly what you want and need. Is it bloated? I think so. They did sometimes over-extend as well. And often they have to put effort into keeping support, for example through configuration options or specifying when a certain class was created and things like that.
So yes, there are reasons. That's not my point. My point is: why do we have languages that turn from simple to huge, just to be rewritten in their simpler versions over and over? Yes, there are good new concepts spreading sometimes, and there is taste and such, but that's not the only reason. Going for a new feature is always a trade-off. And if you do that too often, the scale at some point reaches a point where people consider it bloated. It's a different amount for different people, and there are surely people who will embrace the new functions, and people who don't mind breaking things by removing the old way of doing things, or don't mind having a million ways to do things. That's all completely fine. I just don't want every language going that way, and it felt like Go wasn't for a while. There were talks by core developers about how Go is pretty much done and how little change outside of the stdlib and general stuff (newer platforms, GC improvements, etc.) should be expected. That changed. Not sure why, but it did, which means a reason for choosing this language in the first place is slowly going away.
Like, I’m sure you like using libraries that solve a/the problem for you, and they make really good use of a more expressive language.
I do my best to avoid using a lot of extra language; I try to keep it to a minimum. I do my best to not have everything be super generic. I try to keep the developer in mind by reducing the language you have to understand. I do that intentionally. I strongly avoid any libraries that add features just for the sake of having them. I look into issues and check whether made-up use cases for features lead to a rejection.
I don't have a problem with people thinking differently about languages, programs, etc. That's why I always argue that people should use languages that fit them, that are made by people with a similar mindset, who make similar decisions on trade-offs, etc. And it's one of the main reasons why there is a need for more than one language. It's not the only one, of course, because languages can also have hard limits/design decisions. But that's not what this discussion is about.
I completely understand why people choose Ruby, Perl, and very expressive languages. They make it easy to express relatively complex things in few "words". But that's not what Go set out to do at all. And if you look into Rob Pike's and Russ Cox's history you'll find that they have a history of writing and using software that was decisively "minimal", that deliberately did not have features that would have been easy to implement and that people could have made good use of.
“bloated” (a word that is criminally overused and I really dislike)
To be fair, yes, it is overused. What I mean by bloated in this context is when things are added not because they fit any project/language/software goals, but because they can be added: maybe because they're trendy, maybe they allow for a cool demo, or maybe they make something easier. Again, I don't say this is somehow invalid, but there are projects that intentionally say no to features even when there is a pull request, it's well implemented, and more than one person is pushing for it.
And a lot of projects start out with clear goals that kind of get watered down until they become generic. And that's then the very same thing people complain about when it becomes too much. That's why you see these effects when reading through code, and why, for example, you can tell when each library was written, because that's when this hip feature and that way of writing software were super popular.
If you stick to a more minimal set you might have to write more, but the code becomes easier to follow, read, and reason about. If there are fewer features, there is of course less stuff that can go wrong. And I think libraries are a good analogy, which is why I try to avoid libraries, especially for simpler things. It depends on how libraries are used though. Every function you add is a function you have to understand in all contexts where it can be used, with all inputs, outputs, and edge cases should they exist. That's mental overhead you want to avoid, especially when things go wrong, also so as not to introduce more problems.
I have to say though, as someone who doesn't just rant about Perl without ever really having used it: you can also use expressiveness to make things simpler to understand. For example, if you have the right words to express logic, instead of a minimal set, you can reduce mental overhead. However, you essentially need a lot more self-discipline not to use it in ways that are easy to write (but hard to read).
So it's really not completely determined by the language. However, the language can push you in a certain direction. But so can, for example, a community. See the C code out there: there is lots of it that even experts struggle with, and then there is code that looks like much of the OpenBSD codebase, which is simple to understand, often through having limited APIs, especially none with edge cases or ways to shoot yourself in the foot, even though those might have made things simpler.
I like using standard Go packages. I even like using the standard Go packages that use generics, such as slices and sync/atomic. They are well thought out, easy to use, and hard to misuse. I usually don't like using third-party generics-based packages, since they are usually overcomplicated. That's why I'd prefer it if generics were limited to writing standard Go packages.
Take a look at Guy Steele's excellent presentation titled Growing a Language. I really disagree with this notion that only standard library developers get to use some features of a language. Much stuff shouldn't go into the standard library but still requires more expressive features.
I love how everything Go does turns me off. How many times have I read something along the lines of "After long discussion, <cool but perhaps advanced feature> was decided to be too complicated for the simple minds we designed Go for, so it was left out of its design".
Not to start an argument, but experience has taught me the opposite. A younger me would perhaps be like, "naw, that's too simple, gimme something powerful", and I would feel clever when using advanced features and be cool doing it.
After a while though, I figured that I’d rather keep it simple and then be smart when I really need it. Because that way I don’t inflict my wise ass ideas onto the rest of the team, and the poor sucker who’s going to be maintaining my crap for the next 19 years.
Furthermore, experience had also taught me that the more concrete the problem I’m solving, the more this holds. If I’m working in a small product team where my changes are deployed in a few weeks, simple practical stuff is good, and even the junior can fix the code directly in production if needed.
On the other hand, in a big corporate team, I am so far removed from any real problem that I’m inventing my own crap and smart ideas and wise assumptions like there’s no tomorrow.
I mean, it's not a problem writing smart stuff. It's when everyone in the team does it, and they're not the same kind of smart.
So it’s always been about the practicality for me. Maybe that influenced my world view.
After a while though, I figured that I’d rather keep it simple and then be smart when I really need it. Because that way I don’t inflict my wise ass ideas onto the rest of the team, and the poor sucker who’s going to be maintaining my crap for the next 19 years.
I like to compare advanced/complex language features to spice. A plain meal is fine and edible, but perhaps a bit boring; throwing in a bit of spice can enhance the dish; a world-class chef can push the boundary and use more spice in the same dish than someone at home could, but without ruining it; and adding too much spice, whether done by a novice or a chef, makes the food inedible and leaves behind only pain and regret.
Functions, arrays, structs, and loops are the meat and potatoes of programming; generics, macros, async, closures, reflection, dependent types, etc. are the spice. A small sprinkling of those advanced features can improve a code base; very good programmers are able to combine more advanced features without creating a disaster; but if we use too many features, we just create pain for anyone who wants to understand and maintain our code.
The usual actual criticism is that Go is not simple but rather is simplistic. And the fruit of hard-earned experience is the understanding that simplistic is not as practical as it first seems – simplistic programming often boils down to “I got the wrong answer, but I sure did get it quickly and easily!”
That's not true in my experience with Go. It has a very good balance between simplicity and usability. I enjoy writing programs in Go. I enjoy maintaining and extending large codebases in Go. I enjoy the ease of reading and understanding others' code in Go.
I’m afraid that generics, generators and other “advanced” features will complicate Go too much, so it will become yet another bloated programming language with many ways to write unreadable and unmaintainable code.
Years ago I was reviewing a patch someone had submitted to a project I worked on. One part of the functionality was resetting the sequence objects that yield auto-incrementing primary key values in a DB.
The patch implemented this by hard-coding a “big” (but not actually that big) number and just always setting the sequence to that value.
This is “simple” and even “practical” in the sense that it actually does work for a lot of tables. You could potentially use this for a long time and never run into a problem, and all the while you could sneer at people who insisted that you needed a more complex “bloated” solution.
Every time I look at Go, it reminds me of that patch. There are just so many choices in it that opt for superficial “simplicity” and for sweeping all the complexity and edge cases under the rug. The infamous “I Want Off Mr. Golang’s Wild Ride” gives a few examples of this, but I’ll pile on another: Go’s allegedly “simple” error handling, which ends up being so lacking for real-world use cases that Go error handling is as complicated and fractured as people like to claim packaging in Python is.
Not only does Go’s approach not yield actual simplicity (ask five Go programmers how they handle errors and you’ll get twenty different suggested techniques and libraries), it doesn’t even avoid most of the issues people point out with alternatives like exceptions. For example, every serious approach/library for Go does some sort of wrapping of errors every time they’re encountered, which means that when an error occurs you’re paying a compute and allocation cost in every stack frame between the original error and whatever code stops propagation, just as you would with a try/catch in a language with exceptions. So in the name of “simplicity” Go ends up being more complex than it needed to be – you still pay the costs of exception or exception-like strategies, but without the consistency and clarity of having one obvious way to handle things. It’s a very penny-wise/pound-foolish thing.
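To make the cost concrete, this is the wrap-at-every-frame pattern in question (a hedged sketch; loadConfig and startServer are made-up names): each %w wrap on the propagation path allocates a new wrapper, much like unwinding through stack frames.

package main

import (
    "errors"
    "fmt"
    "os"
)

func loadConfig(path string) ([]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("load config %q: %w", path, err) // wrap #1
    }
    return data, nil
}

func startServer(path string) error {
    if _, err := loadConfig(path); err != nil {
        return fmt.Errorf("start server: %w", err) // wrap #2
    }
    return nil
}

func main() {
    err := startServer("/nonexistent.conf")
    fmt.Println(err)                            // the accumulated context chain
    fmt.Println(errors.Is(err, os.ErrNotExist)) // true: the chain is preserved
}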
I’m sorry, but I didn’t understand the example with the sequence object.
As for error handling in Go, I agree that it may be tedious deciding how to deal with returned errors: whether to handle them in place or return them to the caller. If the error is returned to the caller, you need to decide whether to return it as is or to wrap it in another error with additional context about the conditions and the location from which it is propagated. This additional context can help in understanding the conditions that led to the error, e.g. it simplifies debugging in production.
Other programming languages "solve" error-handling complexity in two ways: either via exceptions or via easy-to-use syntactic sugar that proxies the error from lower functions to the caller.
The main problem with exceptions is that exception safety is almost impossible to achieve. Almost all code written in a language with exception support contains literally hundreds of bugs which trigger when some rare error occurs. See this article for details.
Programming languages that simplify proxying the error from lower functions to the caller via syntactic sugar encourage returning all errors to the caller without thinking about whether a given error must be handled right now instead of being propagated. This may result in less robust code that doesn't handle some errors well, compared to code in Go.
So, proper error handling is hard. Some programming languages encourage writing code that executes without issues on the happy path and breaks with hard-to-debug issues on rare errors. Go forces you to think more about proper error handling. Hopefully, this leads to more robust, easier-to-debug programs.
I’m sorry, but I didn’t understand the example with the sequence object.
Imagine you have a database table, and the primary key of that table is an integer that should be incremented for each row. So the first row inserted will get a primary-key value of 1, the next will get a value of 2, etc.
A sequence is a database-level object which yields the incrementing values. Databases provide this functionality because they can implement it in a transaction-safe way (i.e., even if two pending insert transactions can’t see each other, the sequence object can ensure they each receive distinct values).
After some types of database modifications/manipulations, you will want or need to “reset” the state of one or more sequences. The right way to do this is to issue a query to find the highest in-use primary-key value in the table, then set the sequence’s state to yield a value higher than that. The wrong way, which is what that old patch did, is to say “eh, 10000 is probably high enough that nothing’s using a higher value, we’ll set the sequence to that”. If the table in question already had more than 10000 rows inserted, this will make the sequence re-issue previously-used primary-key values, which will cause integrity errors when trying to insert new rows.
The main problem with exceptions is that exception safety is almost impossible to achieve. Almost all code written in a language with exception support contains literally hundreds of bugs which trigger when some rare error occurs. See this article for details.
The examples given in that article are not avoided by Go’s error handling. In Go it is equally possible to have an error occur in the middle of a sequence of operations, and to have it occur in a way which produces an invalid partially-completed state for that sequence of operations. Uninitialized fields, for example, are kind of an infamous gotcha in Go, and Go’s approach to errors makes it easy to end up with them. The one advantage exceptions have is that they immediately break the control flow and propagate themselves until caught – Go’s errors do not do this, so you can easily and dangerously keep going after accidentally failing to notice a non-nil err value, resulting in inconsistent or incorrect states that are hard to debug and diagnose.
Which gets back to my point: the supposed simplicity from Go’s approach does not materialize. Instead it is simplistic, and actually ends up introducing more complication than the supposedly “complex” and “bloated” alternatives would have.
Thanks for the description of the issue with the sequence object!
As for error handling in Go, it is trivial to notice and fix an unhandled or improperly handled error by just reading the Go code. This is almost impossible to do when reading code written in a programming language with exceptions. See another article, which explains this in more detail.
Uninitialized fields, for example, are kind of an infamous gotcha in Go
That’s not true - Go always initializes all the fields and variables to zero values, contrary to C or C++. See this playground example.
Adding to what ~ubernostrum said, if variables are default-initialized when there isn't an explicit initializer, then the compiler cannot warn you that you forgot to initialize them.
In my experience (in C) it’s almost always the case that I either have an explicit initializer for a variable, or I have some complicated control flow following the declaration to work out what its value should be. In the complicated case, it’s really helpful if the compiler can tell me when I missed a branch.
An alternative solution (like Rust's) is to always require an explicit initializer, and to allow expressions to contain complicated control flow. This is probably better than Golang or C – I find I have a stronger dislike for sprawling initializer expressions in Rust than for divergent control flow in C, and that dislike helps me to keep things simple.
Automatic initialization of variables and struct fields to zero values provides the following good properties in Go:
it eliminates bugs related to missing initialization (like in C).
It reduces the amount of code needed for initialization to zero values (variables and struct fields need to be initialized to zero most of the time).
It allows using zero field values as defaults in large config structs, so users of these structs need to fill in only the few non-default fields they care about, as sketched below.
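A sketch of point 3 (a hypothetical config; the comments state the assumed zero-value meanings):

package config

import "time"

// Callers fill in only what they care about, e.g.
// NewServer(ServerConfig{Addr: ":8080"}), and every omitted field's
// zero value is a meaningful default.
type ServerConfig struct {
    Addr        string        // "" means listen on all interfaces
    ReadTimeout time.Duration // 0 means no timeout
    MaxConns    int           // 0 means unlimited
}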
it eliminates bugs related to missing initialization (like in C).
As mentioned by others, it only makes it deterministic/memory safe, it doesn’t make it a valid representation of state. Say, you have a date, all of its fields are 0 - is that a valid date? Probably not, depending on how you encode it.
As for error handling in Go, it is trivial to notice and fix an unhandled or improperly handled error by just reading the Go code. This is almost impossible to do when reading code written in a programming language with exceptions. See another article, which explains this in more detail.
I think you should read the articles you keep linking. They are mostly concerned with “what if an error happens in the middle of these operations”, which is a real problem but is not a problem unique to exceptions and not a problem that Go magically fixes for you. It is super-duper easy to write bad wrong Go code that’s full of bugs related to partially-completed operations!
And your latest link even admits, as a cop-out at the end:
Yes, there are programming models like RAII and transactions, but rarely do you see sample code that uses either.
That was written nearly 20 years ago. There are programming languages which have RAII as a syntactic construct now. People know about transactions now. Tutorials and reference documentation use these features now.
If you get to cite a 20-year-old article on how not enough sample code uses these things, I get to say Go is impossible to use because 20 years ago it didn’t exist yet.
That’s not true - Go always initializes all the fields and variables to zero values
When I say “uninitialized” in the context of Go, do not interpret that as “this struct will contain pointers to uninitialized memory”. Interpret it as “this struct will contain fields set to whatever Go’s defined zero-value is for those field types”, which is a bit of a mouthful to say over and over again.
And this is a real problem and a real source of gotchas in Go, because in any error-handling model based on disrupting control flow (whether by throwing an exception in other languages, or by an early return with a non-nil error in Go) it is entirely possible to wind up with a struct in the wrong state, and difficult to tell that you have.
Suppose there’s an “open bank account” function which creates a new BankAccount struct and optionally applies a deposit to it. I am looking up a newly-created account, and its balance is zero. How can I tell, from looking up that account, whether its balance is zero because the customer did not make a deposit, or because the deposit processing failed partway through and left the BankAccount with only its default Go-zero-value for the balance?
This is the downside of having the struct always be “valid” – you gain the ability to persist or pass around things that you shouldn’t be able to persist or pass around, and get your system into an inconsistent or even broken state.
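A hypothetical sketch of that bank-account scenario (persist and applyDeposit are stand-ins for real storage and payment calls):

package bank

type BankAccount struct {
    ID      string
    Balance int64 // cents; zero value: "no deposit" or "deposit failed"?
}

func openAccount(id string, deposit int64) (*BankAccount, error) {
    acct := &BankAccount{ID: id}
    if err := persist(acct); err != nil {
        return nil, err
    }
    if err := applyDeposit(acct, deposit); err != nil {
        // Early return: acct has already been persisted with its
        // zero-value Balance, indistinguishable from an account that
        // was deliberately opened with no deposit.
        return nil, err
    }
    return acct, nil
}

// Stubs so the sketch compiles; imagine a database and a payment system.
func persist(a *BankAccount) error               { return nil }
func applyDeposit(a *BankAccount, n int64) error { a.Balance += n; return nil }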
And don’t tell me the answer is to run everything in a transaction and roll it back – the article you linked does not allow transactions as an answer!
The main problem with exceptions is that exception safety is almost impossible to achieve. Almost all code written in a language with exception support contains literally hundreds of bugs which trigger when some rare error occurs
Come on, this is just dishonest; not even you believe that. Surely Go handles error cases better with the million randomly placed empty if err blocks, or printing a random word for an error, not even getting a stack trace, having to grep for the error string in the source code…
Exceptions are great because they make it harder to just silently swallow errors (the worst outcome of all), unlike Go's C-inherited errno style.
After a while though, I figured that I’d rather keep it simple and then be smart when I really need it
That’s a correct approach - but you can’t achieve that with Go, that’s the problem. Writing a library - as opposed to a concrete program - simply requires more expressive code and Go under-delivers here.
That’s not true from my experience. I wrote numerous open-source Go packages. Some of them are relatively popular (see here and there). But I never felt the need to use generics in these packages.
I literally just went through one where I had to retract much of my “cleverness” due to (valid, after thinking about it) pushback, in most cases.
I’m in Elixir btw, which seems to occupy this magical space of both “is accepted as a language people get actual work done in” and “is a functional language”.
Go was designed for people who aren’t yet programmers (writing Python at university does not make you a Python programmer). The oft-shared Pike quote:
The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
I mean that still makes sense in that it was for both new grad Google employees as well as existing Google employees (who’d have to stomach a ‘simpler’ language). Given that Google was always heavily weighted towards Python+Java.
I write performance-oriented software in Go. It works fast, sometimes even faster than the corresponding software in Rust (see this case). Go allows writing fast yet simple code. The only thing that bothers me is that it doesn’t optimize hot loops in any way. It doesn’t unroll them and it doesn’t vectorize them. So the only way to get top performance out of hot loops is to write them in optimized assembly. But this has two drawbacks:
It isn’t trivial to write code in assembly
You need to write different assembly code for different CPU architectures and platforms.
It would be great if the Go compiler could do this work itself.
I don’t believe writing asm for Go is especially difficult compared to other asm; it’s mostly the writing-asm part itself which is the big step up in complexity compared to not doing so.
With the caveat that Go uses a bespoke pseudo-ASM, which is separate from the architecture’s own standard assembly but does not entirely abstract it.
Assuming Rust actually is taking some of Go’s project share (source?), I’d say it’s more about Rust becoming easier to use (language features, ecosystem…) and more mainstream than about Rust becoming faster than Go.
Rust and simplicity in the same sentence look unreal :) Could you provide a few Rust vs Go code snippets which show the advantage of Rust code over Go code from the simplicity PoV?
As kivikakk said, “easier to use” is not the same as “simpler”, just like using a complex washing machine is easier than washing by hand.
In the programming world, a useful way to look at things is that each task+constraints has an inherent amount of complexity, and that the only choice you have is where that complexity should live. It gets pushed from the language to the stdlib to the 3rd party libs to the top-level code to the end-user. If your language handles more of the complexity, your code doesn’t have to. Exposing “only as much complexity as needed” is hard, there is no best answer. Go and Rust have taken very different routes, but they’re both great languages.
Another misunderstanding is that I said Rust is getting simpler to use than previous Rust versions, not necessarily simpler than Go. The language and stdlib get QoL improvements, the borrow checker gets smarter, the surprising limitations get lifted, the tools get refined, the crates get more numerous and high-quality, etc. Rust has a reputation for being hard, but it’s not as hard as it used to be. The “Prefer Go because Rust is too hard” argument is still strong, but getting weaker.
As for Rust code actually being simpler than Go code, there are some small examples like let res = fallible()? vs res, err := fallible(); if err != nil { return nil, err } or *counter.lock().unwrap() += 1 vs counter_mu.Lock(); counter += 1; counter_mu.Unlock() (both simpler to use and harder to misuse), but toy examples like these are arguably not the most interesting comparison.
I don’t believe the latter is true in Go. The Go implementation of generics uses dynamic dispatch for everything. They could add some monomorphisation, but you can also monomorphise functions that take interfaces and they haven’t done that either.
I do find generics useful, though it would be nice if they could be used in methods as well.
I’m not especially excited about the new iterators thing though. Seems complex and funky to me. I can only hope it will end up being better in practice than I imagine it will be.
Both generics and iterators are useful for some applications. But they have costs related to the increased complexity of both the programming language itself and the code that uses these features in inappropriate places.
Frequently it is better to stop adding new features and instead focus on polishing existing strong features.
I think your take is a breath of fresh air. I’m also rather annoyed that languages keep bolting on features from every other language out there, even if it doesn’t fit the language (although personally, I lean more towards powerful languages, used with restraint, but that requires a very disciplined dev team).
It sounds like iterators could’ve been a great fit with Go if designed in from the start, perhaps in a more explicit way so it’s clear where the function calls are happening. It would’ve reduced the inconsistent mess of different standard iterator functions they describe, and there wouldn’t be two ways of doing things.
As someone who was firmly in the “I will not touch a language without generics” camp, I must admit that I kinda like the current state of generics: the warts push towards not using them if possible, but they are there if needed.
… and people continue asking for the missing parts of Go generics. BTW, iterators are a good example of such a part - they allow iterating over generic types in a unified way. I bet the next generic-related thing to be added in Go is generic methods on generic types. The next one is the ability to specialize generic types and generic functions. The next one is function overloading. And so on. Just look at the history of C++.
I’m curious why Go didn’t go with external iterators? A type satisfying a Next() interface could be passed to range. This is what I think I wanted, but I’m sure there’s a reason why it isn’t like this.
Internal iteration works better with defer, while with external iteration you’d have to instantiate the iterator, defer a cleanup function, then actually do the iteration.
Go does not currently have language-level interfaces or magic methods.
Go does not support genericity over return arity, which you’d need to handle both fallible and infallible iterators.
External iteration would make range’s desugaring more complex as it would need to perform introspection of user defined method sets, rather than direct dispatch on builtin types (this overlaps with issue 1).
Internal iteration, while less flexible, is also much easier to optimise, especially when not doing much optimisation at all: you inline functions and end up with basically merged bodies, which you can optimise relatively easily.
In fact, while Rust originally used internal iterators and switched to external for a multitude of reasons back in 2013, it later reintroduced opt-in internal iteration via try_fold, specifically for the purpose of optimisation: https://medium.com/@veedrac/rust-is-slow-and-i-am-the-cure-32facc0fdcb
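For concreteness, a sketch of the two styles (the push-style function is real Go 1.23 range-over-func syntax; the external Next-based pump loop is the hypothetical alternative range would have had to desugar to):

    package main

    import "fmt"

    // External style (hypothetical): range would have to introspect and
    // dispatch on a user-defined Next method.
    type countdown struct{ n int }

    func (c *countdown) Next() (int, bool) {
        if c.n == 0 {
            return 0, false
        }
        v := c.n
        c.n--
        return v, true
    }

    // Internal (push) style: what Go 1.23 actually adopted - the loop body
    // becomes the yield callback handed to the iterator.
    func countdownSeq(n int) func(yield func(int) bool) {
        return func(yield func(int) bool) {
            for i := n; i > 0; i-- {
                if !yield(i) {
                    return
                }
            }
        }
    }

    func main() {
        // External: an explicit pump loop.
        it := &countdown{n: 3}
        for v, ok := it.Next(); ok; v, ok = it.Next() {
            fmt.Println(v)
        }

        // Internal: range over the function.
        for v := range countdownSeq(3) {
            fmt.Println(v)
        }
    }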
I think one area where the proposal and discussion around it did themselves some damage is that “parameterized” iterators are all implemented using a closure pattern. But in reality you could just as easily implement an ancillary iterator object storing the parameters which would then expose an iterator method. It’s a bit longer (as you need to create the bearing struct) but it avoids nesting and provides for more local reasoning.
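A sketch of that alternative shape (names invented): the parameters live on a small struct instead of being captured by nested closures, and the struct exposes the iterator as a method.

    package main

    import (
        "fmt"
        "iter"
    )

    // Evens is the ancillary iterator object: Limit is stored as a plain,
    // inspectable field rather than being captured in a closure.
    type Evens struct {
        Limit int
    }

    // Values is the iterator method exposed by the carrier struct.
    func (e Evens) Values() iter.Seq[int] {
        return func(yield func(int) bool) {
            for i := 0; i < e.Limit; i += 2 {
                if !yield(i) {
                    return
                }
            }
        }
    }

    func main() {
        evens := Evens{Limit: 10}
        for v := range evens.Values() {
            fmt.Println(v)
        }
    }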
I appreciate the gigantic amount of research and work Russ Cox did over the last two years for adding iterators to Go. But it would be much better for the Go ecosystem if he had spent this effort on optimizing hot loops in Go instead. Simply admit that adding generics and iterators to Go was a mistake and start working on optimizing hot loops.
Thanks to Russ Cox, Go has a sane module management system (aka go mod), which works great most of the time. Thanks to Russ, we have RE2 regular expressions based on deterministic finite automata (DFA). (BTW, it would be great to optimize the regexp package in Go, since it is quite slow compared to competitors.)
But it would be much better for the Go ecosystem if he had spent this effort on optimizing hot loops in Go instead. Simply admit that adding iterators to Go was a mistake
That’s literally just like your opinion.
Thanks to Russ, we have RE2 regular expressions based on deterministic finite automata (DFA). (BTW, it would be great to optimize the regexp package in Go, since it is quite slow compared to competitors.)
The latter is a natural consequence of the former. The draw of DFAs is that they have very deterministic performance (unless you explode the DFA size).
and start working on optimizing hot loops.
The cool thing about that is… it probably does not require complicated language-level changes and buy-in, so you could work on that.
All your considerations seem to be rooted in the idea of Go being a high-performance language, but it’s never been that. It’s always been a “probably fast enough” language. And it’s notably always put compilation speed first, which complex optimisations like auto-vectorisation definitely go against.
The latter is a natural consequence of the former. The draw of DFAs is that they have very deterministic performance (unless you explode the DFA size).
A DFA must be faster than an NFA all the time. The issue is that the regexp package in Go is written in an unoptimized way. This was OK for the initial implementation added to the standard library before the first Go release. But it isn’t OK now, since the regexp package is actively used by Go applications.
You’re right, the difference I meant to make is FSM versus backtracking.
The latter is subject to more edge case failures, notably exponential backtracking, but there are also relatively common cases where a backtracking engine will generally beat an FSM.
In my experience, FSM engines also suffer greatly as soon as captures are involved, much more so than backtracking engines; this I observed repeatedly on both re2 and Rust’s regex.
With the backwards compatibility guarantee in mind, there are only two options: development stops, or features are added. Therefore, there is no right direction to evolve in, according to the author.
I was skeptical of generics, but then I did a project where they saved me a ton of code and simplified things tremendously. I am skeptical of the iterators too… We’ll see.
I was working with a big and complex graph model that contained over a hundred types of entities. For reasons that are difficult to explain here, I wanted to implement them all as different Go types and have the option to instantiate a full set of CRUD endpoints for a type with a single line of code. This could likely be achieved with interfaces alone, but now I can do it simply, like this:
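The original snippet didn’t survive here; presumably it looked something like the following (a hypothetical reconstruction - every name except NewEntityAPI, which the replies below discuss, is invented):

    package main

    import "net/http"

    // Widget is a stand-in for one of the hundred-plus entity types.
    type Widget struct{ ID, Name string }

    // EntityAPI is a stand-in for the generic CRUD handler type; the real
    // one reportedly runs a little over 350 lines with its methods.
    type EntityAPI[T any] struct{}

    func NewEntityAPI[T any]() *EntityAPI[T] { return &EntityAPI[T]{} }

    func (a *EntityAPI[T]) ServeHTTP(w http.ResponseWriter, r *http.Request) {
        // Dispatch on r.Method to generic create/read/update/delete code.
    }

    func main() {
        // The advertised one-liner: a full set of CRUD endpoints per type.
        http.Handle("/widgets/", NewEntityAPI[Widget]())
        http.ListenAndServe(":8080", nil)
    }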
So the NewEntityAPI function instantiates the Thing in some trivial way such as var x Thing; return &x. While this saves a single line of boilerplate code per NewEntityAPI call, it doesn’t look like a significant improvement worth adding generics to Go.
If the Thing implements some interface such as http.Handler, then pure interface-based code would be simpler to work with than the generic-based code.
If you have hundreds of different types with different functionality, this means you already have a lot of custom non-generic Go code implementing all this functionality. Saving a line of code per NewEntityAPI call with generics doesn’t look like an improvement in this case.
So the NewEntityAPI function instantiates the Thing in some trivial way such as var x Thing; return &x.
Not really. I just checked: EntityAPI is a type that, combined with its methods, runs a little over 350 lines.
When reviewing code, it is better to see the actual code and understand its goals. I cannot share that here, but if you are really interested, I am happy to set up a call and walk you through it.
With the backwards compatibility guarantee in mind, there are only two options: development stops, or features are added. Therefore, there is no right direction to evolve in, according to the author
The right direction is to improve existing strong sides of Go:
Simplify and optimize Go tooling
Teach the Go compiler to generate higher-performance binary code
Reduce the generated binary sizes
Add simple quality-of-life features which don’t complicate the Go language specification, have no non-trivial implicit side effects, and don’t increase the complexity of the Go code that uses them. For example, this one.
Another area that would be worth exploring would be the “deprecation of stuff”.
There is already a way to mark some code as deprecated, but I think the tooling around could be improved (especially if a replacement is available). https://github.com/golang/go/issues/50847
Having a way to ensure that new development does not use deprecated stuff would be very valuable, I think.
I thought we were talking mainly about the language specification and the standard library. Even if the new feature is much simpler, adding it while keeping support for the old way increases complexity. As explained with the iterator example:
Again, this sounds legit — to have a unified way to iterate over various types in Go. But what about backwards compatibility, one of the main strengths of Go? All the existing custom iterators from the standard library mentioned above will remain in the standard library forever according to Go compatibility rules. So, all new Go releases will provide at least two different ways of iterating over various types in the standard library — the old one and the new one. This increases Go programming complexity,
I don’t think that holds much water. Previously if you used something which should be iterable you’d go to the docs thinking “now does this thing implement some form of iteration and if so how”, hunting for something looking vaguely like what you want with no idea as to its shape. With rangefuncs, your first thought should just be to look for that, and I guess slices as a fallback. And to a reader, iteration looks uniform instead of needing to recognise the iteration pattern for that specific API.
The argument against improving iterators because they’ll have to carry the older ones forever is uncomfortable for me. It seems to reduce to “get it right first time”. Perhaps they need a stronger way of signalling “never use this function” in the API and documentation?
Even if you use an iterator which cannot return errors, the resulting for … range loop looks less clear than the old approach with the explicit callback. Which code is easier to understand and debug?
tree.walk(func(k, v string) {
    println(k, v)
})

for k, v := range tree.walk {
    println(k, v)
}
What’s different is that the for loop version supports continue, break, goto, defer, return, while the first version doesn’t. You even mentioned that in the post. Yet you provide an example utilizing none of these features — an example that did would have shown the complexity of achieving them with plain closures.
Now try debugging both versions of the code with control flow statements. For example, if the for ... range loop contains a return something statement, that statement is implicitly converted into a non-trivial return from an implicitly created anonymous function, which implicitly passes the mustStop flag together with the returned values to the implicitly created loop body, which returns the actual values to the caller of the outer function containing the for ... range loop. Sounds easy to track and debug, doesn’t it? :)
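A hedged, simplified sketch of what that rewrite amounts to (the real transformation in cmd/compile is more involved; Tree and walk are invented names):

    package main

    import "fmt"

    type Tree struct{ kv map[string]string }

    // walk is a Go 1.23 "push" iterator: it calls yield until told to stop.
    func (t Tree) walk(yield func(k, v string) bool) {
        for k, v := range t.kv {
            if !yield(k, v) {
                return
            }
        }
    }

    // What you write:
    //
    //	func findDirect(t Tree, want string) (string, bool) {
    //		for k, v := range t.walk {
    //			if k == want {
    //				return v, true
    //			}
    //		}
    //		return "", false
    //	}
    //
    // Roughly what the compiler turns it into:
    func findDesugared(t Tree, want string) (string, bool) {
        var (
            found bool
            val   string
        )
        t.walk(func(k, v string) bool {
            if k == want {
                val, found = v, true
                return false // the implicit mustStop: break out of the loop
            }
            return true // keep iterating
        })
        if found {
            return val, true
        }
        return "", false
    }

    func main() {
        t := Tree{kv: map[string]string{"a": "1"}}
        fmt.Println(findDesugared(t, "a"))
    }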
I am curious, do you have the same debugability issues with defer? I think it used to be implemented in a similar way, by wrapping code into a closure which is pushed onto the runtime stack of closures to be executed on function exit?
That is, the level of semantic and implementation indirectness seems pretty similar between defer and the new for.
That outer func is not something you write in the source code, but rather something that the compiler adds for you. This feels similar in shape to the transformation that for does.
I think it’s okay for languages to evolve features that you don’t need or won’t use. And I think we should allow for that. In the past, PLs were seen as toolboxes for problems, where PL features are included so that use of a feature would be reasonably standardized, warts and all. Every codebase of sufficient size ends up developing a mini-dialect of its own. It’s part of the inessential complexity of programming in the large. No PL can absorb all of it well. Even C, as bare as it is, suffers from this, mostly in the form of macro abuse. Lisp-like languages are just 100% honest about it and grant maximum power.
Because of that, I’m sympathetic to the slow accretion of features in even minimal PLs, drawing from common idioms and pain points.
After the initial success of the design, the language is now being used in situations not envisioned by its creators. This happens when you have a large and heterogeneous community, a consequence of success. I think the average Go developer is discovering they need OCaml, but their brains and their companies are not prepared for such a transition; therefore it is better to push Go in the OCaml direction.
It is better to switch to OCaml now instead of demanding some esoteric feature in Go, then waiting five years until it is implemented in a partial, hard-to-use form, and then waiting another ten years until this feature becomes useful in Go (by which time Go will be more complex than C++).
it becomes harder to understand what’s going on by just reading the code;
it becomes harder to debug such code, since you need to jump over dozens of non-trivial abstractions before reaching the business logic
Giving developers the tools to opt into and use features and abstractions isn’t a bad thing. If you end up creating a mess of a codebase because of additional language features that’s your problem.
We’re talking about table-stakes features for every modern programming language here.
Using C++ is a great argument against scope creep (languages exist on a complexity spectrum). But IMO this isn’t really a strong argument against adopting something like generics; this attitude just strikes me as fear of complexity to the point of hindering the language.
Containers are practical in C++ and Java. Containers aren’t needed in Go most of the time thanks to slices and maps. These dead simple built-in data structures cover the majority of cases where you’d use some non-trivial data structures in C++ or Java.
I have been writing non-CRUD Go code for 12 years (see my profile on GitHub) and have almost never used generic data structures in Go other than the built-in slices and maps. The only exception is container/heap, which is very useful for implementing n-way merge.
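For illustration, the classic n-way merge shape on top of container/heap (a sketch with invented names, merging sorted int slices):

    package main

    import (
        "container/heap"
        "fmt"
    )

    // cursor tracks the remaining elements of one sorted input.
    type cursor struct{ s []int }

    type mergeHeap []cursor

    func (h mergeHeap) Len() int           { return len(h) }
    func (h mergeHeap) Less(i, j int) bool { return h[i].s[0] < h[j].s[0] }
    func (h mergeHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
    func (h *mergeHeap) Push(x any)        { *h = append(*h, x.(cursor)) }
    func (h *mergeHeap) Pop() any {
        old := *h
        x := old[len(old)-1]
        *h = old[:len(old)-1]
        return x
    }

    // mergeSorted repeatedly takes the smallest head element among inputs.
    func mergeSorted(inputs ...[]int) []int {
        h := &mergeHeap{}
        for _, s := range inputs {
            if len(s) > 0 {
                *h = append(*h, cursor{s})
            }
        }
        heap.Init(h)
        var out []int
        for h.Len() > 0 {
            c := (*h)[0]
            out = append(out, c.s[0])
            if len(c.s) > 1 {
                (*h)[0].s = c.s[1:]
                heap.Fix(h, 0) // head advanced; restore heap order
            } else {
                heap.Pop(h) // this input is exhausted
            }
        }
        return out
    }

    func main() {
        fmt.Println(mergeSorted([]int{1, 4, 9}, []int{2, 3, 8}, []int{5, 6, 7}))
    }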
Of course, I create various custom data structures and containers in my Go programs. But these data structures and containers aren’t generic - they solve the given task in the simplest and most efficient way, without the unnecessary abstractions and bloat that generic solutions usually entail.
My reservations about Go aside, it’s a good point. For a language whose mission is to oversimplify, these examples of evolution seem to go against that mission.
It would be cool if Go decided to change its mission. But to do that really well, you’d have to break backward compatibility…
Weird take. Obviously you could do everything in untyped lambda calculus, but that sucks. Likewise generics significantly reduce the need to use the any type, eliminating most remaining untype checked code. Adoption is low because it’s used with restraint.
There’s absolutely zero doubt that generics are useful. Literally everyone agrees with that!
What is also true is that generics have costs, across many dimensions. I think it’s also true that literally everyone agrees that generics are non-free.
What is curious though, is that it seems that there’s a sizable population who don’t acknowledge that cost. Which looks like:
Person A: generics are useful for X but they have costs C.
Person B: no, generics are useful for Y.
Another observation here is that the overall type system complexity is a spectrum. And this tradeoff-vs-zero cost dialog tends to repeat at any point on this spectrum during the language evolution.
EDIT: the canonical example of this effect, a couple of notches up the expressiveness scale: https://github.com/fsharp/fslang-suggestions/issues/243#issuecomment-916079347
I have very mixed feelings about this.
Taking Rust as an example, I think the complexity of the trait system (type classes with type projections, including type function projections) is necessary complexity to make Rust’s attempt at safe, “zero-cost” concurrency viable. On the other hand, I can’t deny that some of the way I see the trait system used in the ecosystem (and even, unfortunately, in certain recent, dubious, as-yet-unstable additions the standard library) is awful library design and obviously the sophomoric obsession with “expressive type level programming” that Don Syme alludes to. I try to avoid depending on crates like this as much as I can, and my own code rarely ever involves defining new traits.
But how can we tame this jungle of expressiveness without losing the ability to prevent classes of bugs via automated, formal program analysis? One option perhaps is to have a clear distinction between the “application language” and the “systems language,” allowing you to omit features in each and create a simpler language in each category, whose features together combine via sum (application language + systems language) instead of product (application features * systems features).
Barring that, I would say that there is always the informal technique of culture development so as to avoid the use of terrible type programming. In other words, I want a language which can express typenum, with a community of practitioners who are wise enough never to actually use typenum. I don’t see why that should be beyond us.
My gut feeling is that there’s still room for improvement language design wise, that we are still pretty far from expressiveness-complexity Pareto frontier.
That’s very unsubstantiated feeling, but two specific observations make me feel that way:
One possible mitigation is culture and idiom. Maintainers have a lot of soft power here. They write the reference documentation that people read when onboarding to the language. Some slogan about “concrete is better than abstract” could do a lot of work (it would probably be invoked legalistically in many cases, but that seems like the lesser evil to me). I think culture does a lot of work to keep Go code simple even since generics have been released.
I can somewhat easily envision someone writing Rust closer to how they would write C. I could even see the Linux project implementing guidelines for Rust that aim to keep it simple, and maybe other projects adopt those guidelines. Maybe they get enough traction to influence Rust culture more broadly?
We could probably do a better job with the language we use to describe some features. For example, calling a feature advanced implies that its users are not beginners. If you instead call a feature niche, you imply that (1) it is not meant to be commonly used and (2) people do not need to understand it to be productive.
I’m confused about the distinction between “everyone agrees that generics are non-free” and “there’s a sizable population who don’t acknowledge that cost”. If there are people who don’t acknowledge the cost, doesn’t that imply disagreement that generics are non-free?
Aside from that confusion, I agree. In many of the debates I’ve witnessed about Go and generics over the years, the rabidly anti-Go people (who have always been extremely numerous, i.e., I’m not exactly nut-picking) have typically refused to concede any downside to generics whatsoever. Similarly, they’ve refused to acknowledge any utility in standardization (“most Go code is written in a very similar way”). And in fairness, I didn’t really think much of the standardization property before I used Go in earnest either, but then again I didn’t go around arguing decisively against it as though I had some kind of expertise with or perfect knowledge about it.
By contrast, within the Go community there has usually been a pretty robust debate about costs and benefits. Marginally few people would argue that generics confer no benefit whatsoever, for example.
I might be using wrong words here, so let me make this more specific. Let’s take a simple statement “generics make compiler harder to implement”. I can see people expressing the following positions:
From my observation of such discussions, I think position 4 is puzzlingly prevalent. Well, maybe I am misunderstanding things and that’s actually position 1, but I find that unlikely. But also that’s squishy human theory-of-mind stuff, so I might be very wrong here.
I think part of the disconnect here is that apparently you’re very concerned about complexity in the compiler. I didn’t realise that was your concern until this comment.
Obviously a slow, buggy compiler is bad, but I generally view the role of the compiler as eating complexity off the plate of the programmer; instead the costs I’m thinking about are those that fall on the users of compilers.
No, I am not very concerned about implementation complexity. I use this as the first example because:
As I’ve written in my other comment (and, before that, at https://matklad.github.io/2021/02/24/another-generic-dilemma.html), my central concern is the ecosystem cost in the form of more-complex-than-it-needs-to-be patterns and libraries. And, to repeat, Go is an interesting case here because it is uniquely positioned to push back exactly against this kind of pressure.
I guess I don’t see the costs. You can still use untyped or first-order-typed code. Explain the costs like I’m five?
Notably, Go generics don’t allow for compile-time computation, unlike ML derivatives.
One cost to the “generic” generics I am very well aware of. I was at JetBrains when support for Rust and Go was added to the platform, what was later to become GoLand and RustRover.
For Go, the story was that the language semantics was just done at one point, and devs continued to hack on actual IDE features. For Rust, well, we are still trying to cover the entire language! I don’t think either rust-analyzer or RustRover support all type system features at the moment?
Now, current Go generics are much simpler than what Rust had circa 2015, so the situation is by far not that bad. But this is still a big increase in implementation complexity relative to what’s been there before.
What I think is the biggest cost is the effect on the ecosystem. You can’t actually stick to first-order code, because the libraries you use won’t! There’s definitely a tendency to push the ecosystem towards complexity, which plays out in Rust for example. See, e.g., boats’ comment. But here I can confidently say you don’t need to go all the way up to Rust to get into trouble — I’ve definitely lost some time to unnecessarily generic Java 7. I am very curious how this plays out in Go though! Go has a very strong focus on simplicity, so perhaps they’ll rein this social tendency in?
Another cost of generics is compile time or runtime. Generics are either slow to compile, or slow to run. This is also often a cost imposed on you by your dependencies.
Finally, there’s this whole expressivity treadmill: languages start with simple generics, but then the support tends to grow until past the breaking point. Two prominent examples here are the Java type system becoming unsound without anyone noticing, and the Swift type system growing until it could encode the (undecidable) word-equivalence-in-a-semigroup problem.
Per your other comment, I’m in camp 3: it’s a one time cost, and unless it takes literal years to pay it, we can mostly discount it: the compiler team is tiny compared to the size of the whole community, so the only relevant cost I see here is opportunity cost: what else could the compiler team do instead of implementing generics, that would make life even better for everyone else?
I believe Ocaml disproved that a number of decades ago.
Its solution required a small compromise (to avoid heap-allocating integers, their range is cut in half), but the result is dead simple: generic code can copy integers/pointers around, and that’s about it. Compile once, run on any type. And when the GC comes in, discriminating between integers and pointers is easy: integers are odd, pointers are even. The result is fast to compile and run, as well as fairly simple to implement. And if we need natively sized integers, we can still heap-allocate them (the standard library has explicit support for such).
Oh but you’d have to think of that up front, and it’s pretty clear to me the original Go devs didn’t. I mean I have implemented a statically typed scripting language myself, and I can tell from experience: generics aren’t that hard to implement. The Go team would have known that if they tried.
This slippery slope argument also applies to languages that don’t have generics to begin with. See Go itself for instance. The problem doesn’t come from generics, it comes from an unclear or expanding scope. If you want to avoid such bloat, you need to address a specific niche, make sure you address it well, and then promote your language for that niche only.
I’m pretty sure Go acquired generics for one of two reasons: either it wasn’t addressing the niche it was supposed to address well enough, and the compiler team had to say “oops” and add them; or people started using it outside of its intended niche, and the compiler team felt compelled to support those new use cases.
Not really; OCaml uses the standard solution in the “slow code” corner of the design space: everything is a pointer (with an important carve-out for ints, floats, and other primitive types).
This is the case where generics make even non-generic code slower.
That’s fair, but I still have a trick up my sleeve: there’s an intermediate solution between full monomorphisation and dispatch/everything-is-a-pointer. In C++ for instance, most template instantiations differ by one thing and one thing alone: sizeof(). Which narrows the choice quite a bit. Now we’re choosing between compiling once per type size, and passing an additional parameter to the generic functions (the type size). Sure you won’t get the fancy stuff like copy constructors, but the compilation & runtime costs involved here are much, much tamer than the classic solutions.
Sounds like golang’s gcshape monomorphization mentioned by ~mxey?
It actually does, thanks for the link. And from the look of it, it does sound like it’s more complicated than just sizeof(), which I would expect if you want to support stuff like non-shallow generic copies. I’ll look it up.

I’m very suspicious of slippery slope arguments.
I don’t think that’s true. That’s true of much more complex higher-order type systems than Go’s generics.
Eh, unsound Java still rejects more incorrect programs than Object-riddled Java.
I think it is true of any possible implementation of generics. The two implementation choices are dynamic dispatch or monomorphisation. Dynamic dispatch includes a run-time cost, which may be small but also adds a bit more in terms of impeding inlining. Monomorphisation incurs a compile-time cost because you have to create multiple copies of the function.
Slower at compile time, but not necessarily slow to compile.
Can you expand on that? I’m guessing that you mean that a program expressed without generics but solving the same problem will require more parsing and may end up being slower to compile?
Sorry, I just meant that even though monomorphisation takes non-zero time it doesn’t follow that overall compilation will be even perceptibly slower. Not free but maybe still cheap.
That, and maybe some optimizations can be done at compile time when the compiler is smart about generics, rather than not knowing that similar things are actually the same.
Without generics, you also have to create multiple copies of the function.
The problem with C++ / Rust style generics is that monomorphized functions are generated at the use site, where the concrete types are known. They create lots of duplicate copies of monomorphized functions that must subsequently be deduplicated.
This is more of a problem for C++ than Rust, because C++ does this per compilation unit and so the linker typically throws away around 90% of the generated object code in the final link step. Rust doesn’t have the same separate compilation model, but still suffers from some of the same cases where the generated functions are identical and you need to merge them.
Or you do dynamic dispatch and have one copy of the function that can do the same work, just without any specialisation.
The tricky thing for a compiler (and I don’t know of any that are good at this) is working out when monomorphisation is a good idea. Both Rust and C# (and some ML dialects) have attributes that let you control whether something should be monomorphised. Go uses an approach where they monomorphise based on the size of types but do dynamic dispatch for method calls.
If I recall correctly OCaml does neither monomorphisation nor dynamic dispatch. With the possible exception of special cases like structural equality, generic code simply treat generic data as opaque pointers, and as such runs exactly as fast as a monomorphic version would have.
The GC does some dynamic dispatch to distinguish integers from pointers, and integer arithmetic does do some gymnastic to ignore the least significant bit (set to 1 to distinguish them from pointers), so the cost is not nil. As high as actual dynamic dispatch though? I have my doubts.
Go isn’t a particularly smart compiler, which makes benchmarking the overhead of the function call relatively straightforward:
https://gist.github.com/matklad/d7e36f031a38a9a7b3b8a378486a6a91
There’s just more indirection with generics, relative to interface passing. And the alternative to generics is not always an interface — often it is “this code doesn’t actually need to be generic”. And at that point, the difference might be stark: either you don’t monomorphise, in which case the non-generic code, by virtue of inlining, becomes massively faster, or you monomorphise, and now your thing compiles many times slower.
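Not the linked gist, but a minimal sketch of the kind of comparison under discussion - direct call vs interface vs generic (run with go test -bench .; results will vary with inlining and devirtualization, as the replies below show):

    package bench

    import "testing"

    type Counter struct{ n int }

    func (c *Counter) Inc() { c.n++ }

    type Incer interface{ Inc() }

    // Direct call: trivially inlinable.
    func incDirect(c *Counter) { c.Inc() }

    // Interface: dynamic dispatch unless devirtualized.
    func incIface(i Incer) { i.Inc() }

    // Generic: dictionary/gcshape-based dispatch for pointer shapes.
    func incGeneric[T Incer](v T) { v.Inc() }

    func BenchmarkDirect(b *testing.B) {
        c := &Counter{}
        for i := 0; i < b.N; i++ {
            incDirect(c)
        }
    }

    func BenchmarkIface(b *testing.B) {
        c := &Counter{}
        for i := 0; i < b.N; i++ {
            incIface(c)
        }
    }

    func BenchmarkGeneric(b *testing.B) {
        c := &Counter{}
        for i := 0; i < b.N; i++ {
            incGeneric(c)
        }
    }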
You’re talking about a language that added generics after the fact. This adds constraints they likely wouldn’t have had if they did it from the start, and is much more likely to bias heavily against generics. Try the same with OCaml, I bet the results would be very different.
Now benchmark against using the any type, which is what you’d use in the absence of generics. I suspect that’s not any faster than generics.
If you don’t have generics, most of the time, rather than using any, you refactor the code to not be generic. In any case, any is not slow:

This makes sense — it’s still static dispatch after the downcast, and the downcast should be speculated right through.
You do have to be careful about inlining and devirtualization. In this case, the interface version is being inlined and devirtualized, so it ends up executing the exact same code as the direct version. Adding //go:noinline annotations to the four functions changes the results (on my PC, on Go 1.22.4) to match what I expected: generics are implemented using dictionary passing, so it’s a virtual dispatch call just like interfaces, and the any version is going to do slightly more work than the increment, checking the type every loop.
People like to claim that Go isn’t a particularly smart compiler, but I find that it does do quite a bit of useful optimizations and instead chooses to skip out on those that are less bang for the buck. For example, in this case, it actually executes every loop and increments instead of just loading a constant like many C compilers would to (IMO) dubious benefit.
Thanks, I take my words back that it’s easy to benchmark Go code, I didn’t realize that the compiler already does devirtualization!
It only devirtualizes in extremely limited circumstances. Anything more complex than a micro benchmark and it won’t devirtualize 🙃
Your commitment to educating others with your unique experiences is incredible. Thank you for these replies.
<3 that’s an extremely lovely characterization of my procrastination!
Well I stand corrected on the performance - I’m actually quite surprised that go generics are being expanded to something that performs worse.
Typically the any type isn’t what we’d use in the absence of generics. I’ve written a lot of Go since 2012 and only a vanishingly small percentage of it uses any for generic use cases. Far more frequently, I just write type FooLinkedList struct { Foo Foo; Next *FooLinkedList } because it’s both more performant and more ergonomic than an any-based linked list with type assertions at the boundary.

I was using linked lists a lot when I was writing code in C. It is interesting that I have never used linked lists in Go during the last 12 years. Standard slices work exceptionally well in places where C would require linked lists. As a bonus, slices induce less overhead on the Go garbage collector, since they are free from Next and Prev pointer chasing. IMHO, slices are the best concept in Go. They are fast, they are universal, and they allow re-using memory and saving allocations via the a = a[:0] trick.

I agree. I don’t use linked lists very often, and my point wasn’t that linked lists are a particularly good container; only that in the rare instances when I need to do something generic, I’m only very rarely reaching for any. There’s almost always a simpler solution. Usually it’s the builtin generic types, but even when it’s not, there’s almost always a better solution than any.

The cost is that your dumb coworkers can have more ways to write horrible code.
Go is the only language I know of that takes “your coworkers are dumb” as a design principle
In English, “you may have dumb coworkers” does not mean “all of your coworkers are dumb”. Moreover, Go doesn’t have any design principle with respect to “dumb coworkers”, but it does care about keeping the cognitive burden small (you don’t have to be “dumb” to make mistakes or to want to spend your cognitive budget on business rather than language problems). I don’t think that’s uniquely a concern that Go has expressed–every language which exists because C or C++ or JavaScript or etc are too error prone is implicitly expressing concern for cognitive burden. Everyone who argues that static type systems provide guard rails which make it more difficult for people to write certain kinds of bad code is making essentially the same argument.
You aren’t taking into account what’s the alternative to generics. People aren’t just going to stand idly and write the same function twelve times, they are going to bolt on macro systems (m4, cpp, fuck even jinja) on top of the source code, implement custom code preprocessors or parsers (pretty much every big C++ project of the 90s), use magic comments, funky build system extensions (you don’t know how far you can get with CMake and some regexes), etc. One way or another the written code is going to be generic… and I tend to think it’s much better if it’s in a language-sanctioned way rather than every project reinventing its own take on it. Today in C++ people do so much more things in-language than in weird external tools compared to 20 years ago, and that’s thanks to the continuous increase in expressive power of its type system.
Alternatively, people may think more and come up with a simpler solution that doesn’t require generics or external code generation. From my experience, well-thought-out interface-based solutions in Go are easier to read and reason about than generics-based solutions.
Update: when talking about interfaces in Go, people frequently think about empty interfaces and forget about non-empty interfaces exposing the minimal set of methods needed for solving the given generic task.
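A small sketch of that style (names invented): the “generic” task is expressed through the minimal non-empty interface it needs, with no type parameters involved.

    package main

    import "fmt"

    // Sizer is the minimal non-empty interface the task needs.
    type Sizer interface {
        Size() int
    }

    type File struct{ bytes int }
    type Dir struct{ entries []Sizer }

    func (f File) Size() int { return f.bytes }
    func (d Dir) Size() int {
        total := 0
        for _, e := range d.entries {
            total += e.Size()
        }
        return total
    }

    // TotalSize works for anything satisfying Sizer - no generics required.
    func TotalSize(items []Sizer) int {
        total := 0
        for _, it := range items {
            total += it.Size()
        }
        return total
    }

    func main() {
        fmt.Println(TotalSize([]Sizer{File{10}, Dir{[]Sizer{File{5}}}}))
    }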
My concern was (and sort of still is) that people would start writing Go libraries with gratuitous abstraction, like they do with other languages that embrace generics. “Just don’t use those libraries” isn’t much of a consolation if that’s the only available library for a given domain nor if the gratuitous abstraction becomes a property of a vast chunk of the ecosystem as with other languages. I’m happy to report that my concern has so far been misplaced–the Go community has done a fantastic job about exercising restraint.
Readability suffers: more symbols, one letter types, etc…
Is that true? IIRC, if so, wouldn’t they have been added a long time ago, with less resistance both from the PL creators and the community? I’ve (unfortunately) only recently had to delve back into Go, after previously using it professionally a few years prior.
Yes. This is true. Everyone agrees that generics are beneficial. Luckily, Go language designers, besides knowing quite a lot about the benefits of generics, are also keenly aware of the costs. The costs are the reason why this wasn’t a no-brainer feature to add in version 0.1 of the language.
As my old hardware prof told me, even the crappiest CPU features have uses! “It is useful” can’t be a reason to add something; you need to compare it to the costs.
I’m glad to see the “used with restraint”. I was worried (and still am concerned) that people were going to write gratuitously abstract libraries like we see in other languages. But so far I’ve been happy with the degree to which people have used them appropriately in the ecosystem.
Weird logic:
You mean, you’re already busy implementing by hand something the language could have helped you express better.
Yes, because it’s far better when you traverse your own hand-made ad-hoc abstractions than standardized ones.
Why add useless abstractions in the first place? The code must be as simple as possible. Abstractions must be added to the code only when they are really needed. From my experience, the need for abstractions in Go is very rare.
A leading question if I ever saw one. With a premise that X is useless, of course a language is better off without it. But it’s silly to say that all the language features listed in the article and present in many languages but not Go are simply useless. They are all various levels of useful, with pros and cons, and interactions with the language’s other features.
For example, take Go’s simple (ok, err) tuple vs Rust’s Result<T, E> generic enum. Rust has tuples too, but they decided to use a higher-end abstraction, with many utility methods and more complicated codegen. But it’s IMHO a very useful abstraction that reduces the mental load compared to the Go solution: no need to worry about an API that can return both ok and err non-null, and much better ergonomics, including the beloved ? operator. Sum types (generic or not) are great; once you’ve used them in Rust, you’ll wish most languages had them.

I’m not arguing whether Go should add support for enums, just trying to show that good abstractions decrease the user’s mental load, not the other way around.
And furthermore, it provides additional signalling for free: in Go, most erroring functions return either a datum or an error. But not all of them; IO functions notably may often return both.
From time to time a gopher will think it’s a ding against Result that you can’t do that, but it’s actually an advantage: in Go, only the documentation tells you the difference, if things are even clearly documented, and it’s easy to be lured into a habit. In Rust, if you need something like that you’ll get an odd return type, maybe a tuple of a result and a value, maybe a bespoke enum, maybe a Result<(Int, Err), Err>, etc., and that tells you that something strange is indeed going on.

Even more important, IMO, is the difference in correctness here: Go will allow you to just forget to handle an error. It’s the worst of both worlds: more verbose and tedious to use, and less safe.
The reason is to grow the community. Adoption of Go is hindered by the absence of certain expected abstractions. Your definition of simplicity encompasses understanding what will happen under the hood with every line of code. Python refugees will have a different metric of simplicity, which will look a bit more like “how uniform is my codebase?” There is a huge amount of performance headroom in Go to a Python refugee; things that might cost 10% of the performance are probably not even going to be noticed. If you are writing a simple REST server or client, going from Python to Go is a huge improvement, the only cost is that now you have to do the error handling in this irritating way, there are no generics, inconsistent iteration, etc.
Ultimately, the real question isn’t “why do we need these abstractions” but “what is Go trying to be?” Because if Go is about performance and simplicity primarily, you’re absolutely right and these features are motion in the wrong direction. If Go wants to grow and expand its userbase, it will have to be more inviting to people who are not primarily concerned about performance, and those people will be asking for quality of life improvements like they see in other languages, which to you are going to be unnecessary performance-reducing complexities.
I have some sympathy for their position here. And it’s a similar philosophy to e.g. lua. There’s trade offs in both directions, and neither extreme is perfect.
Every software expands until it finally supports sending email, and every programming language grows until it eventually supports template metaprogramming (or something of equivalent expressive power). It’s unavoidable if the language grows to mainstream levels of adoption. At some point someone will come up and make “Yup”, marketed as “Go but simpler” and the cycle will repeat.
The corollary being, “Just implement something equivalent to Lisp macros from the start.”
If you do it from the start, the rest of the system will be built with it in mind, at the very least.
Go already implemented sending emails more than a decade ago so it is about time it enters the metaprogramming space.
https://github.com/golang/go/blob/master/src/net/smtp/smtp.go
[Comment removed by author]
At work I would always hear PMs talk about how software engineers are often “too close” to the problem to really understand it effectively, but I don’t think I’ve run across a good example in the wild until right this moment.
Realizing with a night of sleep that the above was quite a bit meaner than I really intended; I do think I actually agree with the thesis of the article. Go is “the simple but serious industry programming language” and I appreciate what it’s doing as an actual experiment in software engineering processes. I personally don’t enjoy writing Go and am more attracted to languages with fun type capabilities like Rust, but I do think the basic thesis of Go as a language is worth preserving without trying to turn it into Rust-flavored sparkling water.
What surprised me with Go was that it was static but not fully safe. And the excellent quality of a lot of software produced with it.
Also it’s interesting to see that fans of Lisp promoted the language as superior for years*. Lisp is one of the most extensible languages, but Go, the antithesis of Lisp, has made A LOT of programmers productive. It’s good to challenge one’s own prejudices.
*: I love the language too but would not qualify it generically as superior. Superior in some context okay, yes.
INTERCAL has great support for AOP. I consider the ease with which a feature can be implemented in INTERCAL as a fairly good benchmark of how bad an idea it is.
Thanks for ripping this strawman.
Go kinda has weak inheritance in that you can embed structs in other structs and get the embedded struct’s methods as top-level methods on the holding struct. A simple example: https://go.dev/play/p/5y1jztBjApj. And I’m sure that a lot of people (probably correctly) would argue that isn’t true inheritance. I’d mostly agree. But it gets you most of the way there, which is basically the Go ethos for language features. I don’t need AbstractBeanFactorys, and if for some reason I did, there’s always Java.

The main property of classical inheritance in programming languages is the ability to access the base class’ fields and methods from the derived class’ methods, e.g. the derived class may change the base class’ behaviour. This is impossible in Go - embedded anonymous structs do not have access to the struct that embeds them. In other words, embedded structs do not know anything about the embedding struct and cannot modify its behaviour in any way.
This eliminates the whole class of issues related to classical OOP, when you cannot say anything about what’s going on in the running code by just reading the code of base class.
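A sketch of that difference (invented names): a method promoted from an embedded struct never dispatches back into the struct that embeds it.

    package main

    import "fmt"

    type Animal struct{}

    func (a Animal) Speak() string { return "..." }

    func (a Animal) Introduce() string {
        // Calls Animal.Speak, NOT the embedder's Speak: there is no
        // virtual dispatch back into the outer struct.
        return "I say " + a.Speak()
    }

    type Dog struct {
        Animal
    }

    // Shadows the promoted Animal.Speak on Dog itself.
    func (d Dog) Speak() string { return "woof" }

    func main() {
        d := Dog{}
        fmt.Println(d.Speak())     // "woof"
        fmt.Println(d.Introduce()) // "I say ..." - not "I say woof"
    }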
Unlike seemingly everyone else I agree.
The main reason for using Go over another language is that it was set up with a specific mindset, which now seems to be chipping away because it’s not hip or fancy.
It’s also a bit sad that your point, which seems to be “things have trade-offs”, seems to be largely ignored, and it sounds like everyone thinks you hate the idea of iterators or generics in general.
But it’s a bit expected. I hoped Go would not go down that road and would offer an exception to the rule that languages become bloated over time and end up with very mixed styles of software/libraries depending on when each part was created, making it harder and harder to read through other people’s code.
It seems like programming languages most of the time end up there, or do a big breaking change, like Python 3, etc.
The typical “solution” is to jump on a new language every couple of years so one has a more minimal language again, and not a new style of programming for every decade or so. Really unsatisfying.
I think it’s also a problem in making decisions. For example take the Go project’s surveys. It’s usually ranking what you miss the most, what’s the greatest burden, etc. But maybe I chose Go, because that’s the trade-off I agree with the most. It feels like people add features to languages until nobody likes it anymore, then everyone jumps to the new language until the same happens. So we end up with dozens, well, hundreds of languages with usually the same bloated feature set.
That’s a brilliant observation, thanks!
If every language gets “bloated” (a word that is criminally overused and I really dislike), then maybe there is a good reason for that? Like, I’m sure you like using libraries that solve a/the problem for you, and they make really good use of a more expressive language.
I hope you don’t mind the longer response too much. I did put things into more words in the hopes of not being misunderstood.
Yes, people add features until people abandon the language and switch to something that is still consistent, because it didn’t (yet) have a history of features being added bringing inconsistencies, ten ways of doing the same thing, etc.
Sometimes then the decision is made to break with the old, which is effectively creating a new language (because your code won’t run anymore). See Python 2 -> Python 3.
But it’s not just languages. There are projects that created new major versions removing features intentionally. Even web frameworks, like Express and I think Django, did that too. It’s not that nobody used those features; it’s accepting that just having more features isn’t a good enough reason to keep them. And if you are careful about adding them, be it in the language or through libraries, you invest in the future. Sometimes that’s not the goal though. Sometimes Ruby + Ruby on Rails is exactly what you want and need. Is it bloated? I think so. They did sometimes over-extend as well. And often they have to put effort into keeping support, for example through configuration options or specifying when a certain class was created and things like that.
So yes, there are reasons. That’s not my point. My point is: why do we have languages that turn from simple to huge, just to be rewritten in their simpler version over and over? Yes, there are good new concepts spreading sometimes, and there is taste and such, but that’s not the only reason. Going for a new feature is always a trade-off, and if you do that too often, the scale at some point reaches the point where people consider the language bloated. That point differs between people, and there are surely people who will embrace the new features, people who don’t mind breaking things by removing the old way of doing them, and people who don’t mind having a million ways to do things. That’s all completely fine. I just don’t want every language going that way, and it felt like Go wasn’t for a while. There were talks by core developers about how Go was pretty much done and how little change outside of the stdlib and general upkeep (newer platforms, GC improvements, etc.) should be expected. That changed. Not sure why, but it did, which means a reason for choosing this language in the first place is slowly going away.
I do my best to avoid adding a lot of extra language surface. I try to keep it at a minimum. I do my best to not make everything super generic. I try to keep the developer in mind by reducing the amount of language you have to understand. I do that intentionally. I strongly avoid any libraries that add features just for the sake of having them. I look into a project’s issues and check whether made-up use cases for features lead to a rejection.
I don’t have a problem with people thinking differently about languages, programs, etc. That’s why I always argue that people should use languages that fit them, that are made by people with a similar mindset who make similar decisions on trade-offs, etc. And it’s one of the main reasons why there is a need for more than one language. It’s not the only one, of course, because languages can also have hard limits/design decisions. But that’s not what this discussion is about.
I completely understand why people choose Ruby, Perl, and very expressive languages. They make it easy to express relatively complex things in few “words”. But that’s not what Go set out to do at all. And if you look into Rob Pike’s and Russ Cox’s history, you’ll find that they have a history of writing and using software that was decisively “minimal”, that did not have easy-to-implement features that would have allowed people to make good use of them.
To be fair, yes, it is overused. What I mean by bloated in this context is when things are added not because they fit the project/language/software goals, but because they can be added: maybe because they are trendy, maybe because they allow for a cool demo, or maybe because they make something easier. Again, I don’t say this is somehow invalid, but there are projects that intentionally say no to features even when there is a pull request, it’s well implemented, and more than one person is pushing for it.
And a lot of projects start out with clear goals that kind of get washed down until they become generic. And that’s then the very same thing people complain about when it becomes too much. That’s why you see the effect, when reading through code and libraries, that you can tell when each library was written, because that’s when some hip feature or style of writing software was super popular.
If you stick to a more minimal set you might have to write more, but the code becomes easier to follow, read, and reason about. If there are fewer features, there is of course less stuff that can go wrong. And I think libraries are a good analogy, which is why I try to avoid libraries, especially for simpler things. It depends on how libraries are used though. Every function you add is a function you have to understand in all the contexts it can be used in, with all the inputs, outputs, and edge cases, should they exist. That’s mental overhead you want to avoid, especially when things go wrong, also so as to not introduce more problems.
I have to say this, though, as someone who doesn’t just rant about Perl without ever really having used it: you can also use expressiveness to make things simpler to understand. For example, if you have the right words to express logic, instead of a minimal set, you can reduce mental overhead. However, you essentially need a lot more self-discipline to not just use it in ways that are easy to write (but hard to read).
So it’s really not completely determined by the language. The language can push you in a certain direction, but so can, for example, a community. Look at the C code out there. There is lots of it that even experts struggle with, and then there is code like much of the OpenBSD codebase, which is simple to understand, often by having limited APIs, especially none that have edge cases or that let you shoot yourself in the foot, even when such APIs might make some things simpler.
An example is how OpenBSD, unlike most other systems, doesn’t have a way to get the path of the current executable.
I like using standard Go packages. I even like using the standard Go packages that use generics, such as slices and sync/atomic. They are well thought out, easy to use, and hard to misuse. I usually don’t like using third-party generics-based packages, since they are usually overcomplicated. That’s why I’d prefer if generics were limited to writing standard Go packages only.
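For instance, the typed atomics in sync/atomic include a generic pointer type; a small sketch:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type Config struct{ Debug bool }

func main() {
	// atomic.Pointer[T] is one of the stdlib's generic types:
	// type-safe swaps without interface{} casts, hard to misuse.
	var current atomic.Pointer[Config]
	current.Store(&Config{Debug: true})
	fmt.Println(current.Load().Debug) // true
}
```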
Take a look at Guy Steele’s excellent presentation titled Growing a Language. I really disagree with this notion that only standard library developers get to use some features of a language. Lots of stuff shouldn’t go into the standard library, but still requires more expressive features.
Then just switch to a more expressive programming language.
A big, fat, opinionated runtime, lack of optimized compilation mode, and a willful, fundamental misunderstanding of the role of assertions in programming language design.
I love how everything Go does turns me off. How many times have I read something along the lines of “After long discussion, <cool but perhaps advanced feature> was decided to be too complicated for the simple minds we designed Go for, so it was left out of the design”.
Not to start an argument, but experience has taught me the opposite. A younger me would perhaps have been like, “naw, that’s too simple, gimme something powerful”, and I would feel clever when using advanced features and be cool doing it.
After a while though, I figured that I’d rather keep it simple and then be smart when I really need it. Because that way I don’t inflict my wise ass ideas onto the rest of the team, and the poor sucker who’s going to be maintaining my crap for the next 19 years.
Furthermore, experience has also taught me that the more concrete the problem I’m solving, the more this holds. If I’m working in a small product team where my changes are deployed in a few weeks, simple practical stuff is good, and even the junior can fix the code directly in production if needed.
On the other hand, in a big corporate team, I am so far removed from any real problem that I’m inventing my own crap and smart ideas and wise assumptions like there’s no tomorrow.
I mean, it’s not a problem when one person writes smart stuff. It is when everyone in the team does it, and they’re not all the same kind of smart.
So it’s always been about the practicality for me. Maybe that influenced my world view.
I like to compare advanced/complex language features to spice. A plain meal is fine and edible, but perhaps a bit boring; throwing in a bit of spice can enhance the dish; a world-class chef can push the boundary and use more spice in the same dish than someone at home could, but without ruining it; and adding too much spice, whether done by a novice or a chef, makes the food inedible and leaves behind only pain and regret.
Functions, arrays, structs, and loops are the meat and potatoes of programming; generics, macros, async, closures, reflection, dependent types, etc. are the spice. A small sprinkling of those advanced features can improve a code base; very good programmers are able to combine more advanced features without creating a disaster; but if we use too many features, we just create pain for anyone who wants to understand and maintain our code.
The usual actual criticism is that Go is not simple but rather is simplistic. And the fruit of hard-earned experience is the understanding that simplistic is not as practical as it first seems – simplistic programming often boils down to “I got the wrong answer, but I sure did get it quickly and easily!”
That’s not true given my experience with Go. It has a very good balance between simplicity and usability. I enjoy writing programs in Go. I enjoy maintaining and extending large codebases in Go. I enjoy the ease of reading and understanding others’ code in Go.
I’m afraid that generics, generators and other “advanced” features will complicate Go too much, so it will become yet another bloated programming language with many ways to write unreadable and unmaintainable code.
Years ago I was reviewing a patch someone had submitted to a project I worked on. One part of the functionality was resetting the sequence objects that yield auto-incrementing primary key values in a DB.
The patch implemented this by hard-coding a “big” (but not actually that big) number and just always setting the sequence to that value.
This is “simple” and even “practical” in the sense that it actually does work for a lot of tables. You could potentially use this for a long time and never run into a problem, and all the while you could sneer at people who insisted that you needed a more complex “bloated” solution.
Every time I look at Go, it reminds me of that patch. There are just so many choices in it that opt for superficial “simplicity” and for sweeping all the complexity and edge cases under the rug. The infamous “I Want Off Mr. Golang’s Wild Ride” gives a few examples of this, but I’ll pile on another: Go’s allegedly “simple” error handling, which ends up being so lacking for real-world use cases that Go error handling is as complicated and fractured as people like to claim packaging in Python is.
Not only does Go’s approach not yield actual simplicity (ask five Go programmers how they handle errors and you’ll get twenty different suggested techniques and libraries), it doesn’t even avoid most of the issues people point out with alternatives like exceptions. For example, every serious approach/library for Go errors does some sort of wrapping of errors every time they’re encountered, which means that when an error occurs you’re paying a compute and allocation cost in every stack frame between the original error and whatever code stops propagation, just as you would with a try/catch in a language with exceptions. So in the name of “simplicity” Go ends up being more complex than it needed to be – you still pay the costs of exception or exception-like strategies, but without the consistency and clarity of having one obvious way to handle things. It’s a very penny-wise/pound-foolish thing.
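For concreteness, the per-frame wrapping pattern described above usually looks something like this (a minimal sketch with hypothetical names):

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func readConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Each frame adds context by wrapping; each wrap allocates.
		return nil, fmt.Errorf("reading config %q: %w", path, err)
	}
	return data, nil
}

func loadApp() error {
	if _, err := readConfig("app.toml"); err != nil {
		// Another frame, another wrap, another allocation.
		return fmt.Errorf("loading app: %w", err)
	}
	return nil
}

func main() {
	err := loadApp()
	fmt.Println(err)
	// errors.Is still finds the original cause through the wraps.
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true if app.toml is missing
}
```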
I’m sorry, but I didn’t understand the example with the sequence object.

As for error handling in Go, I agree that it may be tedious to decide how to deal with returned errors - whether to handle them in place or to return them to the caller. If the error is returned to the caller, you need to decide whether to return the error as is or to wrap it in another error with additional context about the conditions and the location from which the error is propagated. This additional context helps in understanding the conditions that led to the error, e.g. it simplifies debugging in production.
Other programming languages “solve” error handling complexity in two ways - either via exceptions or via easy-to-use syntactic sugar that propagates the error from lower functions to the caller.
The main problem with exceptions is that exception safety is almost impossible to achieve. Almost all code written in a language with exception support contains literally hundreds of bugs that trigger when some rare error occurs. See this article for details.
Programming languages that simplify propagating the error from lower functions to the caller via syntactic sugar encourage returning all errors to the caller without thinking about whether the error should be handled right now instead of being propagated. This may result in less robust code that doesn’t handle some errors well, compared to code in Go.
So, proper error handling is hard. Some programming languages encourage writing code which executes without issues on the happy path and breaks with hard-to-debug issues on rare errors. Go forces you to think more about proper error handling. Hopefully, this leads to more robust, easier-to-debug programs.
Imagine you have a database table, and the primary key of that table is an integer that should be incremented for each row. So the first row inserted will get a primary-key value of 1, the next will get a value of 2, etc.
A sequence is a database-level object which yields the incrementing values. Databases provide this functionality because they can implement it in a transaction-safe way (i.e., even if two pending insert transactions can’t see each other, the sequence object can ensure they each receive distinct values).
After some types of database modifications/manipulations, you will want or need to “reset” the state of one or more sequences. The right way to do this is to issue a query to find the highest in-use primary-key value in the table, then set the sequence’s state to yield a value higher than that. The wrong way, which is what that old patch did, is to say “eh, 10000 is probably high enough that nothing’s using a higher value, we’ll set the sequence to that”. If the table in question already had more than 10000 rows inserted, this will make the sequence re-issue previously-used primary-key values, which will cause integrity errors when trying to insert new rows.
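For illustration, the right way described above could be sketched like this in Go against PostgreSQL (table, sequence, and driver choice are all hypothetical):

```go
package dbutil

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // hypothetical driver choice (PostgreSQL)
)

// resetSequence sets the sequence from the actual max primary key,
// rather than guessing a "big enough" constant.
func resetSequence(db *sql.DB) error {
	_, err := db.Exec(
		// COALESCE handles an empty table; setval makes the next
		// nextval() return a value above the current maximum.
		`SELECT setval('orders_id_seq', COALESCE((SELECT MAX(id) FROM orders), 1))`,
	)
	if err != nil {
		return fmt.Errorf("resetting orders_id_seq: %w", err)
	}
	return nil
}
```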
The examples given in that article are not avoided by Go’s error handling. In Go it is equally possible to have an error occur in the middle of a sequence of operations, and to have it occur in a way which produces an invalid, partially-completed state for that sequence of operations. Uninitialized fields, for example, are kind of an infamous gotcha in Go, and Go’s approach to errors makes it easy to end up with them. The one advantage exceptions have is that they immediately break the control flow and propagate themselves until caught – Go’s errors do not do this, so you can easily and dangerously keep going after accidentally failing to notice a non-nil err value, resulting in inconsistent or incorrect states that are hard to debug and diagnose.

Which gets back to my point: the supposed simplicity of Go’s approach does not materialize. Instead it is simplistic, and actually ends up introducing more complication than the supposedly “complex” and “bloated” alternatives would have.
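To make that gotcha concrete, a minimal sketch (hypothetical names) of sailing past a non-nil err:

```go
package main

import (
	"fmt"
	"os"
)

type Account struct{ Balance int }

func loadAccount(path string) (*Account, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err // early return with a non-nil error
	}
	_ = data // parsing elided
	return &Account{Balance: 100}, nil
}

func main() {
	acct, err := loadAccount("missing.json")
	// BUG: nothing forces us to inspect err before using acct.
	fmt.Println("err was:", err)
	fmt.Println(acct.Balance) // panics with a nil pointer dereference
}
```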
Thanks for the description of the issue with the sequence object!
As for error handling in Go, it is trivial to notice and fix an unhandled or improperly handled error just by reading the Go code. This is almost impossible to do when reading code written in a programming language with exceptions. See another article, which explains this in more detail.
That’s not true: Go always initializes all fields and variables to zero values, unlike C or C++. See this playground example.
Adding to what ~ubernostrum said, if variables are default-initialized when there isn’t an explicit initializer, then the compiler cannot warn you that you forgot to initialize one.
In my experience (in C) it’s almost always the case that I either have an explicit initializer for a variable, or I have some complicated control flow following the declaration to work out what its value should be. In the complicated case, it’s really helpful if the compiler can tell me when I missed a branch.
An alternative solution (like Rust’s) is to always require an explicit initializer, and to allow expressions to contain complicated control flow. This is probably better than Golang or C – I find I have a stronger dislike for sprawling initializer expressions in Rust than for divergent control flow in C, and that dislike helps me keep things simple.
Automatic initialization of variables and struct fields to zero values provides the following good properties in Go (a small sketch follows the list):
It eliminates bugs related to missing initialization (like in C).
It reduces the amount of code needed for initialization to zero values (variables and struct fields need to be initialized to zero most of the time).
It allows using zero field values as default values in large config structs, so users of these structs need to fill in only the small fraction of fields that need non-default values.
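A small sketch of that last point, with hypothetical names:

```go
package main

import "fmt"

// ServerConfig: zero values double as sensible defaults.
type ServerConfig struct {
	Addr          string // "" means ":http" below
	MaxConns      int    // 0 means unlimited
	EnableTracing bool   // off by default
}

func NewServer(cfg ServerConfig) string {
	addr := cfg.Addr
	if addr == "" {
		addr = ":http"
	}
	return fmt.Sprintf("listening on %s (maxConns=%d, tracing=%v)",
		addr, cfg.MaxConns, cfg.EnableTracing)
}

func main() {
	// Users fill in only the fields they care about;
	// everything else stays at its zero value.
	fmt.Println(NewServer(ServerConfig{Addr: ":8080"}))
}
```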
As mentioned by others, it only makes things deterministic/memory-safe; it doesn’t make them a valid representation of state. Say you have a date and all of its fields are 0 - is that a valid date? Probably not, depending on how you encode it.
I think you should read the articles you keep linking. They are mostly concerned with “what if an error happens in the middle of these operations”, which is a real problem but is not a problem unique to exceptions and not a problem that Go magically fixes for you. It is super-duper easy to write bad wrong Go code that’s full of bugs related to partially-completed operations!
And your latest link even admits, as a cop-out at the end:
That was written nearly 20 years ago. There are programming languages which have RAII as a syntactic construct now. People know about transactions now. Tutorials and reference documentation use these features now.
If you get to cite a 20-year-old article on how not enough sample code uses these things, I get to say Go is impossible to use because 20 years ago it didn’t exist yet.
When I say “uninitialized” in the context of Go, do not interpret that as “this struct will contain pointers to uninitialized memory”. Interpret it as “this struct will contain fields set to whatever Go’s defined zero-value is for those field types”, which is a bit of a mouthful to say over and over again.
And this is a real problem and a real source of gotchas in Go, because in any error handling model based on disrupting control flow (whether by throwing an exception in other languages, or by an early return with a non-nil error in Go) it is entirely possible to wind up with a struct in the wrong state, and difficult to tell.

Suppose there’s an “open bank account” function which creates a new BankAccount struct and optionally applies a deposit to it. I am looking up a newly-created account, and its balance is zero. How can I tell, from looking up that account, whether its balance is zero because the customer did not make a deposit, or because the deposit processing failed partway through and left the BankAccount with only its default Go zero value for the balance?

This is the downside of having the struct always be “valid” – you gain the ability to persist or pass around things that you shouldn’t be able to persist or pass around, and get your system into an inconsistent or even broken state.
And don’t tell me the answer is to run everything in a transaction and roll it back – the article you linked does not allow transactions as an answer!
Come on, this is just dishonest; not even you believe that. Surely Go handles error cases better with the million randomly placed empty “if err” blocks, or printing a random word for an error, not even getting a stack trace, having to grep for the error string in the source code…
Exceptions are great because they make it harder to just silently swallow errors (the worst outcome of all), unlike Go’s C-inherited errno style.
When hiring I try to find the people who reached this stage. Normal adults :)
I will remind you of this if things go weird at my current job :)
That’s a correct approach - but you can’t achieve that with Go, that’s the problem. Writing a library - as opposed to a concrete program - simply requires more expressive code and Go under-delivers here.
That’s not true in my experience. I wrote numerous open-source Go packages. Some of them are relatively popular (see here and there). But I never felt the need to use generics in these packages.
That’s what “code reviews” are for.
I literally just went through one where I had to retract much of my “cleverness” due to (valid, after thinking about it) pushback, in most cases.
I’m in Elixir btw, which seems to occupy this magical space of both “is accepted as a language people get actual work done in” and “is a functional language”.
…the Python/Java programmers that Go was designed for.
Go was designed for people who aren’t yet programmers (writing Python at university does not make you a Python programmer). The oft-shared Pike quote:
This quote is misunderstood, allow me to link an old comment of mine: https://lobste.rs/s/tlmvrr/i_m_programmer_i_m_stupid#c_itjpt0
I mean that still makes sense in that it was for both new grad Google employees as well as existing Google employees (who’d have to stomach a ‘simpler’ language). Given that Google was always heavily weighted towards Python+Java.
I write performance-oriented software in Go. It works fast, sometimes even faster than the corresponding software in Rust (see this case). Go allows writing fast yet simple code. The only thing that bothers me is that it doesn’t optimize hot loops in any way: it doesn’t unroll them and it doesn’t vectorize them. So the only solution for getting top performance out of hot loops is to write them in optimized assembly. But this has two drawbacks:
It would be great if the Go compiler could do this work itself.
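For reference, this is the kind of hot loop in question; today the compiler emits a plain scalar loop for it:

```go
package hotloop

// Sum is a typical hot loop. The Go compiler currently keeps it as a
// simple scalar loop (no unrolling, no vectorization), so hand-written
// assembly is the usual escape hatch when it dominates a profile.
func Sum(xs []float64) float64 {
	var s float64
	for _, x := range xs {
		s += x
	}
	return s
}
```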
Not to be The Guy Who Evangelizes Rust but Rust has the assembly thing down nicely, FWIW.
I don’t believe writing asm in golang is especially difficult compared to other asm; it’s mostly the writing-asm part itself that is a big step up in complexity compared to not doing so.
With the caveat that Go uses a bespoke pseudo-ASM, which is separate from the architecture’s own standard assembly but does not entirely abstract it.
Assuming Rust actually is taking some of Go’s project share (source?), I’d say it’s more about Rust becoming easier to use (language features, ecosystem…) and more mainstream than about Go/Rust becoming faster.
Rust and simplicity in the same sentence look unreal :) Could you provide a few Rust vs Go code snippets that show the advantage of Rust code over Go code from a simplicity PoV?
As kivikakk said, “easier to use” is not the same as “simpler”, just like using a complex washing machine is easier than washing by hand.
In the programming world, a useful way to look at things is that each task+constraints has an inherent amount of complexity, and that the only choice you have is where that complexity should live. It gets pushed from the language to the stdlib to the 3rd party libs to the top-level code to the end-user. If your language handles more of the complexity, your code doesn’t have to. Exposing “only as much complexity as needed” is hard, there is no best answer. Go and Rust have taken very different routes, but they’re both great languages.
Another misunderstanding is that I said Rust is getting simpler to use than previous Rust versions, not necessarily simpler than Go. The language and stdlib get QoL improvements, the borrow checker gets smarter, the surprising limitations get lifted, the tools get refined, the crates get more numerous and high-quality, etc. Rust has a reputation for being hard, but it’s not as hard as it used to be. The “Prefer Go because Rust is too hard” argument is still strong, but getting weaker.
As for Rust code actually being simpler than Go code, there are some small examples like “let res = failible()?” vs “res, err := failible(); if err != nil { return (nil, err) }”, or “*counter.lock().unwrap() += 1” vs “counter_mu.Lock(); counter += 1; counter_mu.Unlock()” (both simpler to use and harder to misuse), but toy examples like these are arguably not the most interesting comparison.

@moltonel didn’t actually say simplicity anywhere; they said “easier to use”. Simple and easy are very different concepts.
I wanted to use generics to solve a problem my Go program had, but the lack of support for methods was a cold splash of water on that idea. I gave up.
Why did you want to use generics instead of, e.g., interfaces?
There’s lots of things that generics can represent that interfaces cannot, and there may be performance advantages too.
I don’t believe the latter is true in Go. The Go implementation of generics uses dynamic dispatch for everything. They could add some monomorphisation, but you can also monomorphise functions that take interfaces and they haven’t done that either.
Not quite. Code is generated from generics for each “shape” of value. So not for each type, but also not just one variant either.
Regarding interfaces: if the compiler can prove the type behind an interface, it can devirtualize. This can also be informed by profile-guided optimization.
I see! Likely just a program design issue in that case.
I do find generics useful, though it would be nice if they could be used in methods as well.
I’m not especially excited about the new iterators thing, though. Seems complex and funky to me. I can only hope it will end up being better in practice than I imagine it will be.
Both generics and iterators are useful for some applications. But they have costs related to the increased complexity of both the programming language itself and of code that uses these features in inappropriate places.
Frequently it is better to stop adding new features and instead focus on polishing existing strong features.
I think your take is a breath of fresh air. I’m also rather annoyed that languages keep bolting on features from every other language out there, even if it doesn’t fit the language (although personally, I lean more towards powerful languages, used with restraint, but that requires a very disciplined dev team).
It sounds like iterators could’ve been a great fit with Go if designed in from the start, perhaps in a more explicit way so it’s clear where the function calls are happening. It would’ve reduced the inconsistent mess of different standard iterator functions they describe, and there wouldn’t be two ways of doing things.
As someone who was firmly in the “I will not touch a language without generics” camp, I must admit that I kinda like the current state of generics: the warts push towards not using them if possible, but they are here if needed.
… and people continue asking for the missing parts of Go generics. BTW, iterators are a good example of one such part: they allow iterating over generic types in a unified way. I bet the next generics-related thing to be added to Go will be generic methods on generic types. The next, the ability to specialize generic types and generic functions. The next, function overloading. And so on. Just look at the history of C++.
I’m curious why Go didn’t go with external iterators, where a type satisfying a Next() interface can be passed to range. This is what I think I wanted, but I’m sure there’s a reason why it isn’t like this.
This is explained extensively in the original proposal. Among the reasons given there:

Cleanup: an internal iterator can release its resources itself with defer, while with external iteration you’d have to instantiate the iterator, defer a cleanup function, then actually do the iteration.

External iteration would also make range’s desugaring more complex, as it would need to perform introspection of user-defined method sets, rather than direct dispatch on builtin types (this overlaps with issue 1).

Internal iteration, while less flexible, is also much easier to optimise, especially when not doing much optimisation at all: you inline functions and end up with basically merged bodies, which you can optimise relatively easily.

In fact, while Rust originally used internal iterators and switched to external for a multitude of reasons back in 2013, it later reintroduced opt-in internal iterators via try_fold, specifically for the purpose of optimisation: https://medium.com/@veedrac/rust-is-slow-and-i-am-the-cure-32facc0fdcb

I think one area where the proposal and the discussion around it did themselves some damage is that “parameterized” iterators are all implemented using a closure pattern. But in reality you could just as easily implement an ancillary iterator object storing the parameters, which would then expose an iterator method. It’s a bit longer (as you need to create the bearing struct) but it avoids nesting and provides for more local reasoning.
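To make the internal-vs-external distinction concrete, a minimal Go sketch (hypothetical Evens sequence; the range-over-func form requires Go 1.23+):

```go
package main

import "fmt"

// Internal ("push") iterator in the range-over-func style:
// the sequence drives the loop by calling yield.
func Evens(limit int) func(yield func(int) bool) {
	return func(yield func(int) bool) {
		for i := 0; i < limit; i += 2 {
			if !yield(i) {
				return // the caller broke out of the loop
			}
		}
	}
}

// External ("pull") iterator: the caller drives by calling Next.
type EvenIter struct{ i, limit int }

func (it *EvenIter) Next() (int, bool) {
	if it.i >= it.limit {
		return 0, false
	}
	v := it.i
	it.i += 2
	return v, true
}

func main() {
	for v := range Evens(10) { // internal: range over a func (Go 1.23+)
		fmt.Println(v)
	}

	it := &EvenIter{limit: 10} // external: explicit iterator object
	for v, ok := it.Next(); ok; v, ok = it.Next() {
		fmt.Println(v)
	}
}
```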
Thank you for the explanation and the links!
https://research.swtch.com/pcdata explains the motivation for the coroutine-style approach.
I appreciate the gigantic amount of research and work Russ Cox did over the last two years to add iterators to Go. But it would have been much better for the Go ecosystem if he had spent these efforts on optimizing hot loops instead. Simply admit that adding generics and iterators to Go was a mistake and start working on optimizing hot loops.
Thanks to Russ Cox, Go has a sane module management system (aka go mod), which works great most of the time. Thanks to Russ, we have RE2 regular expressions based on deterministic finite automata (DFA). (BTW, it would be great to optimize the regexp package in Go, since it is quite slow compared to competitors.)
That’s literally just like your opinion.
The latter is a natural consequence of the former. The draw of DFAs is that they have very deterministic performance (unless you explode the DFA size).
The cool thing about that is… it probably does not require complicated language-level changes and buy-in, so you could work on that.
All your considerations seem to be rooted in the idea of Go being a high-performance language, but it’s never been that. It’s always been a “probably fast enough” language. And it’s notably always put compilation speed first, which complex optimisations like auto-vectorisation definitely go against.
A DFA must be faster than an NFA all the time. The issue is that the regexp package in Go is written in an unoptimized way. That was OK for the initial implementation added to the standard library before the first Go release, but it isn’t OK now, since the regexp package is actively used by Go applications.
You’re right, the difference I meant to make is FSM versus backtracking.
The latter is subject to more edge case failures, notably exponential backtracking, but there are also relatively common cases where a backtracking engine will generally beat an FSM.
In my experience, FSM engines also suffer greatly as soon as captures are involved, much more so than backtracking engines; I’ve observed this repeatedly with both re2 and Rust’s regex.
With the backwards compatibility guarantee in mind, there are only two options: development stops, or features are added. Therefore, there is no right direction to evolve in, according to the author.
I was skeptical of generics, but then I did a project where they saved me a ton of code and simplified things tremendously. I am skeptical of the iterators too… We’ll see.
Could you share some details about the code where generics helped simplifying things?
The slices package is a great one. Everything in there was previously done either with error-prone copy&pasting, codegen, or unsafe functions.
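For instance, a few of the generic helpers there replace what used to be per-type copies (a small sketch):

```go
package main

import (
	"fmt"
	"slices"
)

func main() {
	xs := []int{3, 1, 2}

	// One generic implementation instead of per-type helpers or codegen:
	slices.Sort(xs)
	fmt.Println(xs)                     // [1 2 3]
	fmt.Println(slices.Contains(xs, 2)) // true
	fmt.Println(slices.Max(xs))         // 3
}
```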
I was working with a big and complex graph model that contained over a hundred types of entities. For reasons that are difficult to explain here, I wanted to implement them all as different Go types and have the option to instantiate a full set of CRUD endpoints for a type with a single line of code. This could likely be achieved with interfaces alone, but now I can do it simply, like this:
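A minimal sketch of what this pattern can look like, with an entirely hypothetical NewEntityAPI body:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type Thing struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// NewEntityAPI returns an http.Handler exposing GET and POST for T.
// This body is a hypothetical stand-in for the real implementation.
func NewEntityAPI[T any]() http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodGet:
			var x T // stand-in for a real lookup
			json.NewEncoder(w).Encode(&x)
		case http.MethodPost:
			var x T
			if err := json.NewDecoder(r.Body).Decode(&x); err != nil {
				http.Error(w, err.Error(), http.StatusBadRequest)
				return
			}
			json.NewEncoder(w).Encode(&x)
		default:
			http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
		}
	})
}

func main() {
	// One line of code per entity type:
	http.Handle("/things", NewEntityAPI[Thing]())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```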
And I can GET and POST right away at /things.

So the NewEntityAPI function instantiates the Thing in some trivial way, such as var x Thing; return &x. While this saves a single line of boilerplate code per NewEntityAPI call, it doesn’t look like a significant improvement worth adding generics to Go.

If the Thing implements some interface such as http.Handler, then pure interface-based code would be simpler to work with than the generics-based code.

If you have hundreds of different types with different functionality, this means you already have a lot of custom non-generic Go code implementing all this functionality. Saving a line of code per NewEntityAPI call with generics doesn’t look like an improvement in this case.

Not really. I just checked: Entity API is a type that, combined with its methods, runs a little over 350 lines.
When reviewing code, it is better to see the actual code and understand its goals. I cannot share that here, but if you are really interested, I am happy to set up a call and walk you through it.
The right direction is to improve Go’s existing strengths:
Another area that would be worth exploring would be the “deprecation of stuff”.
There is already a way to mark some code as deprecated, but I think the tooling around could be improved (especially if a replacement is available). https://github.com/golang/go/issues/50847
Having a way to ensure that new development does not use deprecated stuff would be very valuable, I think.
I thought we were talking mainly about the language specification and the standard library. Even if the new feature is much simpler, adding it while keeping support for the old way increases complexity. As explained with the iterator example:
I don’t think that holds much water. Previously if you used something which should be iterable you’d go to the docs thinking “now does this thing implement some form of iteration and if so how”, hunting for something looking vaguely like what you want with no idea as to its shape. With rangefuncs, your first thought should just be to look for that, and I guess slices as a fallback. And to a reader, iteration looks uniform instead of needing to recognise the iteration pattern for that specific API.
The argument against improving iterators because they’ll have to carry the older ones forever is uncomfortable for me. It seems to reduce to “get it right first time”. Perhaps they need a stronger way of signalling “never use this function” in the API and documentation?
What’s different is that the for loop version supports continue, break, goto, defer, and return, while the first version doesn’t. You even mentioned that in the post. Yet you provide an example utilizing none of these features, one that would’ve shown the complexity of just using closures.
Now try debugging both versions of the code with control flow statements. For example, if the for ... range loop contains a return something statement, that statement is implicitly converted into a non-trivial return from an implicitly created anonymous function, which implicitly passes a mustStop flag together with the returned values to the implicitly created loop body, which returns the actual values to the caller of the outer function containing the for ... range loop. Sounds easy to track and debug, doesn’t it? :)

I am curious, do you have the same debuggability issues with defer? I think it used to be implemented in a similar way, by wrapping code into a closure which is pushed onto the runtime stack of closures to be executed on function exit.
That is, the level of semantic and implementation indirectness seems pretty similar between defer and the new for.
Defer doesn’t create implicit functions. It just calls the registered explicit function before returning from the current function.
Looking at the source code, it seems like “calling an implicitly created closure” is exactly what it is doing, though?

That is, if in the source code you write defer foo(1 + 2), what actually gets compiled is roughly tmp := 1 + 2; defer func() { foo(tmp) }(). That outer func is not something you write in the source code, but rather something that the compiler adds for you. This feels similar in shape to the transformation that for does.

I think it’s okay for languages to evolve features that you don’t need or won’t use. And I think we should allow for that. In the past, PLs were seen as toolboxes for problems, where PL features are included so that use of a feature would be reasonably standardized, warts and all. Every codebase of sufficient size ends up developing a mini-dialect of its own. It’s part of the inessential complexity of programming in the large. No PL can absorb all of it well. Even C, as bare as it is, suffers from this, mostly in the form of macro abuse. Lisp-like languages are just 100% honest about it and grant maximum power.
Because of that, I’m sympathetic to the slow accretion of features in even minimal PLs, drawing from common idioms and pain points.
After the initial success of the design, the language is now being used in situations not envisioned by its creators. This happens when you have a large and heterogeneous community, a consequence of success. I think the average Go developer is discovering they need OCaml, but their brains and their companies are not prepared for such a transition; therefore it is better to push Go in the OCaml direction.
It is better to switch to OCaml now instead of demanding some esoteric feature in Go, waiting five years until it is implemented in a partial, hard-to-use form, and then waiting another ten years until the feature becomes useful in Go (by which time Go will be more complex than C++).
Giving developers the tools to opt into and use features and abstractions isn’t a bad thing. If you end up creating a mess of a codebase because of additional language features that’s your problem.
We’re talking about table-stakes features for every modern programming language here.
Then try explaining the need for C++ style guides like this one. It significantly limits the usage of shiny C++ features.
Using C++ is a great argument against scope creep (languages exist on a complexity spectrum). But IMO this isn’t really a strong argument against adopting something like generics; this attitude just strikes me as fear of complexity to the point of hindering the language.
Generics allow for (cleanly implemented) containers, and containers are pretty damn practical.
Containers are practical in C++ and Java. Containers aren’t needed in Go most of the time thanks to slices and maps. These dead simple built-in data structures cover the majority of cases where you’d use some non-trivial data structures in C++ or Java.
I’ve been writing non-CRUD Go code for 12 years (see my profile at GitHub) and have almost never used generic data structures in Go other than the built-in slices and maps. The only exception is container/heap, which is very useful for implementing n-way merge.
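For example, an n-way merge over sorted inputs with container/heap can be sketched like this (all names hypothetical):

```go
package main

import (
	"container/heap"
	"fmt"
)

// head tracks the current element of one input slice.
type head struct {
	value int
	src   int // which input slice
	idx   int // position within that slice
}

type headHeap []head

func (h headHeap) Len() int           { return len(h) }
func (h headHeap) Less(i, j int) bool { return h[i].value < h[j].value }
func (h headHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *headHeap) Push(x any)        { *h = append(*h, x.(head)) }
func (h *headHeap) Pop() any {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// merge performs an n-way merge of already-sorted slices.
func merge(inputs [][]int) []int {
	h := &headHeap{}
	for i, in := range inputs {
		if len(in) > 0 {
			heap.Push(h, head{value: in[0], src: i})
		}
	}
	var out []int
	for h.Len() > 0 {
		top := heap.Pop(h).(head)
		out = append(out, top.value)
		if next := top.idx + 1; next < len(inputs[top.src]) {
			heap.Push(h, head{value: inputs[top.src][next], src: top.src, idx: next})
		}
	}
	return out
}

func main() {
	fmt.Println(merge([][]int{{1, 4, 9}, {2, 3, 8}, {5, 7}}))
	// [1 2 3 4 5 7 8 9]
}
```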
Of course, I create various custom data structures and containers in my Go programs. But these data structures and containers aren’t generic - they solve the given task in the most simple and efficient way, without the unnecessary abstractions and bloat that generic solutions usually require.
My reservations about Go aside, it’s a good point. For a language whose mission is to oversimplify, these examples of evolution seem to go against that mission.
It would be cool if Go decided to change its mission. But to do that really well, you’d have to break backward compatibility…