Portability has long been a cornerstone of Unix, but interestingly enough for different reasons than the ones I care about. In the book The Unix Philosophy (1994), Mike Gancarz argues that hardware keeps improving at such a rapid pace that it makes little sense to try to optimize software for existing hardware. Instead, it is better to upgrade to newer hardware when it becomes available and to ensure the software is portable. So the performance hit of portability is worth it, and optimizing for specific hardware is not.
In a world where Moore’s law and Dennard scaling have basically ended, specialized coprocessors keep proliferating, and most systems are consolidating onto a few architectures that are markedly similar to each other, I don’t think this holds true.
Dennard scaling ended, Moore’s law didn’t. The number of transistors per chip is still increasing and, importantly, the cost for the same number of transistors is dropping.
If anything, the end of Dennard scaling makes this point more important. You don’t get a new computer after 18 months that’s the same but faster. If you want to take advantage of the biggest performance improvements, you need to interact with specialised accelerators. The end of Dennard scaling means that it’s cheap to put rarely-used functional units on a chip. Your power budget may let you light up a quarter of the chip at a time, but you can have different quarters and tune each for different kinds of workloads.
IMHO the principle of preferring fully portable software over hardware-optimised software is limiting and, to some degree, self-fulfilling. It makes sense for the kind of “classic” PDP-11-era software that much of the high-Unix-age material talks about – text-processing software with limited (or trivial) parallelism, intended primarily for interactive terminals and maybe networked display servers with limited bandwidth, because that’s what Unix settled for.
As long as you stick to that, it still works very well; but that’s also the only kind of software that it works for, so if you stick to that, you just end up writing PDP-11 software for increasingly faster PDP-11s.
(That’s why it was a particularly good idea in that technological age, as you’ve mentioned. The rate at which we crank out increasingly faster PDP-11s is nowhere near what it was in 1994. That quote is very much a product of its own age.)
Nothing wrong with writing software for increasingly faster PDP-11s per se, but it’s important to remember that it’s not the One True Way to program. There are lots of interesting things to do with computers that you can’t really do if you stick to that.
However, I don’t think that’s really representative of NetBSD’s approach to portability. NetBSD has (had? It’s been ages since I last touched it…) a good abstraction layer for drivers. The “standard” way to approach portability would be to have well-defined interfaces for coprocessors, and have portable, generic code delegate computationally intensive tasks to them (obviously, in the form of non-portable code). That’s a solid model, but it’s certainly not specific to NetBSD; it’s how things have been done pretty much everywhere for 25 years now. It was less common for personal computing in 1994, I guess, but *checks calendar* oh shit…
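To make that concrete, here’s the shape I have in mind (just a rough sketch in Rust, with made-up names, not NetBSD’s actual driver interfaces): the generic code only ever talks to a well-defined interface, and the non-portable, hardware-specific part lives behind it.

```rust
// Rough sketch of "portable front, non-portable back". The generic code only
// depends on the trait; the hardware-specific implementation hides behind it.
// All names here are invented for illustration, not any real NetBSD/driver API.

/// Well-defined interface for a computationally heavy operation.
trait DotProduct {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32;
}

/// Portable, generic implementation: runs everywhere, optimised for nothing.
struct Portable;

impl DotProduct for Portable {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

/// Non-portable implementation that would hand the work to a vector unit or
/// accelerator; it is only compiled on the platforms that have one.
#[cfg(target_arch = "x86_64")]
struct Avx2;

#[cfg(target_arch = "x86_64")]
impl DotProduct for Avx2 {
    fn dot(&self, a: &[f32], b: &[f32]) -> f32 {
        // Real code would use std::arch intrinsics here; elided for brevity.
        // The point is only that this impl is allowed to be non-portable.
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }
}

/// The portable caller picks a backend once and never looks past the trait.
fn backend() -> Box<dyn DotProduct> {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx2") {
            return Box::new(Avx2);
        }
    }
    Box::new(Portable)
}

fn main() {
    let (a, b) = (vec![1.0_f32; 1024], vec![2.0_f32; 1024]);
    println!("dot = {}", backend().dot(&a, &b));
}
```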
The post makes it sound like Linux users are prepared to merge ugly code because it performs better. But in many cases, it should be possible to have both.
Most new software is not written because the hardware changed; it’s written because some component in the stack changed, requiring a rewrite.
An example is Rust. Surely you could argue that Rust code is higher quality on average, since lifetime invariants are embedded in the code instead of being implicit as in C. So if NetBSD values code quality, shouldn’t they be migrating to Rust? Are they? I suspect they’re not, at least not to the same degree as, e.g., the average Linux user. Note how this is orthogonal to hardware changes.
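To illustrate what I mean by “embedded in the code” (a toy sketch, nothing from NetBSD or the post): in C, the rule “the returned pointer is only valid while the buffer is” lives in a comment at best; in Rust the signature carries it and the compiler enforces it.

```rust
// Toy sketch of the lifetime point. In C, "the returned pointer is only valid
// as long as `buf` is" lives in a comment, if anywhere; here the signature
// encodes it and the compiler rejects callers that break it.

/// Returns the first whitespace-separated token of `buf`.
/// The `'a` ties the returned slice to the buffer it borrows from.
fn first_token<'a>(buf: &'a str) -> &'a str {
    buf.split_whitespace().next().unwrap_or("")
}

fn main() {
    let token;
    {
        let line = String::from("netbsd 10.0 amd64");
        token = first_token(&line);
        println!("{token}"); // fine: `line` is still alive here
    }
    // println!("{token}"); // rejected: `line` does not live long enough
}
```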
Another example is systemd. I suppose the NetBSD community would argue that systemd is too monolithic, too large, or otherwise unnecessary. But the systemd community would surely argue that it provides correctness guarantees that a shell-based init system does not. I wish this post offered concrete arguments on this front too.
There is way more to code quality than lifetime invariants. To me, how the program is structured and architected can be much more important to quality (as always in computer science, it depends).
So just saying “if you really care about quality you should use Rust” feels a bit reductive.
You’re totally right. I just think it warrants a mention. NetBSD could say that they can’t use Rust because it isn’t stable enough, and stability is something they value too. I just thought the post was excessively dismissive of Linux without explaining how it is a trade-off.
Hell yeah.