Not really. PostmarketOS has put in the work to get it functional, but AFAIK it’s not being upstreamed because Alpine is not interested.
See https://wiki.postmarketos.org/wiki/Systemd and relevant entries in the blog.
Well….
Grub reimplements several filesystems (btrfs, xfs, ext* etcetcetc), image formats, encryption schemes and misc utilities.
A good example is how LUKS2 support in Grub doesn’t cover things like the different Argon2 key-derivation variants.
Or… You could not. If you focus on EFI you could use an EFI stub to boot your kernel. Or boot a UKI directly. None of it needs systemd.
Booting directly into an UKI/EFI binary is not particularly user-friendly for most users, which is the missing context of the quote. The statement isn’t about needing systemd, it’s about offering a replacement for grub.
Booting directly into an UKI/EFI binary is not particularly user-friendly for most users
I’d say it’s the opposite. An average user doesn’t need to pick a kernel to boot. They use the kernel provided by their distribution and don’t want to see any of the complexities before they can get to the browser/email/office suite/games. It’s us nerds who want to build our own kernels and pick one on every boot. The supermajority of users don’t need multiple kernels. They don’t know what a kernel is, don’t need to know, and don’t want to know. They don’t want to know the difference between a bootloader and a kernel. They don’t care to know what a UKI even is. They’re happy if it works and doesn’t bother them with cryptic text on screen changing too fast to read even a word of it. From this perspective (the average user’s point of view) a UKI/EFI stub is pretty good as long as it works.
If you want to provide reliable systems for users you need to provide rollback support for updates. For this to work you do need to have an intermediate loader. This is further complicated by Secure Boot, which needs to be supported on modern systems.
I’d say it’s the opposite. An average user doesn’t need to pick a kernel to boot.
sd-boot works just fine by skipping the menu entirely unless prompted, ensuring that the common use case never needs to interact with, or even see, any menu at boot.
Isn’t this built into UEFI? EFI provides a boot order and a separate BootNext entry. On upgrade the system can set BootNext to the new UKI, and on a successful boot into it the system can permanently add it to the front of the boot order list.
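Roughly the flow I have in mind, as a sketch against efibootmgr (entry numbers are made up; a real updater would parse `efibootmgr -v` to find them):

```python
# Hypothetical A/B UKI update flow built on UEFI's BootNext/BootOrder.
# Entry IDs 0001/0002 are placeholders for two installed UKIs.
import subprocess

KNOWN_GOOD = "0001"   # UKI that booted successfully last time
CANDIDATE = "0002"    # freshly installed UKI

def efibootmgr(*args):
    subprocess.run(["efibootmgr", *args], check=True)

def stage_update():
    # Boot the candidate exactly once; if that boot never completes,
    # the firmware falls back to BootOrder, which still starts with
    # the known-good UKI.
    efibootmgr("--bootnext", CANDIDATE)

def commit_update():
    # Called after the candidate booted successfully: promote it to
    # the front of the permanent boot order.
    efibootmgr("--bootorder", f"{CANDIDATE},{KNOWN_GOOD}")
```

Whether every firmware honours this reliably is another question, as the replies below point out.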
This is further complicated by Secure Boot, which needs to be supported on modern systems.
I’m not certain what you’re referring to here. If you say that this makes new kernel installation a bit more complicated than plopping a UKI built on CI into the EFI partition, then sure, but this is a distro issue, not a UEFI/UKI issue. Either way it should be transparent for users, especially for the ones who don’t care about any of this.
On upgrade the system can set BootNext to the new UKI, and on a successful boot into it the system can permanently add it to the front of the boot order list.
UEFI is not terribly reliable and this is all bad UX. You really want a bootloader in between here.
This allows you to do a bit more fine-grained fallback booting via the Boot Loader Specification, which is implemented by sd-boot and partially implemented by grub.
I’m not certain what you’re referring to here. If you say that this makes new kernel installation a bit more complicated than plopping a UKI built on CI into the EFI partition, then sure, but this is a distro issue, not a UEFI/UKI issue. Either way it should be transparent for users, especially for the ones who don’t care about any of this.
UKIs are not going to be directly signed for Secure Boot; this is primarily handled by shim. This means that we are forced through a loop where shim is always booted first, which then boots your-entry.efi.
However, this makes the entire boot mechanism of UEFI moot as you are always going to have the same entry on all boots.
UEFI is not terribly reliable and this is all bad UX.
I used to boot my machines with just the kernel as EFI stub and encountered so many issues with implementations that I switched to sd-boot.[^1]
Things such as:
Forgetting boot entries whilst not having any mechanism to add them back.
Not storing arguments.
Forgetting arguments if the list order was changed.
Forgetting arguments on EFI upgrades.
Deduplicating logic that could only store one entry per EFI partition.
I’m surprised any of them did anything more than boot “Bootmgfw.efi” since that seems to be all they’re tested with.
Fortunately, nowadays you can store the arguments in a UKI (and the Discoverable Partitions Specification eliminates many of those issues). The rest was still terrible UX if I got anywhere off the rails.
[^1]: Even on SBCs, Das U-Boot implements enough of EFI to use sd-boot, which is a far better experience than the standard way of booting and upgrading kernels on most SBCs.
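For reference, baking the command line (and initrd) into a UKI is close to a one-liner with systemd’s ukify these days. A rough sketch, with placeholder paths (on some distros the tool lives at /usr/lib/systemd/ukify rather than on PATH):

```python
# Bundle kernel, initrd and cmdline into a single EFI binary that can be
# signed and booted as one unit. Paths are placeholders; needs a recent
# systemd (ukify with the "build" verb).
import subprocess

subprocess.run([
    "ukify", "build",
    "--linux", "/boot/vmlinuz-6.6.0",           # placeholder kernel
    "--initrd", "/boot/initrd.img-6.6.0",       # placeholder initrd
    "--cmdline", "root=UUID=placeholder rw quiet",
    "--output", "/boot/efi/EFI/Linux/myos.efi", # placeholder ESP path
], check=True)
```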
I love fish…except that it doesn’t have ctrl-O (execute line then select the next command from history) and I don’t know what the workaround is. I switched anyway, but every time I have to do something repeatedly that takes more than one command, I feel like I must be missing something. (Am I?) I have ctrl-R, blah, ctrl-O ctrl-O ctrl-O in my fingers from years of bash/zsh.
In bash/zsh, after you find a command in history with c-R, you can go up and down in history with c-P and c-N, and you can execute a command and immediately select the following command with c-O. So if you have a sequence of commands to repeat, you c-R back to the first one, then repeatedly do c-O to execute the sequence. (This is assuming emacs keybindings.)
Fuzzy history search in fish doesn’t seem to be related to the chronological history list. It just pulls out a single command with no context.
So I’m hoping that, now that c-R is incremental in fish, it can do the bash/zsh thing, but I haven’t looked at the beta yet.
I’m glad I’m not alone in this! It’s extremely rare to see someone mentioning this feature.
In zsh it’s also possible to select multiple completion candidates with Ctrl+o. I miss it even more than executing multiple history lines. There is an open issue about this, but it’s pretty much dead. https://github.com/fish-shell/fish-shell/issues/1898
OpenBSD isn’t even really supposed to be a desktop OS. I’d say it’s more like router firmware. I’m always shocked when someone actually implies they do or have been using it as a desktop OS.
And yes, I know there’s going to be someone who insists they also use it. I’ve also seen people try to use Windows XP x64 Edition well into 2014. Trust me, I have seen no shortage of questionable life choices.
The author of this was previously on the OpenBSD development team. OpenBSD devs tend to dogfood their own OS, so of course she would have used it as a desktop.
This isn’t really true. A few porters do huge amounts of work to keep (among other things) KDE and Chromium and Firefox available for OpenBSD users, and not insignificant work goes into making the base system work (more or less) on a decent variety of laptops. It’s less compatible than Linux but for a project with orders of magnitude less in resources than Linux it does pretty good. But I guess we’ve finally reached the Year of the Linux Desktop if we’re now being shocked that someone would have a BSD desktop.
I would say that the vast majority of OpenBSD developers are using it as their primary OS on a desktop or laptop. I am shocked (well not really anymore, but saddened) that developers of other large mature operating systems don’t use it as their primary OS. If you’re not using it every day, how do you find the pain points and make sure it works well for others?
We have reasonably up-to-date packages of the entire Gnome, KDE, Xfce, Mate, and probably other smaller desktop environments. We have very up-to-date packages of Chrome and Firefox that we’ve hardened. The portable parts of Wayland have made (or are making) their way into the ports tree. None of this would be available if there weren’t a bunch of people using it on their desktop.
Why?
It comes with an X server, an incredible array of software, both GUI and terminal based applications that I can install. For my needs OpenBSD is a very capable desktop, and more responsive and flexible than the Windows desktop that work gives me.
I have grievances against the OpenBSD file system. Every time OpenBSD crashes, and it happens very often for me when using it as a desktop, it ends with corrupted or lost files. This is just not something I can accept.
!!!
For comparison, ext3, with journaling, was merged into Linux mainline in 2001.
I developed embedded devices whose bootloader read and wrote ext4 files, including symlinks. There is really no excuse not to have a journaling file system on a system that’s larger than a fingernail.
For OpenBSD it may be as similar reason as why they got rid of Bluetooth. Nobody was maintaining that code. It got old, stale. So they got rid of it. I generally actually like this approach, sadly you lose functionality, but it keeps the entire codebase “clean” and maintained.
My guess is that they need people to work on the FS issue. Or, rather, they don’t have anyone who cares enough about it onboard to actually write the code in a way that is conducive to OpenBSD’s “style”. Could be wrong, but that’s my assumption.
The pragmatic thing to do would be to grab WAPBL from NetBSD since NetBSD and OpenBSD are still relative kin. Kirk McKusick still maintains UFS on FreeBSD so the SU+J works well there but it would be a lot of work to pull up OpenBSD’s UFS.
IIRC WAPBL still has some issues and is not enabled by default.
Its primary purpose is not to make the filesystem more robust but rather to offer faster performance.
I’ve never experimented with SU+J, but I’d like to hear more feedback on it (:
I’m not sure how deep WAPBL goes, but a journal helps you to close some corruption events on an otherwise non-atomic FS by being more strict with the sync and flush events and without killing performance. You can also journal data, which is an advantage of taking this approach, although I don’t know that WAPBL offers this currently. Empirically NetBSD UFS+WAPBL seems fairly reliable in my use.
SU+J orders operations in an ingenious way to avoid the same issues and make metadata atomic at the expense of code complexity, the J is just for pending unlinks which otherwise have to be garbage collected by fsck. A large, well known video streamer uses SU+J so it is well supported. Empirically SU+J is a little slower than other journaling filesystems but this might be as much implementation and not algorithm.
Nice, I had not seen that thread. I think the data journaling they are discussing would be important for OpenBSD as, for instance, on FreeBSD UFS is used in specific scenarios like embedded devices or fail-in-place content servers and ZFS is used anywhere data integrity is paramount. WAPBL was created in response to the complexity of SU+J, which it seems OpenBSD was also impacted by. For that and easier code sharing I would be inclined to go the WAPBL direction, but there may be other merits to the SU+J direction in terms of syncing FFS and UFS against FreeBSD.
This is really surprising. I thought this was one of those instances of OP “handling things wrong”, but it actually doesn’t seem like OpenBSD natively supports anything other than FFS and FFS2 (and without soft updates, like above).
C sucks, but until I have a working Rust compiler and the time to rewrite a few tens of thousands LoC in Rust, I’m still going to be interested in what’s going on in the C ecosystem.
There are some things that need to be written in a low-level language and so C or C++ (or assembly) remains the best choice. They implement core parts of the abstract machines of higher-level languages and so are intrinsically unsafe.
There are some things that were written in C a long time ago and so would be expensive to rewrite and, because they’ve had a lot of in-production testing, a rewrite would probably introduce a lot of bugs and need more testing.
And then there are new projects, doing purely application programming tasks, written in C. And these are the ones where I wish people would just stop. Even on resource-constrained systems, C++ is as fast as C and lets you build safe abstractions. For larger targets, there are a lot of better choices (often Rust is not one of them: don’t do application programming in a systems language).
The fun you can have with C is the kind that leaves you wanting to take some strong antibiotics when you wake up because you don’t know what you caught.
The fun you can have with Rust is the kind that leaves you wanting to take painkillers because you had to listen to a fanboy ramble about memory and thread safety all night.
Everything was better in the old days. We should Make Programming Great Again. Where are the COBOL people when you need them?
I enjoy programming. It is still great in my opinion. Lots of good programming literature, languages and libraries are free and open source. What’s not to love?
Eh, I think they’re cute. It’s a shame when C codebases are deployed in any serious context, but I’m always up for curiosity languages/libraries, especially for preserving older computer architectures. We gotta keep this stuff alive to remember where we came from.
Yeah yeah we’ve all seen the white house report. I can’t remember exactly how their guidance was worded, but I would assume it’s more along the lines of “whenever possible, in contexts where cybersecurity is a concern, prefer memory-safe languages such as Rust or Java over non-memory-safe languages such as C or C++” which is actually very reasonable and difficult to argue against.
I think there are contexts which qualify as “serious”, but where cybersecurity isn’t a concern. Also, IIRC there are some (quite esoteric, to be fair) soundness holes in Rust’s borrow checker (I think there’s a GitHub issue for one that’s been open for like 5 years and they haven’t decided on the best way to resolve it yet). Furthermore, rustc hasn’t been formally verified, and then of course there’s always unsafe code where memory safety bugs are still possible. So I think through use of sanitizer instrumentation, testing, fuzzing, etc. you can get a C project to the point where the likelihood of it having a memory safety bug is on par with what Rust can give you. Take SQLite for example—personally I would have as much faith in SQLite’s safety as I would in any other database implementation, regardless of language.
I didn’t really intend to make this some kind of language debate, but if you insist:
So I think through use of sanitizer instrumentation, testing, fuzzing, etc. you can get a C project to the point where the likelihood of it having a memory safety bug is on par with what Rust can give you.
So you mean… just use Rust. Because Rust is essentially C with all that stuff baked in. That’s kind of the point of it. It’s C with dedicated syntax for verification tools (the ones it’s built with) to verify it, along with some OCaml control flow structures to make code more streamlined. You really can just kind of think of Rust as being C++ 2 (Not C++++ because that’s C# (that’s the pun of the name, if you never knew)).
And I’m not sure why people are still doing this “well the compiler might have a bug or two and you can manually mark things as unsafe so that means its as insecure as C” reach, it really doesn’t make any sense and nobody is accepting that as a valid argument.
I really appreciate this. I am missing one point that’s important to me: what will Ladybird look like from the security perspective?
The web these days is just outright dangerous and browsers are used for nearly everything. ~awesomekling, can you point to a design sheet or some thoughts on how the browser will be secured?
It was also the day after FreeBSD 13.2 went EOL, but the FreeBSD Security Team decided to provide the back-port anyway (upgrading to 13.3 should be painless, since it has binary backwards compatibility, but people that don’t want to move can still get this update).
Self-hosted VPS from IONOS (ionos.com) in a UK datacenter. Apparently, the UK IP ranges have a good reputation and I never had any issues (compared to IP ranges from Hetzner, which are a near-sure guarantee to end up in spam).
I find it interesting that OpenBSD’s people believe in NOT letting services restart, assuming that a service will only crash if it’s under attack, and a stopped service can’t be exploited.
I wish the daemons I ran were so reliable that failures only happened when under attack.
Restarting a few times is often fine. There are basically three kinds of attacks:
Those that are deterministic. Restarts don’t matter, attacker always wins.
Those that are probabilistic. Some small proportion of victims lose each time, restarts increase this.
Those with a probabilistic exploration phase. Each restart gives the attacker some data that they then use in the next phase.
The last category is interesting and is the basis for things like ASLR: you learn something on the way to the crash, but you can’t then use it after the restart. At least in theory, often there are holes in these things.
Things like the Blind ROP attack need a few thousand restarts on average to succeed. With a 10ms process creation time, that’s a few seconds. The stagefright vulnerability was a problem because Android had only 8 bits of entropy and automatically restarted the service, so just using a constant value for your guess and repeatedly trying would succeed more than half the time with 128 guesses and that took well under a second.
Exponential back-off is often a good strategy for this kind of thing, but if availability is not a key concern then stopping and doing some postmortem analysis is safer.
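As a rough sketch of that policy (the daemon path and the limits are made up):

```python
# Restart a crashing service with exponential back-off, then give up and
# leave the scene intact for postmortem analysis. Purely illustrative.
import subprocess
import time

SERVICE = ["./mydaemon"]   # placeholder command
MAX_RESTARTS = 5
delay = 1.0                # seconds before the first retry

for attempt in range(1, MAX_RESTARTS + 1):
    result = subprocess.run(SERVICE)
    if result.returncode == 0:
        break              # clean exit, nothing to do
    print(f"attempt {attempt}: crashed with exit {result.returncode}, "
          f"retrying in {delay:.0f}s")
    time.sleep(delay)
    delay *= 2             # double the wait between restarts
else:
    print("too many crashes; not restarting, time for postmortem analysis")
```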
It’s also worth noting that, even when not actively attacked, reconnecting clients will often do the same thing repeatedly. If a particular sequence of operations leads to a crash, restarting may still crash immediately when the client that triggered the bug reconnects and does the same thing again.
Please read the linked email from Theo again. He writes:
If software has a bug, you want to fix it. You don’t want to keep running it.
No mention of attacks.
Bugs can and will happen. If you auto-restart services you might miss a bug because the system gives you the feeling everything is alright. I am a BIG fan of fail loud and fail hard. Then you can analyze the situation and fix it.
I don’t believe that it’s a given that these attacks cost money.
Granted, my experience dates back to the mid-00’s (2003-2004) but the majority of DDoS’rs were skiddies with bot-farms.
In fact, I was part of a group of people who would disassemble malware submissions (since those used to be publicly downloadable) and take over botnets via their C&C (usually IRC, sadly). For me (and those people) running the botnet itself was free.. it’s all other people’s machines.
(I should note that I never did anything nefarious after taking over networks, usually we did this to piss off botmasters so we self-destructed the botnet most times)
I think you’re looking at the supply side, rather than the demand side: I think that these days, the resources associated with botnets like these are worth lots of money, and most of the people who assemble them know this and charge handsomely for their use.
Sure, but in practice internet crimes like this tend to go uninvestigated by law enforcement; in the absence of either serious real-world consequences or an incredibly rich and powerful victim it’s probably not on anyone’s radar.
Yes, but we should start taking these seriously and treating such crimes as crimes. If nobody demands investigation, the authorities slack, but they shouldn’t. Who should start caring, if not us?
In past incidents like this at other infrastructure companies it turned out that some specific project was hosting content that some political group found objectionable. So they just turned the DDOS firehose on the whole infrastructure.
A specific example: in 2015 the Chinese government attacked Github with their “Great Cannon”, apparently because they didn’t like a couple of firewall circumvention tools hosted there.
This should be the first argument of this post. Listing that at the end is like apologizing for the rest of the post, like “no no, don’t be afraid!”. Using Firefox is not hardcore activism, there’s no big tradeoff here. Actually most users should feel no difference after switching, except they won’t have their Google avatar constantly reminding them their personal data is being sucked up. Once it’s settled that Firefox is as good as its competitors, the privacy and free-software arguments would be even stronger IMO, because it would seem crazy not to give credit to the awesome piece of free software Firefox is.
Firefox address bar works worse than Chrome’s for instance – Chrome tries to autocomplete entire URLs that are used often, Firefox does that one path segment at a time, which is not what most people want, I assume. Password autofill on Android often doesn’t work. On the desktop, some UI elements are disproportionately large relative to how often they’re used. Chrome’s are more optimized. Firefox is a wonderful project which I’m thankful for, but it lacks some polish.
Chrome tries to autocomplete entire URLs that are used often, Firefox does that one path segment at a time, which is not what most people want, I assume
Oh, that is actually a feature that might make me switch to Firefox! Safari has the same behaviour as Chrome here and it’s really annoying. If I want to go to github.com/org/project, I don’t want it to autocomplete the four GitHub pages I’ve visited most recently, I want it to autocomplete github.com/ and then let me type the first characters of the org, tab, the first characters of the project, tab, and get to the right place. I end up either typing the full URL or bouncing via a search engine in Safari because of this.
Password autofill on Android often doesn’t work
Last time I checked, on macOS it also didn’t use the Keychain (Chrome now can, including for Passkeys). I trust the macOS keychain a lot more than I trust Firefox to be secure. It has some very fine-grained ACLs for which applications can access which passwords.
I trust the macOS keychain a lot more than I trust Firefox to be secure.
Your trust (or distrust) is well placed. Firefox passwords are encrypted with a key derived from a master password, but if the user doesn’t explicitly set one, it’s the empty string. More to the point, it’s a known constant, so anyone with knowledge of the format can reconstruct the key and decrypt the credentials.
There are several open source tools that do this, and I managed to write one myself.
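The core of such a tool is only a few lines against libnss3. A rough sketch via ctypes (Linux, profile path is a placeholder, error handling omitted):

```python
# Demonstrates that an unset master password is just the empty string:
# PK11_CheckUserPassword(slot, "") unlocks the key that protects logins.json.
import ctypes

nss = ctypes.CDLL("libnss3.so")
nss.NSS_Init.argtypes = [ctypes.c_char_p]
nss.PK11_GetInternalKeySlot.restype = ctypes.c_void_p
nss.PK11_CheckUserPassword.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
nss.PK11_FreeSlot.argtypes = [ctypes.c_void_p]

profile = b"sql:/home/user/.mozilla/firefox/xxxxxxxx.default"  # placeholder
assert nss.NSS_Init(profile) == 0          # 0 == SECSuccess

slot = nss.PK11_GetInternalKeySlot()
unlocked = nss.PK11_CheckUserPassword(slot, b"") == 0
print("unlocked with empty master password:", unlocked)

nss.PK11_FreeSlot(slot)
nss.NSS_Shutdown()
```

From there it’s PK11SDR_Decrypt over the base64 blobs in logins.json to recover the plaintext credentials.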
I thought they were going to make the default randomly generated at profile creation time. Have they not done that yet?
I have no idea; my focus was on recovering my own credentials from years ago.
But it wouldn’t matter. You already have a randomly generated salt. If you’re not asking the user to provide the master key, then you have to store it on disk in cleartext. Randomizing the key is just adding more salt.
I don’t recall the randomized salt being used for the key. I thought my program worked without using it, on any nss databases that weren’t password protected…
It’s been a while though. And the thrust of your original comment is obviously correct. I was just surprised because I thought I remembered that this was going to be changed in a way that would make it very modestly better.
The feature I still miss from Firefox since I stopped using it 15 years ago is being able to type “git proj” and have it show the beat match on the completion list. I find the multi-step workflow you mention too slow.
Chrome never matched this fuzzy matching behaviour.
Firefox does support this; some settings may influence it. I just tried ’git’ and the first URL suggested is the proper one.
Maybe it’s some “special” settings I have changed related to search: I have only enabled “Provide search suggestions” but have disabled all 3 options below it in about:preferences#search, and in about:preferences#privacy I have disabled “Search engines”. So when I type something into the URL bar I only get URLs from open tabs, history and bookmarks.
To also narrow down your search to e.g. bookmarks only, type a * first and then your search term. There is also % for open tabs or ^ for history. Settings including additional installed search engines are in about:preferences#search below at “Search Shortcuts”.
I spent a little while puzzling over this, then I guessed: “best match”?
What does “best” mean in this context? I don’t care much about this but several people are comparing Chrome autocomplete and Firefox autocomplete, and I never even noticed a difference in something like 28 years on Mozilla browsers. Can you explain what “the best match” means?
This was a huge deal when it was launched. Firefox awesome bar, they used to call it.
I didn’t explain well… But, you could type words separated by spaces and it would show results that matched those words on both the URL and the title of previously visited pages. I have just checked, they removed matching in the title and don’t even show it anymore…. They are pretty much a chrome copycat at this point.
Here is a video showing it. Notice that the title doesn’t show up anymore nowadays. Which is ironic, given Google is trying hard to push users into not being able to interact with URLs directly.
https://www.youtube.com/watch?v=U7stmWKvk64
You can configure Firefox Password Autofill to use a third-party app (at least on mobile: I use my Nextcloud Password app, which is probably less secure than macOS keychain but is probably secure enough, and it works like a charm).
Right, I’ll eventually have to set up a VaultWarden instance. What’s stopping me is I haven’t fully figured out how to do LetsEncrypt TLS on my private server without a DNS domain. It is possible but requires some tinkering.
They have full autocomplete: browser.urlbar.autoFill.adaptiveHistory.enabled. Not sure why they’re holding back making it a default, or even adding it in the settings.
It was on by default on Nightly initially, but there were a lot of complaints (including mine) so it was disabled, I don’t know if it was ever re-enabled by default. It looks like there were several cycles of bug fixing.
Does Chrome still do the thing where URL bar autocompletion includes ads for companies? I remember a couple years ago back when I was using Chrome I typed “j” to go to a bookmark I had created that was literally named “j” and that I visit multiple times a day, but this high-relevance, exact match got ranked lower than “JC Penny”, which I have never once searched for or visited.
It wasn’t even an ad for a product, just a “hey, we thought you might like to search for this corporation?” Utterly baffling.
This is something I have never thought about, so I don’t have a strong opinion on whether Firefox’s or Chrome’s behaviour here is better… but come on. That’s so incredibly minor. Surely nobody is choosing Chrome over Firefox because the URL bar autocomplete functions ever so slightly differently on Firefox. It’s not like Firefox doesn’t suggest entire URLs, you just may need to hit the down arrow to select it slightly more frequently than with Chrome. And when you’re viewing an actual web page, the only two things taking up vertical space are the tab bar and the URL/search/plug-ins/navigation bar, just as with Chrome.
I know folks who have gone back to chrome because of slight differences in the omnibar behaviour. I think they liked that they could easily search a site from the omnibar or something?
I have about:config on Android. I’ve got two Firefox based browsers installed on Android, both from the F-droid app store. They are Fennec and Mull, both have about:config.
Unfortunately, neither one has that specific config key.
I haven’t noticed the segment behavior. However, I am required to use chrome for work and its address bar works SO MUCH worse than firefox’s, especially in terms of autocompleting from browser history - chrome can never find things that I was browsing even a few history entries ago.
Really actually I think Firefox is not actually really worse than the competitor, but the phrase “actually really good” indicates that if I’m not already actually skeptical, I really actually should be before considering Firefox to be good.
most users should feel no difference after switching
I have tried to switch to Firefox many times over the years, but running on Ubuntu, the Firefox UI feels noticeably clunkier and less polished than Chrome. Too much unused whitespace and padding everywhere (the URL box, for example, doesn’t stretch to fill the horizontal space). An inconsistent mix of different font sizes and line heights. It very much has that awkward GTK / Java Swing feel to it, and I find it distracting. I would love to see them address some of these design issues, as it’s the main reason I haven’t switched yet.
Have you not removed those “horizontal flexible spaces” on both sides of the URL box? Use Customize Toolbar to drag & drop the things you want there (and remove the ones you don’t). I always remove those spaces and add two icons for plugins I actively use (Multi-Account Containers and Undo Close Tab). Result: I can type a 3400 pixel long URL – 89% the width of my 4K screen.
While cryptographically novel, the security impact of this attack is fortunately very limited as it only allows deletion of consecutive messages, and deleting most messages at this stage of the protocol prevents user authentication from proceeding and results in a stuck connection.
The most serious identified impact is that it lets a MITM delete the SSH2_MSG_EXT_INFO message sent before authentication starts, allowing the attacker to disable a subset of the keystroke timing obfuscation features introduced in OpenSSH 9.5. There is no other discernible impact to session secrecy or session integrity.
I disagree with the comments about rewriting in Rust. Instead of creating a separate project / branch that is maintained separately for a while, they should be incrementally rewriting functions and structures in Rust. If I were looking at my project and 40-80% of CVEs were directly attributable to a tool or pattern I was using, I would hope that I have the insight to say “this tool sucks”.
Help curl authors do better
We need to make it harder to write bad C code and easier to write correct C code.
Hasn’t this been the goal for decades? How do we actually make it harder to do wrong? Rust’s answer to this question sounds infinitely more appealing than C or C++ (including with all the linters).
they should be incrementally rewriting functions and structures in Rust
I think you and the author are more in agreement than you think. He also says they are not going to rewrite it from scratch, and their “this is what happens” points 1. and 2. are exactly “incrementally rewriting functions and structures in Rust”:
Step 1 and 2 above means that over time, the total amount of executable code in curl gradually can become more and more memory-safe. This development is happening already, just not very fast.
I think it’d be better to say they could be, not “they should be.” As he notes in the post, the current developers aren’t experts on rust or the best folks to lead such development. Also:
“The rewrite-it-in-rust mantra is mostly repeated by rust fans and people who think this is an easy answer to fixing the share of security problems that are due to C mistakes. Typically, the kind who has no desire or plans to participate in said venture.”
If you aren’t volunteering to do the work, it’s best to hypothesize about what a project could do and not what it should do. If you are volunteering to do the work, then have at it and let us know how it goes following your proposal.
I think everyone agrees that memory safety is better than not, but the author highlights that (1) Rust isn’t supported on all targets and (2) they would likely need a new team of maintainers with the experience and interest to rewrite in Rust and also the dedication of the existing team. In particular, the article notes how few of the “rewrite it in Rust” proponents are themselves willing to step up to the plate.
I would say that’s a lot less realistic, curl is a huge piece of software with a large API footprint. And furthermore even less useful: curl is also a massive brand and wide spread utility, its users are unlikely to switch, especially indirect ones (e.g. users of curl because for all intents and purposes it is php’s http client).
Finally, Federico Mena Quintero’s effort on librsvg has conclusively demonstrated that you can progressively migrate a codebase in flight.
I’ve migrated gradually a few codebases, and even wrote a C-to-Rust transpiler, and I don’t think it’s worth converting codebases this way.
This is because C has its own way of architecting programs, and has its own design patterns, which are different than how a Rust-first program would be structured. When you have a C program you don’t even realize how much of it is “C-isms”, but after a gradual conversion you get a very underwhelming codebase with “code smells” from C, which is then merely a first step of a major refactoring to be a Rust-like software.
Rust’s benefits come from more than just the borrow checker. Rust has its own design patterns, many of which are unique to Rust, and often not as flexible as an arbitrary C program, so they’re tricky to retrofit into a C-shaped codebase. A good C program is not a good Rust program.
I wonder how much curl is actually used now. When I first used it, it handled a load of different URL schemes and protocols and reimplementing that was far too much effort. These days, I rarely see it used for anything other than HTTP. I’ve used libfetch instead of libcurl because it’s a tiny fraction of the size and, even then, does more than I need.
The one sentence that everyone who thinks we should rewrite everything in Rust should take away is the following:
Dedicated long-term maintainer internet transfer library teams do not grow on trees.
Starting a rewrite is easy. Maintaining a rewrite over nearly two decades, so that every car/IoT/whatever vendor includes it, is the hard part! It doesn’t matter which language the rewrite is in.
NodeJS was first released in 2009, Python 3.0 came out in December 2008 and Go initially came out in 2009, so a LTS version of an OS from 15 years ago wouldn’t have any of those. Fine, you wouldn’t need them. But a lot of software has started to require those. This might mean that with a security fix on a new version of some software, you might not even be able to build that new version on those old systems, to compare behaviour in a backport, for example.
People already make fun of Debian for being outdated when it comes out, and its regular releases are about 2 years apart. The crazy fast pace of the software industry makes it madness to support anything for more than a few years.
We’ve all gone collectively insane, and the LTS terms just reflect that fact.
802.11n was standardised in late 2009, in 2008 most WiFi would have been 802.11a/b. Newer drivers require new bits of driver stack, so no support for fast WiFi.
TLS 1.2 was just introduced, most things in 2008 were TLS 1.1 or earlier, which has known (exploitable) vulnerabilities. Anything using these would be vulnerable.
The latest C++ standard was C++98, anything using ‘modern’ C++ features (i.e. things that make it plausible to write memory-safe C++ code) depends on at least C++11.
The Core 2 was only two years old. People talked about dual core, not multicore. Anything shipping then was unlikely to get any speedup from parallelism. Apple introduced libdispatch in 2009, most things that used threads in 2008 did so to avoid blocking not for parallel compute. More modern software will often get a speedup from increasing core count but that isn’t true for software from 2008. With dual core, you often still just used one core for your compute-heavy workload and one core for everything else. With 4+ cores, you want to split your compute-heavy load across most of the cores.
SSDs were incredibly expensive. Storage stacks assumed spinning rust and had a lot of bottlenecks that showed up when seek times dropped by a couple of orders of magnitude.
CUDA was barely a year old and unified shader architectures didn’t exist as a concept. The abstract machine exposed by GPUs has completely changed since then. You can still run code that targets those GPUs today but the driver model for newer cards is completely different (kernel bypass is now the default, most of the fixed-function hardware is gone) and so supporting newer hardware or code written to take advantage of them is hard.
HSPA was barely deployed, mobile Internet meant UMTS (or GPRS / EDGE if you had an old / crippled device like an iPhone). The iPhone 3G (with UMTS / HSPA support) shipped that year. Mobile data contracts often had double-digit megabyte monthly data limits. Anything written for these had aggressive data-saving modes that make no sense now and limit functionality.
AArch64 was announced in 2011 and shipping hardware didn’t exist for another year or so. Anything written in 2008 is completely unable to target the most widely deployed architecture in the world.
MD5 was commonly used for digests. In 2005, a practical attack on MD5 was published. Anything using MD5 is vulnerable and so the industry shifted to SHA variants, which included changing message formats and protocols. It wasn’t until the end of 2008 that using MD5 was widely known as bad practice and so a lot of things from 2008 still used MD5.
The whole class of speculative execution vulnerabilities were unknown, nothing from 2008 included mitigations for any of them.
This isn’t just software doing new things gratuitously. New hardware has to conform to constraints inherited from physics and has required new software models at various layers in the stack. If you are happy with what an old computer could do, you don’t need to adapt to these changes, but if you want to be able to take advantage of new functionality then you need to change how the software works.
New attack techniques have required new and different defences. It’s been military doctrine for a hundred years (except, famously, for the French, who subsequently learned) that static defences don’t work. The same is true for computer security: even formal verification can only make you immune to known categories of vulnerability; attackers come up with new ones.
That said, it’s also interesting to see how much this progress has slowed down. 15 years before 2008 took you back to 1993. In that period:
Operating systems that used MMUs for process isolation became mainstream.
64-bit Workstations replaced 32-bit ones. The Alpha was still new in 1993, the first 64-bit SPARCs appeared in that time.
WiFi went from not existing to being fairly common. 802.11a/b were introduced in 1999, WiFi was commonly available in cafes / hotels by 2008 and supported by most laptops (sometimes via an add-on card).
Home broadband Internet became ubiquitous. In 1993, few people had Internet access at home and those that did used a MODEM, with 28.8 Kb/s being the fastest available speed. By 2008, 10 Mb/s connections were common.
The web existed. 1993 was when CERN made the HTTP protocol and code for the web browser and server freely available and the year Mosaic was released (so people who didn’t have a NeXT workstation could use a graphical browser!). By 2008, Google Docs was two years old and delivering desktop-like apps in a browser, eCommerce was a core part of businesses. Amazon had grown to being one of the world’s largest retailers and every supermarket in my area did web-based ordering and delivery.
Mobile phones became ubiquitous. In 1993, GSM was basically voice only. Mobile phones were expensive things for super-rich people and business users. By the late ’90s, a lot of students had them, by 2008 they were ubiquitous and smart phones were actually able to run non-trivial applications. Mobile data was still expensive, but these phones also supported WiFi so were useful for data-driven things even without a data contract.
Encryption went from being a thing that you used for a few high-value assets to ubiquitous. SSH completely replaced telnet, hard disk encryption was available in consumer operating systems, IMAP / POP3 servers all required TLS by 2008 (they’d mostly been unencrypted in 1993, but it was less of an issue because there was usually a single hop from your home computer to your ISP’s mail server). Let’s Encrypt didn’t launch until later but there were services like StartSSL that let you have a free SSL / TLS certificate and so even individuals could use HTTPS for their web servers without paying money. A lot used self-signed certificates before then.
Instant messaging went from nowhere to ubiquitous. ICQ launched in 1996. XMPP, MSN Messenger, Skype, AIM, iMessage, WhatsApp, Signal, and so on were later but now it’s rare to find someone who isn’t on at least one of these services.
Display resolutions and technology increased hugely. In 1993, 640x480 was common, 800x600 was starting to appear. When Windows 95 came out, it recommended 1024x768 but worked with lower resolutions because they weren’t yet ubiquitous. By 2008, I was using a 23” 1920×1200 TFT monitor (which was two years old consumer hardware). The increase over the next 15 years was far less impressive. The shift from CRT to TFT was huge[1], OLEDs are displacing TFTs slightly in a few places but most users probably don’t notice.
GUIs became ubiquitous. In 1993, Windows 3.1 was only a year old. This was the first version of Windows that really got mainstream adoption. Most serious business computers ran WordPerfect or Lotus 123 in DOS. Windows 3.x ran on top of DOS and often you’d quit Windows to run a single DOS application, since many DOS apps didn’t work properly in the Windows DOS box.
Sound became ubiquitous. The PC internal speaker was just about good for beeps. Sound cards like the SoundBlaster were available but were not ubiquitous. By 1997, Intel had defined the AC97 spec that meant rich sound output was a default thing.
RAM increased by orders of magnitude. The computer I had in 1993 had 4 MiB of RAM. The computer I had in 2008 had 4 GiB, a factor of 1024 increase. My new computer bought this year has 96 GiB, a factor of 24 increase (and that’s a lot more than is common, whereas the previous ones were more in line with normal users).
CPU speeds increased hugely. The Pentium was introduced in 1993, running at 60 / 66 MHz. This was a fairly simple dual-issue superscalar in-order architecture, so at most two instructions per clock and usually closer to 1.5. By 2008, the Athlon 64 X2 was out, with an out-of-order microarchitecture, wider issue width, and running at up to 2.8 GHz (a 42x speedup just in clock speed), twice as many cores, bigger caches, SIMD instructions. This was easily 100 times faster than anything available in 1993 for a comparable price. The delta between 2008 and 2023 is far less pronounced, I’d guess closer to a factor of 20 speedup (most of that from more cores, but a reasonable amount from wider superscalar cores).
GPUs went from rare 2D accelerators to ubiquitous programmable things. By 2008, Intel integrated GPUs had vertex and pixel shaders (GPGPU was still fairly novel. I read papers doing general-purpose compute on GPUs in 2005, but they mostly used very expensive GPUs. CUDA was out by 2008 but cards that could run CUDA were expensive; computers capable of running GL / DirectX shaders for graphics were common).
The Multimedia PC standards defined minimum levels from that era. MPC2 was released in 1993 and required a 25 MHz 486, 4 MiB of RAM, and a display that could do 640x480 in 16 colours. The delta between that and MPC3 in 1996 involves more than doubling the specs of most components.
On the other hand, none of the deep learning stuff existed 15 years ago and that’s probably going to involve some interesting shifts in both hardware and software, as things like computer vision and natural-language interfaces become components of consumer systems.
[1] In terms of building planning as well. The William Gates Building in Cambridge was designed on the assumption that every computer scientist would have two CRT monitors on their desk, plus a desktop, so would be generating around 500+W of waste heat. The heating system in the building was specified with that assumption and needed some serious upgrading when the typical usage became a laptop plus an external TFT display.
Kilburn Building at the University of Manchester also has wacky architecture and heating because it was designed expecting to house very different kinds of computers and labs. To my knowledge they have never got the heating to work right.
I’m fairly sure 802.11g was prevalent in 2008 but I might be remembering my experiences wrong, and they might not reflect “most”
Edit to add: parallel Intel CPUs were available if not common back in the 90s; before the core 2 was the core, and there was a quad model; before both of those we had hyperthreading. So thinking in terms of multiple threads was well established.
Awesome response, thanks for that! It’s amazing to see how developments in technology as a whole mostly slowed down as you point out (the improvements from 1993 to 2008 are a lot more impressive than from 2008 to 2023), while at the same time the churn of software seems to have sped up.
I’m not sure the churn has sped up. In the 1993-2008 period, we saw a whole bunch of software changes:
Web views as a way of doing GUIs.
OpenStep was 1992, so just outside that range technically, but the whole MVC thing became mainstream. Win16 apps and Macintosh Toolbox apps were just event loops that drew things.
Win32 was released in 1993 (win32s for Windows 3.1 and NT 3.1), but wasn’t widely used until Windows 95.
The POSIX threading APIs are from POSIX 1997. Prior to that, multithreaded software on *NIX systems all used proprietary (non-portable) threading libraries.
C++ was first standardised in 1998. The Standard Template Library was written in 1994 and many of the ideas from this made it into the C++ standard library.
JavaScript was first released in 1995.
DirectX was released in 1996. Prior to that, games had used device-specific APIs for sound and (if they used them at all) accelerated graphics. DirectX 3 included an immediate-mode API that modelled a fixed-function pipeline. It also included a retained-mode API that gave you a declarative scene graph. The retained mode APIs went away long before 2008 and by 2008 DirectX gave you a programmable pipeline.
QuickDraw (MacOS) in 1993 was the fastest way of rendering graphics in a GUI. It gave you a direct view of the frame buffer for your window, letting you completely bypass the windowing system. By 2008, modern display managers were all compositing, so you’d render to a texture that the window server would then composite on the GPU. By 2008, most windowing systems didn’t give you a way of directly accessing the frame buffer at all.
Related to the previous point, by 2008 graphics memory expanded to the point where it was possible to buffer every window and fast GUI toolkits started to be more about caching and parallel rendering to different textures than about drawing things as fast as possible. The big energy savings on the iPhone came from caching rendered views and not redrawing them, rather than drawing them more efficiently.
In 2008, I wrote a book about Cocoa programming and most of the APIs are the same. SwiftUI is new, but I can write an Objective-C app using it as a reference and it will still act and feel like a modern macOS app. Win32 is mostly the same (though that’s not a good thing - it’s still mostly as bad as it was 30 years ago). POSIX2008 introduced a bunch of new things (including xlocale) and anything written for POSIX2008 is likely to work well on Linux/*BSD/macOS. HTML5 was released in 2008 and is mostly the same - there are new JavaScript APIs, but an HTML5 thing from 2008 will work the same.
It’s easy to cherry pick examples to argue in either direction.
Perhaps I’m thinking more of breakage - in olden days it seems backwards compatibility was more important, whereas it seems nowadays you are expected to keep up and rewrite your application all the time (think frameworks). Although I also remember endless fiddling with DirectX because some games required an older version while others required a newer one, but that could’ve just been DLL hell.
I think that’s very culture dependent. For some of the open source projects that I’ve been involved in, it’s been really important, for example:
FreeBSD: The kernel and userspace ABIs must be backwards compatible within a major release series. If you make a change that breaks an out-of-tree kernel module, that’s your bug. Between major releases, normal userspace software must keep working, but you’re allowed to break control interfaces (e.g. change how network interfaces are configured in the kernel). The command-line tools are expected to have stable output and a lot of them can produce XML / JSON to avoid people needing to parse human-readable output.
GNUstep: Strong ABI guarantees within a major release, aims for source compatibility between them. If something is going away, deprecate it and keep the compatibility implementation as long as you can. The runtime still supports the GCC ABI. I plan on deleting that soon, but it’s been almost 15 years since compiling Objective-C with GCC was a good idea.
In contrast, for some it’s been totally unimportant:
LLVM: Changing every single API between releases is fine. No need to do deprecation. If you break out-of-tree users, well sucks to be them. This is quite problematic because it’s caused a lot of people to try LLVM and give up when their code completely bitrots over time.
The latter seems to be an increasingly prevalent mindset. I used to blame Google for this. They have an in-house monorepo and cloud-scale refactoring tools, so it’s easy for them to change an API and then refactor all users of it for all of their own code. This attitude leaks into external projects where they aren’t in a closed world but still act as if fixing every consumer of an API is trivial and ignore the pain that this causes everyone downstream. Chrome and Android are both notorious for introducing new APIs and then removing them a year later.
Most F/OSS desktop environments never quite managed to build good APIs to start with and so kept fiddling with them. I really wish the GNOME team had picked GNUstep instead of GTK. At the time, they were similar levels of maturity, but GNUstep had APIs that stood the test of time, GTK had ones that needed redesigning twice and are still not great. If GNUstep had had the same investment as GTK, open source DEs would be significantly better than proprietary ones by now.
API design is less valued now than it was, in part because updates are easier. When I first installed Linux, I had to buy a CD and have it shipped to me. Before that, UNIX was distributed on tapes and Visual Studio was sold on floppy disks. Propagating an API change to consumers was a multi-year endeavour and so came with a huge cost if you built APIs wrong to start with: you’d have to support them for at least two years to avoid breaking brand-new software. Now, you can just push a new revision to git and tell everyone to update their submodules and fix the compile errors.
We’ve all gone collectively insane, and the LTS terms just reflect that fact.
This! Staying with OP’s analogy of a hammer: We’re so insane that we invent a new hammer every two weeks. In the end it can only be used for hammering, however, it is now available in a dozen different colors, it was redesigned about 10 times since the original one wasn’t good enough and the manufacturing process was changed at least 2 dozen times since the old one didn’t use the right tools[tm].
Now we have hundreds of different hammers, some small, some big, some half-broken, some still in development stage and do more or less the same thing. And of course, all of them are in use somewhere in the world. Good luck finding someone providing security updates for 15 years for all of them.
I guess this is about computers that control medical devices… where none of that is relevant…
But on the other hand those medical devices require certification that costs a huge amount of money… and if you make any major changes to the software afterwards you need to get a new certification, costing another huge amount of money…
And those medical devices are very expensive because of that, and thus hospitals will keep them in use for a long time… so they need to keep the computers that interface with them around for many years with no major changes in software…
I’m not a huge advocate of LTS (it has its uses) but I don’t agree with these arguments. First of all you’re actually counting the 15 years and not the more realistic target of replacing after 12-13y with a bit of safety margin. Then most of my servers are just like that, but I’m pretty sure if this distro existed I could run a moderately up to date version of dovecot and postfix on it. Or maybe even have Docker.
That’s the horrible thing. I can want a long-lasting LTS for some applications and I can condemn the users of that thing if I want to deploy software that was made in the last n (< LTS) years ;)
First of all you’re actually counting the 15 years and not the more realistic target of replacing after 12-13y with a bit of safety margin.
Do you think something written in Node 0.1 could still run in today’s version of Node? At least with Go, I have a bit more faith that its APIs are stable. And like I said in my post, initial versions of Python 3 had lots of issues that have been fixed over time.
And safety margin, don’t make me laugh - people are still running pre-oldstable Debian servers that have long gone past their supported status. In companies, there’s very often a “if it works, do not touch it” policy. Especially if the person who set the thing up has left the company.
If it’s just a build dependency then you could build on a newer system, though of course you’d need the built binary not to have runtime dependency on anything in the new system, which is a pain to do for some build systems/languages.
I like that some languages do their configuration and build scripts in their own language. If you have that, modern dependency management, and your dependencies are clean then you can easily make fairly reproducible and portable builds with no dependency on the host system. Erlang family languages, Rust and Julia can all do this.
The typical IT person can have a lifelong career without ever even hearing about OpenBSD. One MUST know nothing about it. Get out of your bubble some more!
The typical IT person can have a lifelong career without ever even hearing about OpenBSD
A typical person in any occupation can have a lifelong career without ever knowing about a useful tool. It’s fine to say ‘OpenBSD does not solve any problems I have,’ but being ignorant of it means that you will never even evaluate it.
A lot of IT professionals had happy careers in the late ‘90s and early 2000s, costing their employers huge amounts of money in license fees and compliance, because they knew about Windows and didn’t feel the need to learn about Linux.
At work I notice a lot of clients trying to get rid of their Linux infrastructure. Usually the reasoning seems to be they have people that can admin Windows or i, but not Linux. It seems hiring a Linux person would be easier, but to them consolidation looks that way instead…
It kind of makes sense, doesn’t it? You probably have Windows desktops anyway, so you need staff to do the admin of those, why not make them also maintain the servers? And then of course, corporate software tends to work better with other software of the same supplier, so a Windows server (with RDP, Exchange and what have you) works better for the desktop users due to “integration”. Nevermind the vendor lock-in and exorbitant charges etc etc etc.
The project develops tools that are used by the whole f*cking IT industry worldwide. Such as OpenSSH, LibreSSL or certain libc components that are included in every Android smartphone. Get out of your bubble some more!
I know all that, yet it is somewhat useless information. Is it useful to know how to use OpenSSH? Sure thing, to many people in IT it is. Is it important to know who wrote it? No, it is not. It is irrelevant. Do you remember by heart who wrote all the tools you use daily? I don’t. It is not important to get anything done nor to master these tools.
Do you think the average developer knows who invented git or bash or who wrote gcc or nginx or postgres or react or kubernetes or whatever else? No, they don’t. They don’t care because they do not need to have this information to succeed.
We can all be grateful that these projects exist yet that is not the point. The point is that you don't need to have intricate knowledge of how they came into existence. You can just use them without it and that is fine.
Is this some sort of performance art, or are you just a grumpy old git who repeats themselves? (Not an attack: I am very definitely the latter myself.)
I think that both times you’ve totally missed the point of this article.
Speaking as a writer, that probably means you haven’t read it. I have lost count of the number of times, in the last year-and-a-half as a daily-published writer online, that I get angry comments from people who manifestly have not read the article. Often then they claim that they have, which directly means that they have the reading skills of a 5 or 6 year old… something I find more plausible than their angry claims of comprehension.
What this article says, which you failed to notice both times, is:
If you work in IT, you should know about OpenBSD;
That means: if you know about it, you will be better off;
That means 2 things:
You are using it – as in, you use OpenBSD code – and you ought to know that and be grateful;
You can very probably use the OS yourself and benefit from it;
It then lists a bunch of detailed worked examples of both.
It’s simple, it’s clear, it’s on-point, and there’s really not much to disagree with here.
“People in IT ought to know about it. If they don’t, here is why they should.”
That’s it. It’s not really something amenable to angry denouncement.
This story is a repost, so I don’t see why that’s okay but it’s somehow much worse for @fs111 to have the same opinion about it that they did last time.
If you work in IT, you should know about OpenBSD
Meh. I think I work in IT (whatever that means), I know enough about OpenBSD, and I don’t feel like those two facts have anything to do with each other. OpenSSH is great*, sure, but so is lots of other software people rely on all the time. They don’t need to know who wrote that. The provenance of one of many tools an IT person might use is just not important enough to make anyone better off.
As far as using OpenBSD itself, the article mostly just reads like a personal journey of OpenBSD discovery from one (now) true believer, which is great for them, but it’s not very interesting to me. Cool, it made your third laptop unbootable and you fixed it by disabling some of the laptop’s features and building a custom kernel. If I want to spend my free time breaking my stuff and then fixing it again, I can manage that without OpenBSD.
The pf example seems the most convincing to me (I’m not a fan personally, but trying to put myself in the shoes of someone who’s never heard of it), but it’s one of many in a list of examples that generally seem pretty weak. (For example, as a recovering mail admin, I’m not very impressed by “faster than exim + spamassassin”).
I think this sort of title is par for the course and in a world with plenty of overt clickbait I wouldn’t personally have bothered to complain about it. But it is a silly title, IMO: it suggests the article is going to give the reader some reasons to care about OpenBSD but the content seems more suited to existing OpenBSD fans.
* but it has a lot of the attributes that OpenBSD hates when it’s any other software—namely huge confusing config, code that’s only legitimately used in weird configurations, and features that are hard to implement securely
I am commenting because I disagree with the authoritative title in conjunction with the content presented under that title. Judging from the upvotes, I seem to be not the only one.
If the article had been called “OpenBSD - the story so far” or “What I like about the OpenBSD project” I would not have written the comment.
I think you’re missing the point, of the piece and of the title, and deliberately being hostile and confrontational about it, and I don’t know why.
(Aside: OpenBSD has Theo de Raadt and as such has no need at all of more people to be hostile and confrontational. ;-) )
What your proposed titles mean are:
“I like X and here is why”.
What the actual title means is:
“You already use tools that come from X, and so here is why X itself could be useful to you and save you money.”
That is not the same. It is not being controversial or challenging; it is saying “here is useful knowledge you might not have”. Apparently you take that as an affront, and I call this out both because it seems to me that you are grossly overreacting, and because you have been overreacting in the same way for a year now, without assimilating the knowledge you were offered then.
If someone gets angry when asked “Hey, did you know X?” in 2022, that is an odd and unreasonable response. But if they are asked again a year later, and they are still angry, then that moves beyond odd and into borderline irrational.
There is something refreshing about doing git clone followed by make and it just works. No complicated toolchains to install, no endless build times. Wonderful!
I (usually) have this experience with repos that have a flake.nix file and a direnv trigger. But, that requires Nix (and optionally, direnv) to be available. In theory, it would work for every possible software project, though. And forever, which is the key… The fact that this one “happened” to compile, today, on your machine, doesn’t mean it wouldn’t break in the future, or on some other machine or OS, once the environmental assumptions are no longer met.
Is it really too much to be happy that a tool just compiles and a user is happy about that? Yes the code is not written for the year 2068 or 2349 or whatever and I don’t care. I saw a tool here did a git clone && make and it worked and that made me happy. I don’t care about the magical nix OS or direnv (great tool, but besides the point). I don’t care about this working on a Unix that has been abandoned 15 years ago or some hardware platform that was fresh before the Berlin Wall fell. It is irrelevant to me. It worked for me. Had it not worked then I would have not called the POSIX police but probably shrugged and moved on with my life.
I don’t see how what I said is mutually-exclusive with what you said.
I definitely appreciate when something “just works;” I’m also acutely aware of how lucky/“brittle” such a situation is.
See, I’m 51 years old and have tried to resurrect or preserve old software projects (being kind of a data and old-experience hoarder… I also have a 2 year old son and want to show him some of my early computing experiences “live”) and have encountered much failure and consternation because it turns out that old builds (and old software in general) makes MANY assumptions about their environment which simply slowly become invalid over time (thus breaking things)… which is when something like Nix proves its value (at least in theory), because its specific design intent is to encapsulate ALL environmental variants, which it terms “inputs”.
I suggest trying the effort of old software preservation/resurrection, and then coming back here, and you might then understand my perspective a little better. In the meantime, I assure you I agree with you!
It’s very nonstandard make (looks like probably gmake) which is honestly sometimes understandable, but a lot of it seems to be in service of all kinds of custom flags set by way of custom env vars rather than standard CFLAGS etc
It is a GNU-only makefile, but it does try to care about all the standard variables like CC, CFLAGS, etc. If you set those in your environment or with make CFLAGS=-Os they’ll get picked up.
Other than that it doesn’t really listen to env vars at all. Most of the code in the makefile is to
Handle platform-specific defaults like -lacl -lattr -lcap on Linux, but not other OSes
Handle convenience targets like make release, make ubsan, etc., so I’m not stuck typing make CFLAGS="-fsanitize=memory -fsanitize-memory-track-origins -fno-sanitize-recover=all" all the time
There are some marked disadvantages to this, which mostly boil down to trying to treat Make as a build system instead of a command runner (it is not, though you can try to fake it a bit as long as you limit yourself to GNU make specifically).
In particular, it doesn’t handle platform specific defaults at all well, or even correctly – you need a lot more than just passing -lacl depending on the platform (like checking whether these platform deps are installed. It seems reasonable to assume they are on Linux, because they usually are part of the base operating system, but they aren’t always e.g. on systems where coreutils isn’t the default). This is the main problem because currently there’s a lot of handholding builders need to do here if they deviate from those expectations.
Assumption that uname output is byte-for-byte reliable…
Odd, but not critical: setting custom CFLAGS gets rid of project-specific warning flags.
I noticed yesterday the use of Makefile instead of GNUmakefile – that’s fixed now, but it still errors out by default on BSD. :)
…
I do find it a bit fascinating that people reimplement ./configure but non-portably in pure GNUMake with a number of inflexible assumptions so often. This is what I, personally, would call the precise opposite of “just works”. I could offer some suggestions that don’t involve slow things like ./configure, though…
Snark aside, why is this a problem? If the author of the software can support everything they want to with the tools they use then that’s great. Can we move past this strange idea that using the most rudimentary and hard to use tools is the only true way to do anything? Nobody cares about Ultrix or Solaris or HP/UX or whatever anymore. The world has moved on.
100% agree. Yes, this is such a breeze. I somehow dislike the trend that one needs fat and complex toolchains like meson, ant, cmake to compile just 3 files of code.
These “modern version of old unix tool” projects are nowadays all written in Rust which is painful to compile. (I don’t care about Rust the language, but as an end user I don’t want yet another heavyweight toolchain to compile some small cli tool)
Not parent but I almost never download pre-compiled executables. If I can’t easily build it then I probably won’t be using it. And better find out sooner than later.
VMS and MVS are definitely not going to work with your Makefile; they’re not Unices (well, MVS can pretend). AIX, Solaris, etc. are going to cause heartburn as they do things you don’t expect, or differently from other Unices. Turns out ./configure has good reason for existing - special-casing #ifdef _AIX for weird quirks or missing headers gets unsustainable, fast. C23’s preprocessor header checks might make this better though.
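For anyone who hasn't seen it, the C23 feature being referred to is the standardized __has_include check (long available as a preprocessor extension in GCC and Clang). A minimal, hypothetical sketch of replacing a per-OS #ifdef with a direct header probe (<sys/acl.h> is just an arbitrary example header, not a claim about any particular project's needs):

#include <stdio.h>

/* Probe for the header itself instead of hard-coding per-OS assumptions. */
#if defined(__has_include)
#  if __has_include(<sys/acl.h>)
#    define HAVE_SYS_ACL_H 1
#  endif
#endif

int main(void) {
#ifdef HAVE_SYS_ACL_H
    puts("found <sys/acl.h>, would build with ACL support");
#else
    puts("no <sys/acl.h>, would build without ACL support");
#endif
    return 0;
}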
then those OSs are not supported? Not everything in the world needs to be supported by all tools. If that is not the goal of the author why bother?
I don’t get this endless “oh but my obscure OS is not supported therefore your project is bad” talk. If you must use these OSes then you may not get all the shiny tools. Tough luck.
The point isn’t “you don’t support $OBSCURE_OS, so you’re a bad programmer”. The point is that it is the ubiquity of C compilers, and not the skill or capability of the author, that accounts for being able to download the source and compile it without additional steps. An unskilled and incapable programmer could also produce C code that can be downloaded and compiled anywhere that cc and make are installed.
The author may certainly be skilled and capable, but this would be because of the code they wrote, not because of their choice of language and toolchain.
That’s only to set some defaults. If you’re cross-compiling you can override them, e.g.
$ make CC=aarch64-unknown-freebsd13.2-clang OS=FreeBSD ARCH=aarch64
(I should probably add that to the docs.)
Hypothetically I could grab that info from $(CC) -dumpmachine so you’d only have to set CC, but those target tuples are way harder to parse than uname. And gcc -dumpmachine doesn’t listen to -m32.
Yes, I’ve read the readme file after the compilation attempt. On the beginning I had thought to argument that a “full build toolchain” could pull the dependencies automatically, but then I’ve lost motivation to do any argumentation, and I’ve removed my comment. But you’ve replied earlier than I deleted it, so thanks I guess.
I find the wording quite misleading. There is no infection. They just use a standard feature that people used for a decade to restrict keys and run commands on remote hosts. Who would have guessed that you can do malicious things if you can run arbitrary commands…
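For readers who haven't used the feature being described: an authorized_keys entry can pin a key to one command and strip everything else. A hypothetical example line (the script path and key material are made up):

command="/usr/local/bin/run-backup",no-pty,no-port-forwarding,no-agent-forwarding,no-X11-forwarding ssh-ed25519 AAAA... backup-key

The post's point, as described below, is that the same mechanism admins use for restriction can also be used after an intrusion to quietly run something on every login with that key.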
yeah, it is not ideal wording. the benefit from an attacker’s perspective is that it’s a subtle way to get persistence after a successful intrusion (without needing a rootkit). I do think the post makes that fairly clear but the headline isn’t great.
This is the Cambridge-dictionary-level definition of what is going on, and the de facto way ‘infection’ has been used in infosec since at least the days when ‘viruses’ were being talked about (so, the mid eighties), and broadly at that: (host, file, binary, registry, …) infected by (…). It is also the same terminology EDR tools use to this day.
Piggybacking on standard features to hide in plain sight is very much a desired trait and a kind of misuse to definitely consider when introducing, as you put it, a ‘standard feature’. I did not know about this property of OSSH keyfiles and consider it an anti-feature big enough that I will absolutely patch it out on the few machines I still run OSSH on.
This one is great and definitely goes into both my red and blue teaming arsenals – even more so now. High-entropy blocks of data are suspect in ‘text files’ but absolutely expected in key files. Techniques published by the likes of THC, Phrack etc. very quickly become common practice.
Why would you call it poorly worded? It seems like a fairly level-headed assessment of OpenBSD’s security features. There’s praise and disapproval given based on the merits of each, comparing to other platforms as well.
If your takeaway from reading that website is a fairly level-headed assessment of anything then I’m not sure what to tell you. It’s my personal opinion that it’s anything but that.
The person who’s maintaining the website is one of the persons who’s doing the talk but not walking the walk, i.e. a blabbermouth.
Qualys on the other hand is actively trying to exploit the latest OpenSSH vulnerability and found some valid shortcomings in OpenBSD’s malloc. otto@ who wrote otto-malloc, acknowledged them and is already working on an improved version.
Programmers have a long and rich history with C, and that history has taught us many lessons. The chief lesson from that history must surely be that human beings, demonstrably, cannot write C code which is reliably safe over time. So I hope nobody says C is simple! It’s akin to assembly, appropriate as a compilation target, not as an implementation language except in extreme circumstances.
Which human beings?
Did history also teach us that operating a scalpel on human flesh cannot be done reliably safe over time?
Perhaps the lesson is that the barrier of entry for an engineering job was way higher 40 years ago. If you admitted surgeons to a hospital after a “become a gut-slicer in four weeks” program, I don’t think I need to detail what the result would be.
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel. We might have more appropriate tools for some of its typical applications, but C is still a proven, useful tool.
Those who think their security burdens will be solved by a gimmick such as changing programming language are in for a very unpleasant surprise.
Perhaps the lesson is that the barrier of entry for an engineering job was way higher 40 years ago
Given the number of memory safety bugs that have been found in 40-year-old code, I doubt it. The late ‘90s and early 2000s exposed a load of these bugs because this C code written by skilled engineers was exposed to a network full of malicious individuals for the first time. In the CHERI project, we’ve found memory safety bugs in code going back to the original UNIX releases. The idea that there was some mythical time in the past when programmers were real men who never introduced security bugs is just plain wrong. It’s also a weird attitude: a good workman doesn’t blame his tools, because a good workman chooses good tools. Given a choice between a tool that can be easily operated to produce good results and one that, if used incredibly carefully, might achieve the same results, it’s not a sign of a good engineer to choose the latter.
Given the number of memory safety bugs that have been found in 40-year-old code, I doubt it.
Back then, C programmers didn’t know about memory safety bugs and the kinds of vulnerabilities we have been dealing with for the last two decades. Similarly, JavaScript and HTML are surely two languages which are somewhat easier to write than C and don’t suffer from the same class of vulnerabilities. However, 20 years ago people wrote code in these two languages that suffers from XSS and other web-based vulns. Heck, XSS and SQLi are still a thing nowadays.
What I like about C is that it forces the programmer to understand the OS below. Writing C without knowing about memory management, file descriptors and processes is doomed to fail. And this is what I miss today, and maybe what @pm hinted at in their comment. I conduct job interviews with people who consider themselves senior and they only know the language and have little knowledge about the environment they’re working in.
Yes, and what we have now is a vast trove of projects written by very smart programmers, who do know the OS (and frequently work on it), and do know how CPUs work, and do know about memory safety problems, and yet still cannot avoid writing code that has bugs in it, and those bugs are subsequently exploitable.
Knowing how the hardware, OS (kernel and userspace), and programming language work is critical for safety or you will immediately screw up, rather than it being an eventual error.
People fail to understand that the prevalence of C/C++ and other memory-unsafe languages has a massive performance cost: ASLR, stack and heap canaries, etc., and then in hardware PAC, CFI, MTE, etc., all carry huge performance costs on modern hardware, and are all necessary solely because the platform has to mitigate the terrible safety of the code being run. That’s now all sunk cost, of course: even if you magically shifted all code today to something memory safe, the ASLR and canary costs would still be there; if you were super confident, your OS could turn ASLR off and you could compile canary-free, but the underlying hardware is permanently stuck with those costs.
Forcing the programmer to understand the OS below could (and can) happen in languages other than C. The main reason it doesn’t happen is that OS APIs, while powerful, are also sharp objects that are easy to get wrong (I’ve fixed bugs in Janet at the OS/API level, so I have a little experience there), so many higher-level languages end up with wrappers that help encode assumptions that must not be violated.
But, a lot of those low level functions are simply the bottom layer for userland code, rather than being The Best Possible Solution as such.
Not to say that low level APIs are necessarily bad, but given the stability requirements, they accumulate cruft.
The programmer and project that I have sometimes used as a point of comparison is more recent. I’m now about the same age that Richard Hipp was when he was doing his early work on SQLite. I admire him for writing SQLite from scratch in very portable C; the “from scratch” part enabled him to make it public domain, thus eliminating all (or at least most) legal barriers to adoption. And as I mentioned, it’s very portable, certainly more portable than Rust at this point (my current main open-source project is in Rust), though I suppose C++ comes pretty close.
Do you have any data on memory safety bugs in SQLite? I especially wonder how prone it was to memory safety bugs before TH3 was developed.
Did history also teach us that operating a scalpel on human flesh cannot be done reliably safe over time?
I think it did. It’s just that the alternative (not doing it) is generally much much worse.
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel.
There is no alternative to the scalpel (well, except there is in many circumstances and we do use them). But there can be alternatives to C. And I say that as someone who chose to write a new cryptographic library 5 years ago in C, because that was the only way I could achieve the portability I wanted.
C does have quite a few problems, many of which could be solved with a pre-processor similar to CFront. The grammar isn’t truly context free, and the syntax has a number of quirks we have since learned to steer clear of. switch falls through by default. Macros are textual instead of acting at the AST level. Everything is mutable by default. It is all too easy to read uninitialised memory. Cleanup could use some more automation, either with defer or destructors. Not sure about generics, but we need easy-to-use ones. There is enough undefined behaviour that we have to treat compilers like sentient adversaries now.
When used very carefully, with a stellar test suite and sanitisers all over the place, C is good enough for many things. It’s also the best I have in some circumstances. But it’s far from the end game even in its own turf. We can do better.
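To make a couple of those quirks concrete, here is a small, self-contained sketch (hypothetical, not from any real codebase); the fallthrough and the uninitialized read both compile without complaint under many default compiler invocations:

#include <stdio.h>

/* Hypothetical example illustrating implicit fallthrough and an
   uninitialized read; build with a plain "cc example.c". */
static int label(int kind) {
    int code;                 /* never assigned outside case 0 and case 1 */
    switch (kind) {
    case 0:
        code = 100;           /* intended for kind 0 only... */
                              /* ...but falls through into case 1 */
    case 1:
        code = 200;
        break;
    default:
        break;                /* leaves code uninitialized */
    }
    return code;              /* undefined behaviour for, say, kind == 2 */
}

int main(void) {
    /* Prints "200 200": the kind-0 result is silently overwritten by the fallthrough. */
    printf("%d %d\n", label(0), label(1));
    return 0;
}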
And I say that as someone who chose to write a new cryptographic library 5 years ago in C, because that was the only way I could achieve the portability I wanted.
I was wondering why the repo owner seemed so familiar!
Those who think their security burdens will be solved by a gimmick such as changing programming language are in for a very unpleasant surprise.
I don’t think that moving from a language that e.g. permits arbitrary pointer arithmetic, or memory copy operations without bounds checking, to a language that disallows these things by construction, can be reasonably characterized as a gimmick.
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel.
This isn’t a great analogy, but let’s roll with it. I think it’s uncontroversial to say that neither C nor scalpels can be used at a macro scale without significant (and avoidable) negative outcomes. I don’t know if that means there is something wrong with them, but I do know that it means nobody should be reaching for them as a general or default way to solve a given problem. Relatively few problems of the human body demand a scalpel; relatively few problems in computation demand C.
What we would consider “modern” surgery had a low success rate, and a high straight up fatality rate.
If we are super generous, let’s say C is a scalpel. In that case we can look at the past and see that a great many deaths were caused by people using a scalpel, long after it was established that there is a significant difference in morbidity between a scalpel and a sterilized scalpel.
What we have currently is a world where we have C (and similar), which works significantly better than all the tools that preceded it, but is also very clearly less safe than any modern safe language.
Oof… what is the word I’m looking for here?..
Or… You could not. If you focus on EFI you could use an EFI stub to boot your kernel. Or boot a UKI directly. None of it needs systemd.
systemd-boot is only administratively part of systemd really (and it’s pretty thin!), you can use it with other init systems
Or… You could skip it.
sure! if you don’t need to choose an entry
Can you use systemd on alpine linux?
Not really. PostmarketOS has put in the work to get it functional, but AFAIK it’s not being upstreamed because Alpine is not interested.
See https://wiki.postmarketos.org/wiki/Systemd and relevant entries in the blog.
Embrace, extend…
This is dumb.
I don’t see any competitors to systemd that can run GNOME on the market.
https://wiki.gentoo.org/wiki/GNOME/GNOME_without_systemd/Gentoo
The main desktop environment for Chimera Linux is GNOME and it uses dinit as the init and service manager.
GNOME is available for all the BSDs and they have their own rc based init system.
Well…. Grub reimplements several filesystems (btrfs, xfs, ext* etcetcetc), image formats, encryption schemes and misc utilities.
A good example is how luks2 support in Grub doesn’t support things like different argon implementations.
Booting directly into an UKI/EFI binary is not particularly user-friendly for most users, which is the missing context of the quote. The statement isn’t about needing systemd, it’s about offering a replacement to grub.
I’d say it’s the opposite. An average user doesn’t need to pick a kernel to boot. They use the kernel provided by their distribution and don’t want to see any of the complexities before they can get to the browser/email/office suite/games. It’s us—nerds—who want to build our own kernels and pick one on every boot. A supermajority of users don’t need multiple kernels. They don’t know what a kernel is, don’t need to know, and don’t want to know. They don’t want to know the difference between a bootloader and a kernel. They don’t care to know what a UKI even is. They’re happy if it works and doesn’t bother them with cryptic text on screen changing too fast to read even a word of it. From this perspective—the average user’s point of view—a UKI/EFI stub is pretty good as long as it works.
If you want to provide reliable systems for users you need to provide rollback support for updates. For this to work you do need an intermediate loader. This is further complicated by Secure Boot, which needs to be supported on modern systems.
sd-boot works just fine by skipping the menu entirely unless prompted, ensuring that the common use-case does not need to interact with, or even see, any menu at boot.
Isn’t this built in to UEFI? EFI provides a boot order and a separate Next Boot entry. On upgrade the system can set Next Boot to the new UKI, and on successful boot into it the system can permanently add it to the front of the boot order list.
I’m not certain what you’re referring to here. If you say that this makes new kernel installation a bit more complicated than plopping a UKI built on CI into EFI partition than sure, but this is a distro issue, not UEFI/UKI issue. Either way it should be transparent for users, especially for the ones who don’t care about any of this.
UEFI is not terribly reliable and this is all bad UX. You really want a bootloader in between here.
This allows you to do a bit more fine-grained fallback bootloading from the Boot Loader Specification, which is implemented by sd-boot and partially implemented by grub.
See: https://uapi-group.org/specifications/specs/boot_loader_specification/#boot-counting
UKIs are not going to be directly signed for Secure Boot; this is primarily handled by shim. This means that we are forced through a loop where shim is always booted first, which then boots your-entry.efi.
However, this makes the entire boot mechanism of UEFI moot as you are always going to have the same entry on all boots.
I used to boot my machines with just the kernel as EFI stub and encountered so many issues with implementations that I switched to sd-boot.[^1]
Things such as:
I’m surprised any of them did anything more than boot “Bootmgfw.efi” since that seems to be all they’re tested with.
Fortunately, nowadays you can store the arguments in UKI (and Discoverable Partitions Specification eliminates many of those). The rest was still terrible UX if I got anywhere off the rails.
[^1]: Even on SBCs, Das U-Boot implements enough of EFI to use sd-boot, which is a far better experience than the standard way of booting and upgrading kernels on most SBCs.
I think you’re missing the number of users for whom the alternative is Windows, not a different kernel.
Yes, they could rely on their firmware to provide a menu for that, but they’re also pretty universally bad, and often much slower.
Personally I love grub and don’t ever see that changing har har
I love fish…except that it doesn’t have ctrl-O (execute line then select the next command from history) and I don’t know what the workaround is. I switched anyway, but every time I have to do something repeatedly that takes more than one command, I feel like I must be missing something. (Am I?) I have ctrl-R, blah, ctrl-O ctrl-O ctrl-O in my fingers from years of bash/zsh.
Wow, TIL. Thank you.
Using Unix for > 25 years now. Never heard of C-O before. Thanks!!
Execute which line? The current input? Then once the executed command terminates, what do you mean by select? Putting it in the prompt input?
Why is this practical? Why do you want to run a command and then the one you have executed before?
Either way, it sounds like something you can implement in 3 to 5 lines of code.
I’ve never seen this before. Is this like !! ?
In bash/zsh, after you find a command in history with c-R, you can go up and down in history with c-P and c-N, and you can execute a command and immediately select the following command with c-O. So if you have a sequence of commands to repeat, you c-R back to the first one, then repeatedly do c-O to execute the sequence. (This is assuming emacs keybindings.)
Fuzzy history search in fish doesn’t seem to be related to the chronological history list. It just pulls out a single command with no context.
So I’m hoping that now c-R is incremental in fish that it can do the bash/zsh thing, but I haven’t looked at the beta yet.
…I really need to read more documentation. What a time saver I never heard of.
cross posting to say: I am one of today’s lucky 10000..
Ooooooh. Neat, yeah I can see the appeal.
I’m glad I’m not alone in this! It’s extremely rare to see someone mentioning this feature.
In zsh it’s also possible to select multiple completion candidates with Ctrl+o. I miss it even more than executing multiple history lines. There is an open issue about this, but it’s pretty much dead. https://github.com/fish-shell/fish-shell/issues/1898
OpenBSD isn’t even really supposed to be a desktop OS. I’d say it’s more like router firmware. I’m always shocked when someone actually implies they do or have been using it as a desktop OS.
And yes, I know there’s going to be someone who insists they also use it. I’ve also seen people try to use Windows XP x64 Edition well into 2014. Trust me, I have seen no shortage of questionable life choices.
The author of this was previously on the OpenBSD development team. OpenBSD devs tend to dogfood their own OS, so of course she would have used it as a desktop.
This isn’t really true. A few porters do huge amounts of work to keep (among other things) KDE and Chromium and Firefox available for OpenBSD users, and not insignificant work goes into making the base system work (more or less) on a decent variety of laptops. It’s less compatible than Linux but for a project with orders of magnitude less in resources than Linux it does pretty good. But I guess we’ve finally reached the Year of the Linux Desktop if we’re now being shocked that someone would have a BSD desktop.
Using a BSD isn’t weird. OpenBSD specifically is a curious choice, though.
Use it if you like it, don’t if you don’t.
I love curious choices though!
I have been using OpenBSD as a desktop OS for the last 10 years.
Good that you tell me that it’s not supposed to be used as Desktop OS. Otherwise, I wouldn’t have noticed!
You jest, but the blog post legitimately contains a massive list of things the author found very useful in Linux that aren’t in OpenBSD.
almost as if different users have different needs
I would say that the vast majority of OpenBSD developers are using it as their primary OS on a desktop or laptop. I am shocked (well not really anymore, but saddened) that developers of other large mature operating systems don’t use it as their primary OS. If you’re not using it every day, how do you find the pain points and make sure it works well for others?
We have reasonably up-to-date packages of the entire Gnome, KDE, Xfce, Mate, and probably other smaller desktop environments. We have very up-to-date packages of Chrome and Firefox that we’ve hardened. The portable parts of Wayland have made (or are making) their way into the ports tree. None of this would be available if there weren’t a bunch of people using it on their desktop.
XXX isn’t really supposed to be YYY.
For your usage, my usage, a supposed general usage or one of my cat’s usage?
Be thankful that enough people made the “questionable life choice” to run Linux as a desktop OS in the 90s.
Why? It comes with an X server, an incredible array of software, both GUI and terminal based applications that I can install. For my needs OpenBSD is a very capable desktop, and more responsive and flexible than the Windows desktop that work gives me.
!!!
For comparison, ext3, with journaling, was merged into Linux mainline in 2001.
I developed embedded devices whose bootloader read and wrote ext4 files, including symlinks. There is really no excuse not to have a journaling file system on a system that’s larger than a fingernail.
For OpenBSD it may be as similar reason as why they got rid of Bluetooth. Nobody was maintaining that code. It got old, stale. So they got rid of it. I generally actually like this approach, sadly you lose functionality, but it keeps the entire codebase “clean” and maintained.
My guess is that they need people to work on the FS issue. Or, rather, they don’t have anyone on board who cares enough about it to actually write the code in a way that is conducive to OpenBSD’s “style”. Could be wrong, but that’s my assumption.
For comparison, FreeBSD merged soft updates (a variant of journaling) in the 1990’s, and in 2008 announced ZFS support.
OpenBSD had soft updates, but they recently pulled it.
The pragmatic thing to do would be to grab WAPBL from NetBSD since NetBSD and OpenBSD are still relative kin. Kirk McKusick still maintains UFS on FreeBSD so the SU+J works well there but it would be a lot of work to pull up OpenBSD’s UFS.
IIRC WAPBL still has some issues and is not enabled by default. Its primary purpose is not to make the filesystem more robust but rather to offer faster performance.
I’ve never experimented with SU+J, but I’d like to hear more feedback on it (:
I’m not sure how deep WABPL goes but a journal helps you to close some corruption events on an otherwise non-atomic FS by being more strict with the sync and flush events and without killing performance. You can also journal data which is an advantage of taking this approach, although I don’t know that WAPBL offers this currently. Empirically NetBSD UFS+WAPBL seems fairly reliable in my use.
SU+J orders operations in an ingenious way to avoid the same issues and make metadata atomic at the expense of code complexity, the J is just for pending unlinks which otherwise have to be garbage collected by fsck. A large, well known video streamer uses SU+J so it is well supported. Empirically SU+J is a little slower than other journaling filesystems but this might be as much implementation and not algorithm.
Thanks for the feedback. I’ve been wanting to check out SU+J for a while, you got me hyped to dig into the concepts and the code!
Re WAPBL: an interesting thread on netbsd-tech-kern
Nice, I had not seen that thread. I think the data journaling they are discussing would be important for OpenBSD as for instance on FreeBSD UFS is used in specific scenarios like embedded devices or fail-in-place content servers and ZFS is used anywhere data integrity is paramount. WAPBL was created in response to the complexity of SU+J as it seems OpenBSD was also impacted by. For that and easier code sharing I would be inclined to go the WAPBL direction, but there may be other merits to the SU+J direction in terms of syncing FFS and UFS against FreeBSD.
OpenBSD FFS has had soft updates for a very long time too.
Soft updates have been removed in Feb 2024: https://marc.info/?l=openbsd-cvs&m=171489385310956&w=2
This is really surprising. I thought this was one of those instances of OP “handling things wrong”, but actually it doesn’t seem OpenBSD natively supports anything else than FFS and FFS2 (and without soft updates, like above).
Anyone else wish we’d stop promoting C projects?
Why would we do that? You are always free to hide articles tagged with C.
I certainly don’t.
And why is that? Everyone is free to code in whatever they want.
I am a happy C coder until this day and posts like this shows that C is far from being dead.
C sucks, but until I have a working Rust compiler and the time to rewrite a few tens of thousands LoC in Rust, I’m still going to be interested in what’s going on in the C ecosystem.
There are some things that need to be written in a low-level language and so C or C++ (or assembly) remains the best choice. They implement core parts of the abstract machines of higher-level languages and so are intrinsically unsafe.
There are some things that were written in C a long time ago and so would be expensive to rewrite and, because they’ve had a lot of in-production testing, a rewrite would probably introduce a lot of bugs and need more testing.
And then there are new projects, doing purely application programming tasks, written in C. And these are the ones where I wish people would just stop. Even on resource-constrained systems, C++ is as fast as C and lets you build safe abstractions. For larger targets, there are a lot of better choices (often Rust is not one of them: don’t do application programming in a systems language).
No, not everything needs to be Rust. There is plenty of fun you can have with C.
The fun you can have with C is the kind that leaves you wanting to take some strong antibiotics when you wake up because you don’t know what you caught.
The fun you can have with Rust is the kind that leaves you wanting to take painkillers because you had to listen to a fanboy ramble about memory and thread safety all night.
Everything was better in the old days. We should Make Programming Great Again. Where are the COBOL people when you need them?
I enjoy programming. It is still great in my opinion. Lot of good programming literature, languages and libraries are free and open source. What not to love.
😀
To clarify, I was being sarcastic! I definitely agree — we’ve come a long way in terms of language design.
Not nearly as much as the zero content “C iz bad” comments whenever anyone mentions C.
Eh, I think they’re cute. It’s a shame when C codebases are deployed in any serious context, but I’m always up for curiosity languages/libraries, especially for preserving older computer architectures. We gotta keep this stuff alive to remember where we came from.
That is certainly one of the takes of all time.
Hey, don’t shoot the messenger, I’m just echoing what a bunch of world governments and international corporations are saying.
Yeah yeah we’ve all seen the white house report. I can’t remember exactly how their guidance was worded, but I would assume it’s more along the lines of “whenever possible, in contexts where cybersecurity is a concern, prefer memory-safe languages such as Rust or Java over non-memory-safe languages such as C or C++” which is actually very reasonable and difficult to argue against.
I think there are contexts which qualify as “serious”, but where cybersecurity isn’t a concern. Also, IIRC there are some (quite esoteric, to be fair) soundness holes in Rust’s borrow checker (I think there’s a GitHub issue for one that’s been open for like 5 years and they haven’t decided on the best way to resolve it yet). Furthermore, rustc hasn’t been formally verified, and then of course there’s always unsafe code where memory safety bugs are still possible. So I think through use of sanitizer instrumentation, testing, fuzzing, etc. you can get a C project to the point where the likelihood of it having a memory safety bug is on par with what Rust can give you. Take SQLite for example—personally I would have as much faith in SQLite’s safety as I would in any other database implementation, regardless of language.
I didn’t really intend to make this some kind of language debate, but if you insist:
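To make “sanitizer instrumentation, testing, fuzzing” a bit more concrete, here is a hypothetical libFuzzer-style harness around a stand-in parser (the parser itself is made up; a real project would point the harness at its own entry point):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for the code under test. */
static int parse_record(const uint8_t *buf, size_t len) {
    if (len < 4)
        return -1;
    return memcmp(buf, "REC0", 4) == 0 ? 0 : -1;   /* expect a 4-byte magic */
}

/* Build (one possible invocation): clang -g -fsanitize=fuzzer,address harness.c
   libFuzzer supplies main() and feeds this function mutated inputs; ASan flags
   any memory error the moment it happens. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_record(data, size);
    return 0;
}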
So you mean… just use Rust. Because Rust is essentially C with all that stuff baked in. That’s kind of the point of it. It’s C with dedicated syntax for verification tools (the ones it’s built with) to verify it, along with some OCaml control flow structures to make code more streamlined. You really can just kind of think of Rust as being C++ 2 (Not C++++ because that’s C# (that’s the pun of the name, if you never knew)).
And I’m not sure why people are still doing this “well the compiler might have a bug or two and you can manually mark things as unsafe so that means its as insecure as C” reach, it really doesn’t make any sense and nobody is accepting that as a valid argument.
I really appreciate this. One important point is missing for me, though: how will Ladybird look from the security perspective?
The web these days is just outright dangerous and browsers are used for nearly everything. ~awesomekling can you point to a design sheet or some thoughts on how the browser will be secured?
That’s some great timing, with this bug dropping literally the day after CentOS lost support.
It was also the day after FreeBSD 13.2 went EOL, but the FreeBSD Security Team decided to provide the back-port anyway (upgrading to 13.3 should be painless, since it has binary backwards compatibility, but people that don’t want to move can still get this update).
Redhat 7 is not affected, so CentOS 7 is most likely not affected either: https://access.redhat.com/security/cve/cve-2024-6387
Self-hosted VPS from IONOS (ionos.com) in a UK datacenter. Apparently, the UK IP ranges have a good reputation and I never had any issues (compared to IP ranges from Hetzner, which are a near-sure guarantee to end up in spam).
I find it interesting that OpenBSD’s people believe in NOT letting services restart, assuming that a service will only crash if it’s under attack, and a stopped service can’t be exploited.
I wish the daemons I ran were so reliable that failures only happened when under attack.
Restarting a few times is often fine. There are basically three kinds of attacks:
The last category is interesting and is the basis for things like ASLR: you learn something on the way to the crash, but you can’t then use it after the restart. At least in theory, often there are holes in these things.
Things like the Blind ROP attack need a few thousand restarts on average to succeed. With a 10ms process creation time, that’s a few seconds. The stagefright vulnerability was a problem because Android had only 8 bits of entropy and automatically restarted the service, so just using a constant value for your guess and repeatedly trying would succeed more than half the time with 128 guesses and that took well under a second.
Exponential backoff is often a good strategy for this kind of thing, but if availability is not a key concern then stopping and doing some postmortem analysis is safer.
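As a rough sketch of that idea (a hypothetical stand-alone supervisor, not how any particular init system implements it): restart on failure with a doubling delay, and give up after a few attempts so a crashing or attacked service isn't respawned forever:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char **argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
        return 1;
    }

    unsigned delay = 1;              /* seconds; doubled after every failure */
    const unsigned max_restarts = 5; /* then stop and leave the scene for postmortem */

    for (unsigned attempt = 1; attempt <= max_restarts; attempt++) {
        pid_t pid = fork();
        if (pid < 0)
            return 1;
        if (pid == 0) {
            execvp(argv[1], &argv[1]);
            _exit(127);              /* exec failed */
        }

        int status;
        waitpid(pid, &status, 0);
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            return 0;                /* clean exit, nothing more to do */

        fprintf(stderr, "child failed (attempt %u/%u), backing off %u s\n",
                attempt, max_restarts, delay);
        sleep(delay);
        delay *= 2;
    }

    fprintf(stderr, "too many failures, not restarting; investigate before bringing it back\n");
    return 1;
}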
It’s also worth noting that, even when not actively attacked, reconnecting clients will often do the same thing repeatedly. If a particular sequence of operations leads to a crash, restarting may still crash immediately when the client that triggered the bug reconnects and does the same thing again.
That’s a nice philosophy, I’ve never had a daemon in OpenBSD base crash on me, basically ever.
Do you have any quotes or discussion that outlines that design philosophy though? Curious to see it from the source.
https://marc.info/?l=openbsd-misc&m=150786234512529&w=2
Please read the linked email from Theo again. He writes:
No mention of attacks.
Bugs can and will happen. If you auto-restart services you might miss a bug because the system gives you the feeling everything is alright. I am a BIG fan of fail loud and fail hard. Then you can analyze the situation and fix it.
Moreover, Theo says that it is a bad default. If your use case calls for it, go ahead.
https://marc.info/?l=openbsd-misc&m=150786327012681&w=2
One of the reasons why it is a bad default is that it lets the attacker retry exploiting the bug in a short period of time.
https://marc.info/?l=openbsd-misc&m=150795572208356&w=2
Mention of attacks. Thank you for the further links from the rest of the thread.
OpenBSD developers also like to say that the only difference between a bug and a vulnerability is the intelligence of the attacker.
Unbelievable that there are @$%#@$# out there who invest time and money to bring down a code forge!
It would be very interesting to know who is behind this. These attacks cost money and to my understanding there is not much to gain from them.
I don’t believe that it’s a given that these attacks cost money.
Granted, my experience dates back to the mid-00’s (2003-2004) but the majority of DDoS’rs were skiddies with bot-farms.
In fact, I was part of a group of people who would disassemble malware submissions (since those used to be publicly downloadable) and take over botnets via their C&C (usually IRC, sadly). For me (and those people) running the botnet itself was free.. it’s all other peoples machines.
(I should note that I never did anything nefarious after taking over networks, usually we did this to piss off botmasters so we self-destructed the botnet most times)
Well, the world has changed; in today’s world you rent a botfarm for some time from a third party.
I think you’re looking at the supply side, rather than the demand side: I think that these days, the resources associated with botnets like these are worth lots of money, and most of the people who assemble them know this and charge handsomely for their use.
They can be used as a flex/demonstration of capabilities to sell to others.
I hope they file criminal charges.
Against who, precisely?
The internet; ergo Al Gore, probably
thank you for asking, that’s exactly the job of the investigating authorities.
Sure, but in practice internet crimes like this tend to go uninvestigated by law enforcement; in the absence of either serious real-world consequences or an incredibly rich and powerful victim it’s probably not on anyone’s radar.
yes, but we should start taking such things seriously and treating these crimes as such. If nobody demands an investigation, the authorities slack off, but they shouldn’t. Who should start caring, if not us?
In past incidents like this at other infrastructure companies it turned out that some specific project was hosting content that some political group found objectionable. So they just turned the DDOS firehose on the whole infrastructure.
A specific example: in 2015 the Chinese government attacked Github with their “Great Cannon”, apparently because they didn’t like a couple of firewall circumvention tools hosted there.
The 2016 Mirai attacks were meant to disrupt a few minecraft servers and took down the rest of the internet by accident.
Two code forges, even.
Quite right. Doesn’t make it better…
This should be the first argument of this post. Listing that at the end is like apologizing for the rest of the post, like “no no, don’t be afraid!”. Using Firefox is not hardcore activism, there’s no big tradeoff here. Actually most users should feel no difference after switching, except they won’t have their Google avatar constantly reminding them their personal data is being sucked up. Once it’s settled that Firefox is as good as its competitors, the privacy and free-software arguments would be even stronger IMO, because it would seem crazy not to give credit to the awesome piece of free software Firefox is.
Firefox address bar works worse than Chrome’s for instance – Chrome tries to autocomplete entire URLs that are used often, Firefox does that one path segment at a time, which is not what most people want, I assume. Password autofill on Android often doesn’t work. On the desktop, some UI elements are disproportionately large to how often they’re used. Chrome’s are more optimized. Firefox is a wonderful project which I’m thankful for it but lacks some polish.
Oh, that is actually a feature that might make me switch to Firefox! Safari has the same behaviour as Chrome here and it’s really annoying. If I want to go to github.com/org/project, I don’t want it to autocomplete the four GitHub pages I’ve visited most recently, I want it to autocomplete github.com/ and then let me type the character of the org, tab, first characters of the project, tab, and get to the right place. I end up either typing the full URL or bouncing via a search engine in Safari because of this.
Last time I checked, on macOS it also didn’t use the Keychain (Chrome now can, including for Passkeys). I trust the macOS keychain a lot more than I trust Firefox to be secure. It has some very fine-grained ACLs for which applications can access which passwords.
Amusingly, the famous Chrome comic specifically advertises not having this behavior!
Your trust (or distrust) is well placed. Firefox passwords are encrypted with a key derived from a master password, but if the user doesn’t explicitly set one, it’s the empty string. More to the point, it’s a known constant, so anyone with knowledge of the format can reconstruct the key and decrypt the credentials.
There are several open source tools that do this, and I managed to write one myself.
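To illustrate the general point (a generic sketch using OpenSSL's PBKDF2; this is not Firefox's actual NSS container format or its parameters): when the passphrase is a known constant such as the empty string, the salt sitting next to the data doesn't help, because anyone holding the files can simply re-run the derivation. Build with cc sketch.c -lcrypto (file name arbitrary):

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void) {
    /* Pretend this salt was read from the profile, right next to the ciphertext. */
    unsigned char salt[16] = {0};
    unsigned char key[32];
    const char *passphrase = "";   /* the "no master password" case: a known constant */

    /* Anyone with the file (and therefore the salt) can recompute this key. */
    if (PKCS5_PBKDF2_HMAC(passphrase, (int)strlen(passphrase),
                          salt, (int)sizeof salt,
                          10000, EVP_sha256(),
                          (int)sizeof key, key) != 1)
        return 1;

    for (size_t i = 0; i < sizeof key; i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}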
Oof. I wrote one too. I thought they were going to make the default randomly generated at profile creation time. Have they not done that yet?
(I do use Firefox, but only allow Bitwarden to save passwords, not any of the firefox built-in stuff, for my personal browsing these days.)
I have no idea; my focus was on recovering my own credentials from years ago.
But it wouldn’t matter. You already have a randomly generated salt. If you’re not asking the user to provide the master key, then you have to store it on disk in cleartext. Randomizing the key is just adding more salt.
I don’t recall the randomized salt being used for the key. I thought my program worked without using it, on any nss databases that weren’t password protected…
It’s been a while though. And the thrust of your original comment is obviously correct. I was just surprised because I thought I remembered that this was going to be changed in a way that would make it very modestly better.
The feature I still miss from Firefox since I stopped using it 15 years ago is being able to type “git proj” and have it show the best match in the completion list. I find the multi-step workflow you mention too slow.
Chrome never matched this fuzzy matching behaviour.
Firefox does support this, some settings may influence this. I just tried ‘git’ and the first URL suggested is the proper one.
Maybe some “special” settings I have done related to search, I have only enabled “Provide search suggestions” but have disabled all 3 options below in about:preferences#search and in about:preferences#privacy I have disabled “Search engines”. So when I type something into the URL bar I only get URLs from open tabs, history and bookmarks.
To also narrow down your search to e.g. bookmarks only, type a * first and then your search term. There is also % for open tabs or ^ for history. Settings including additional installed search engines are in about:preferences#search below at “Search Shortcuts”.
I spent a little while puzzling over this, then I guessed: “best match”?
What does “best” mean in this context? I don’t care much about this but several people are comparing Chrome autocomplete and Firefox autocomplete, and I never even noticed a difference in something like 28 years on Mozilla browsers. Can you explain what “the best match” means?
This was a huge deal when it was launched. Firefox awesome bar, they used to call it.
I didn’t explain well… But, you could type words separated by spaces and it would show results that matched those words on both the URL and the title of previously visited pages. I have just checked, they removed matching in the title and don’t even show it anymore…. They are pretty much a chrome copycat at this point.
Here is a video showing it. Notice that the title doesn’t show up anymore nowadays. Which is ironic, given Google is trying hard to push users into not being able to interact with URLs directly. https://www.youtube.com/watch?v=U7stmWKvk64
You can configure Firefox Password Autofill to use a third-party app (at least on mobile: I use my Nextcloud Password app, which is probably less secure than macOS keychain but is probably secure enough, and it works like a charm).
Right, I’ll eventually have to set up a VaultWarden instance. What’s stopping me is I haven’t fully figured out how to do LetsEncrypt TLS on my private server without a DNS domain. It is possible but requires some tinkering.
They have full autocomplete: browser.urlbar.autoFill.adaptiveHistory.enabled. Not sure why they’re holding back making it a default, or even adding it in the settings.
It was on by default on Nightly initially, but there were a lot of complaints (including mine) so it was disabled, I don’t know if it was ever re-enabled by default. It looks like there were several cycles of bug fixing.
Nice! I’ll try that out, thanks!
Ah yeah, there’s no about:config on Firefox Android.
Does Chrome still do the thing where URL bar autocompletion includes ads for companies? I remember a couple years ago back when I was using Chrome I typed “j” to go to a bookmark I had created that was literally named “j” and that I visit multiple times a day, but this high-relevance, exact match got ranked lower than “JC Penny”, which I have never once searched for or visited.
It wasn’t even an ad for a product, just a “hey, we thought you might like to search for this corporation?” Utterly baffling.
Was your default search engine Google? I’ve never experienced this behavior but I’ve had it as DDG for ages.
My default search engine is Google and I don’t remember seeing this in Chrome either.
This is something I have never thought about, so I don’t have a strong opinion on whether Firefox’s or Chrome’s behaviour here is better… but come on. That is so incredibly minor. Surely nobody is choosing Chrome over Firefox because the URL bar autocomplete functions ever so slightly differently on Firefox. It’s not like Firefox doesn’t suggest entire URLs, you just may need to hit the down arrow to select it slightly more frequently than with Chrome. And when you’re viewing an actual web page, the only two things taking up vertical space are the tab bar and the URL/search/plug-ins/navigation bar, just as with Chrome.
I know folks who have gone back to chrome because of slight differences in the omnibar behaviour. I think they liked that they could easily search a site from the omnibar or something?
I use both regularly, and never paid much attention. After A/B testing, I slightly prefer Firefox. It turns out this is a Firefox config setting.
There’s no about:config on Firefox Android unfortunately.
I have about:config on Android. I’ve got two Firefox based browsers installed on Android, both from the F-droid app store. They are Fennec and Mull, both have about:config.
Unfortunately, neither one has that specific config key.
There is. Just press 5 times on the Firefox logo in the About dialog. This enables debug options including about:config
In the stable version? Not according to my testing and https://connect.mozilla.org/t5/ideas/firefox-for-android-about-config/idi-p/8071/page/4#comments
I haven’t noticed the segment behavior. However, I am required to use chrome for work and its address bar works SO MUCH worse than firefox’s, especially in terms of autocompleting from browser history - chrome can never find things that I was browsing even a few history entries ago.
Does chrome’s do unit conversion and calculations? Behind a flag in Firefox for some reason but I use it all the time.
Google does that. Local is better for simple things ofc but e.g. for currency conversion doing this online makes sense.
Is this going to be a meme for ’24?
Really actually I think Firefox is not actually really worse than the competitor, but the phrase “actually really good” indicates that if I’m not already actually skeptical, I really actually should be before considering Firefox to be good.
I have tried to switch to Firefox many times over the years, but running on Ubuntu, the Firefox UI feels noticably clunkier and less polished than Chrome. Too much unused whitespace and padding everywhere (the URL box, for example, doesn’t stretch to fill the horizontal space). An inconsistent mix of different font sizes and line heights. It very much has that awkward GTK / Java Swing feel to it, and I find it distracting. I would love to see them address some of these design issues, as it’s the main reason I haven’t switched yet.
Is it really the most serious reason? If you align with the general article sentiment that “FF is the only remaining ethical web browser” and that you want to make-the-world-a-better-place©, then this seems like an easy enough personal drawback to live with. It’s not like you had to fight Nazi groups in the streets.
Have you not removed those “horizontal flexible spaces” on both sides of the URL box? Customize toolbars to drag & drop the things you want there and not. I always remove those spaces and add two icons for plugins I actively use (Multi-Account Containers and Undo Close Tab). Result: I can type a 3400 pixel long URL – 89% the width of my 4K screen.
The creator of SponsorBlock also made DeArrow, a community-driven Clickbait remover.
On stock Gnome Shell (not Ubuntu), I find this theme and tweaks to match the desktop environment much better than Firefox’s default theme.
The OpenSSH devs put this a little bit into perspective: https://www.openssh.com/releasenotes.html#9.6p1
I disagree with the comments about rewriting in Rust. Instead of creating a separate project / branch that is maintained separately for a while, they should be incrementally rewriting functions and structures in Rust. If I were looking at my project and 40-80% of CVEs were directly attributable to a tool or pattern I was using, I would hope that I have the insight to say “this tool sucks”.
Hasn’t this been the goal for decades? How do we actually make it harder to do wrong? Rust’s answer to this question sounds infinitely more appealing than C or C++ (including with all the linters).
I think you and the author are more in agreement than you think. He also says they are not going to rewrite it from scratch, and their “this is what happens” points 1. and 2. are exactly “incrementally rewriting functions and structures in Rust”:
I think it’d be better to say they could be, not “they should be.” As he notes in the post, the current developers aren’t experts on rust or the best folks to lead such development. Also:
“The rewrite-it-in-rust mantra is mostly repeated by rust fans and people who think this is an easy answer to fixing the share of security problems that are due to C mistakes. Typically, the kind who has no desire or plans to participate in said venture.”
If you aren’t volunteering to do the work, it’s best to hypothesize about what a project could do and not what it should do. If you are volunteering to do the work, then have at it and let us know how it goes following your proposal.
I think everyone agrees that memory safety is better than not, but the author highlights that (1) Rust isn’t supported on all targets and (2) they would likely need a new team of maintainers with the experience and interest to rewrite in Rust and also the dedication of the existing team. In particular, the article notes how few of the “rewrite it in Rust” proponents are themselves willing to step up to the plate.
I think a more realistic approach would be to write a new library in Rust, and give it a curl-compatible API.
I would say that’s a lot less realistic: curl is a huge piece of software with a large API footprint. And furthermore even less useful: curl is also a massive brand and widespread utility, and its users are unlikely to switch, especially indirect ones (e.g. users of curl because, for all intents and purposes, it is PHP’s HTTP client).
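To give a sense of the scale involved: even the smallest slice of the easy interface that a drop-in replacement would have to reproduce, option constants and error codes included, looks like this (ordinary libcurl usage, nothing project-specific):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    /* One tiny corner of the API surface a curl-compatible library would
       need to reproduce, down to the option constants and error codes. */
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *handle = curl_easy_init();
    if (!handle)
        return 1;

    curl_easy_setopt(handle, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1L);

    CURLcode res = curl_easy_perform(handle);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(handle);
    curl_global_cleanup();
    return res == CURLE_OK ? 0 : 1;
}
```

And that is just the easy interface; the multi interface, the share interface, and decades of option semantics sit behind it.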
Finally, Jose Maria Quintero’s effort on librsvg has conclusively demonstrated that you can progressively migrate a codebase in flight.
Do you mean Federico Mena Quintero? As far as I know, he led the migration effort on librsvg.
Good god, yes indeed I do, I can’t believe I mangled his name so.
I’ve gradually migrated a few codebases, and even wrote a C-to-Rust transpiler, and I don’t think it’s worth converting codebases this way.
This is because C has its own way of architecting programs, and its own design patterns, which are different from how a Rust-first program would be structured. When you have a C program you don’t even realize how much of it is “C-isms”, but after a gradual conversion you get a very underwhelming codebase with “code smells” from C, which is then merely the first step of a major refactoring into Rust-like software.
Rust’s benefits come from more than just the borrow checker. Rust has its own design patterns, many of which are unique to Rust, and often not as flexible as an arbitrary C program, so they’re tricky to retrofit into a C-shaped codebase. A good C program is not a good Rust program.
I wonder how much curl is actually used now. When I first used it, it handled a load of different URL schemes and protocols and reimplementing that was far too much effort. These days, I rarely see it used for anything other than HTTP. I’ve used libfetch instead of libcurl because it’s a tiny fraction of the size and, even then, does more than I need.
The one sentence everyone who thinks we should rewrite everything in Rust should take away is the following:
Starting a rewrite is easy. Maintaining a rewrite over nearly two decades, so that every car/IoT/whatever vendor includes it, is the hard part! It doesn’t matter which language the rewrite is in.
Let’s see. 15 years ago was 2008.
NodeJS was first released in 2009, Python 3.0 came out in December 2008, and Go initially came out in 2009, so an LTS version of an OS from 15 years ago wouldn’t have any of those. Fine, you wouldn’t need them. But a lot of software has started to require them. This might mean that, with a security fix in a new version of some software, you might not even be able to build that new version on those old systems, to compare behaviour in a backport, for example.
People already make fun of Debian for being outdated when it comes out, and its regular releases are about 2 years apart. The crazy fast pace of the software industry makes it madness to support anything for more than a few years.
We’ve all gone collectively insane, and the LTS terms just reflect that fact.
More importantly:
This isn’t just software doing new things gratuitously. New hardware has to conform to constraints inherited from physics and has required new software models at various layers in the stack. If you are happy with what an old computer could do, you don’t need to adapt to these changes, but if you want to be able to take advantage of new functionality then you need to change how the software works.
New attack techniques have required new and different defences. It’s been military doctrine for a hundred years (except, famously, for the French, who subsequently learned) that static defences don’t work. The same is true for computer security: even formal verification can only make you immune to known categories of vulnerability; attackers come up with new ones.
That said, it’s also interesting to see how much this progress has slowed down. 15 years before 2008 took you back to 1993. In that period:
The Multimedia PC standards defined minimum levels from that era. MPC2 was released in 1993 and required a 25 MHz 486, 4 MiB of RAM, and a display that could do 640x480 in 16 colours. The delta between that and MPC3 in 1996 involves more than doubling the specs of most components.
On the other hand, none of the deep learning stuff existed 15 years ago and that’s probably going to involve some interesting shifts in both hardware and software, as things like computer vision and natural-language interfaces become components of consumer systems.
[1] In terms of building planning as well. The William Gates Building in Cambridge was designed on the assumption that every computer scientist would have two CRT monitors on their desk, plus a desktop, so would be generating around 500+W of waste heat. The heating system in the building was specified with that assumption and needed some serious upgrading when the typical usage became a laptop plus an external TFT display.
Kilburn Building at the University of Manchester also has wacky architecture and heating because it was designed expecting to house very different kinds of computers and labs. To my knowledge they have never got the heating to work right.
I’m fairly sure 802.11g was prevalent in 2008 but I might be remembering my experiences wrong, and they might not reflect “most”
Edit to add: parallel Intel CPUs were available, if not common, back in the ’90s; before the Core 2 was the Core, and there was a quad model; before both of those we had Hyper-Threading. So thinking in terms of multiple threads was well established.
Awesome response, thanks for that! It’s amazing to see how developments in technology as a whole mostly slowed down as you point out (the improvements from 1993 to 2008 are a lot more impressive than from 2008 to 2023), while at the same time the churn of software seems to have sped up.
I’m not sure the churn has sped up. In the 1993-2008 period, we saw a whole bunch of software changes:
In 2008, I wrote a book about Cocoa programming and most of the APIs are the same. SwiftUI is new, but I can write an Objective-C app using it as a reference and it will still act and feel like a modern macOS app. Win32 is mostly the same (though that’s not a good thing - it’s still mostly as bad as it was 30 years ago). POSIX2008 introduced a bunch of new things (including xlocale) and anything written for POSIX2008 is likely to work well on Linux/*BSD/macOS. HTML5 was released in 2008 and is mostly the same - there are new JavaScript APIs, but an HTML5 thing from 2008 will work the same.
It’s easy to cherry pick examples to argue in either direction.
Perhaps I’m thinking more of breakage - in olden days it seems backwards compatibility was more important, whereas it seems nowadays you are expected to keep up and rewrite your application all the time (think frameworks). Although I also remember endless fiddling with DirectX because some games required an older version while others required a newer one, but that could’ve just been DLL hell.
I think that’s very culture dependent. For some of the open source projects that I’ve been involved in, it’s been really important, for example:
In contrast, for some it’s been totally unimportant:
The latter seems to be an increasingly prevalent mindset. I used to blame Google for this. They have an in-house monorepo and cloud-scale refactoring tools, so it’s easy for them to change an API and then refactor all users of it for all of their own code. This attitude leaks into external projects where they aren’t in a closed world but still act as if fixing every consumer of an API is trivial and ignore the pain that this causes everyone downstream. Chrome and Android are both notorious for introducing new APIs and then removing them a year later.
Most F/OSS desktop environments never quite managed to build good APIs to start with and so kept fiddling with them. I really wish the GNOME team had picked GNUstep instead of GTK. At the time, they were similar levels of maturity, but GNUstep had APIs that stood the test of time, GTK had ones that needed redesigning twice and are still not great. If GNUstep had had the same investment as GTK, open source DEs would be significantly better than proprietary ones by now.
API design is less valued now than it was, in part because updates are easier. When I first installed Linux, I had to buy a CD and have it shipped to me. Before that, UNIX was distributed on tapes and Visual Studio was sold on floppy disks. Propagating an API change to consumers was a multi-year endeavour and so came with a huge cost if you built APIs wrong to start with: you’d have to support them for at least two years to avoid breaking brand-new software. Now, you can just push a new revision to git and tell everyone to update their submodules and fix the compile errors.
This! Staying with OP’s analogy of a hammer: We’re so insane that we invent a new hammer every two weeks. In the end it can only be used for hammering, however, it is now available in a dozen different colors, it was redesigned about 10 times since the original one wasn’t good enough and the manufacturing process was changed at least 2 dozen times since the old one didn’t use the right tools[tm].
Now we have hundreds of different hammers, some small, some big, some half-broken, some still in development stage and do more or less the same thing. And of course, all of them are in use somewhere in the world. Good luck finding someone providing security updates for 15 years for all of them.
I guess this is about computers that control medical devices… where none of that is relevant.
But on the other hand those medical devices require certification that costs a huge amount of money… and if you make any major changes to the software afterwards, you need a new certification that will cost another huge amount of money…
And those medical devices are very expensive because of that, so hospitals will keep them in use for a long time… so they need to keep the computers that interface with them around for many years with no major changes in software…
I’m not a huge advocate of LTS (it has its uses) but I don’t agree with these arguments. First of all you’re actually counting the 15 years and not the more realistic target of replacing after 12-13y with a bit of safety margin. Then most of my servers are just like that, but I’m pretty sure if this distro existed I could run a moderately up to date version of dovecot and postfix on it. Or maybe even have Docker.
That’s the horrible thing. I can want a long-lasting LTS for some applications and I can condemn the users of that thing if I want to deploy software that was made in the last n (< LTS) years ;)
Do you think something written in Node 0.1 could still run in today’s version of Node? At least with Go, I have a bit more faith that its APIs are stable. And like I said in my post, initial versions of Python 3 had lots of issues that have been fixed over time.
And safety margin, don’t make me laugh - people are still running pre-oldstable Debian servers that have long gone past their supported status. In companies, there’s very often a “if it works, do not touch it” policy. Especially if the person who set the thing up has left the company.
If it’s just a build dependency then you could build on a newer system, though of course you’d need the built binary not to have runtime dependency on anything in the new system, which is a pain to do for some build systems/languages.
I like that some languages do their configuration and build scripts in their own language. If you have that, modern dependency management, and your dependencies are clean then you can easily make fairly reproducible and portable builds with no dependency on the host system. Erlang family languages, Rust and Julia can all do this.
The typical IT person can have a life-long career without ever even hearing about OpenBSD. You don’t need to know anything about it. Get out of your bubble some more!
A typical person in any occupation can have a life long career without ever knowing about a useful tool. It’s fine to say ‘OpenBSD does not solve any problems I have,’ but being ignorant of it means that you will never even evaluate it.
A lot of IT professionals had happy careers in the late ‘90s and early 2000s, costing their employers huge amounts of money in license fees and compliance, because they knew about Windows and didn’t feel the need to learn about Linux.
Many IT professionals continue to know a lot about Windows and nothing about Linux.
At work I notice a lot of clients trying to get rid of their Linux infrastructure. Usually the reasoning seems to be that they have people who can admin Windows or IBM i, but not Linux. It seems hiring a Linux person would be easier, but to them it looks like consolidation instead…
It kind of makes sense, doesn’t it? You probably have Windows desktops anyway, so you need staff to do the admin of those, why not make them also maintain the servers? And then of course, corporate software tends to work better with other software of the same supplier, so a Windows server (with RDP, Exchange and what have you) works better for the desktop users due to “integration”. Nevermind the vendor lock-in and exorbitant charges etc etc etc.
These people are paying IBM a lot for the systems that actually power the mission-critical workloads. MS or RH licensing is chump change compared to that.
Oh come on, if you’re excited about something it’s fine to generalize it to “everyone needs to know”. No need to take this literally.
The project develops tools that are used by the whole f*cking IT industry worldwide. Such as OpenSSH, LibreSSL or certain libc components that are included in every Android smartphone. Get out of your bubble some more!
I know all that, yet it is somewhat useless information. Is it useful to know how to use OpenSSH? Sure thing, to many people in IT it is. Is it important to know who wrote it? No, it is not. It is irrelevant. Do you remember by heart who wrote all the tools you use daily? I don’t. It is not important to get anything done nor to master these tools.
Do you think the average developer knows who invented git or bash or who wrote gcc or nginx or postgres or react or kubernetes or whatever else? No, they don’t. They don’t care because they do not need to have this information to succeed.
We can all be grateful that these projects exist, yet that is not the point. The point is that you don’t need to have intricate knowledge of how they came into existence. You can just use them without it and that is fine.
This is basically the same comment you made a year ago.
Is this some sort of performance art, or are you just a grumpy old git who repeats themselves? (Not an attack: I am very definitely the latter myself.)
I think that both times you’ve totally missed the point of this article.
Speaking as a writer, that probably means you haven’t read it. I have lost count of the number of times, in the last year and a half as a daily-published writer online, that I have got angry comments from people who manifestly have not read the article. Often they then claim that they have, which directly implies that they have the reading skills of a five- or six-year-old… something I find more plausible than their angry claims of comprehension.
What this article says, which you failed to notice both times, is:
If you work in IT, you should know about OpenBSD;
That means: if you know about it, you will be better off;
That means 2 things:
You are using it – as in, you use OpenBSD code – and you ought to know that and be grateful;
You can very probably use the OS yourself and benefit from it;
It’s simple, it’s clear, it’s on-point, and there’s really not much to disagree with here.
“People in IT ought to know about it. If they don’t, here is why they should.”
That’s it. It’s not really something amenable to angry denouncement.
This story is a repost, so I don’t see why that’s okay but it’s somehow much worse for @fs111 to have the same opinion about it that they did last time.
Meh. I think I work in IT (whatever that means), I know enough about OpenBSD, and I don’t feel like those two facts have anything to do with each other. OpenSSH is great*, sure, but so is lots of other software people rely on all the time. They don’t need to know who wrote that. The provenance of one of many tools an IT person might use is just not important enough to make anyone better off.
As far as using OpenBSD itself, the article mostly just reads like a personal journey of OpenBSD discovery from one (now) true believer, which is great for them, but it’s not very interesting to me. Cool, it made your third laptop unbootable and you fixed it by disabling some of the laptop’s features and building a custom kernel. If I want to spend my free time breaking my stuff and then fixing it again, I can manage that without OpenBSD.
The pf example seems the most convincing to me (I’m not a fan personally, but trying to put myself in the shoes of someone who’s never heard of it), but it’s one of many in a list of examples that generally seem pretty weak. (For example, as a recovering mail admin, I’m not very impressed by “faster than exim + spamassassin”).
I think this sort of title is par for the course and in a world with plenty of overt clickbait I wouldn’t personally have bothered to complain about it. But it is a silly title, IMO: it suggests the article is going to give the reader some reasons to care about OpenBSD but the content seems more suited to existing OpenBSD fans.
* but it has a lot of the attributes that OpenBSD hates when it’s any other software—namely huge confusing config, code that’s only legitimately used in weird configurations, and features that are hard to implement securely
I am commenting because I disagree with the authoritative title in conjunction with the content presented under that title. Judging from the upvotes, I don’t seem to be the only one.
If the article had been called “OpenBSD - the story so far” or “What I like about the OpenBSD project” I would not have written the comment.
I think you’re missing the point, of the piece and of the title, and deliberately being hostile and confrontational about it, and I don’t know why.
(Aside: OpenBSD has Theo de Raadt and as such has no need at all of more people to be hostile and confrontational. ;-) )
What your proposed titles mean are:
“I like X and here is why”.
What the actual title means is:
“You already use tools that come from X, and so here is why X itself could be useful to you and save you money.”
That is not the same. It is not being controversial or challenging; it is saying “here is useful knowledge you might not have”. Apparently you take that as an affront, and I call this out both because it seems to me that you are grossly overreacting, and because you have been grossly overreacting for a year now and still have not assimilated the knowledge you were offered then.
If someone is asked “Hey, did you know X?” in 2022, then getting angry because they did not know it is an odd and unreasonable response. But if they are asked again a year later, and they are still angry, then that moves beyond odd and into borderline irrational.
There is something refreshing about doing git clone followed by make and it just works. No complicated toolchains to install, no endless build times. Wonderful!
I (usually) have this experience with repos that have a flake.nix file and a direnv trigger. But, that requires Nix (and optionally, direnv) to be available. In theory, it would work for every possible software project, though. And forever, which is the key… The fact that this one “happened” to compile, today, on your machine, doesn’t mean it wouldn’t break in the future, or on some other machine or OS, once the environmental assumptions are no longer met.
Is it really too much for a user to be happy that a tool just compiles? Yes, the code is not written for the year 2068 or 2349 or whatever, and I don’t care. I saw a tool here, did a git clone && make, and it worked, and that made me happy. I don’t care about the magical Nix OS or direnv (great tool, but beside the point). I don’t care about this working on a Unix that was abandoned 15 years ago or on some hardware platform that was fresh before the Berlin Wall fell. It is irrelevant to me. It worked for me. Had it not worked, I would not have called the POSIX police but probably shrugged and moved on with my life.
I don’t see how what I said is mutually-exclusive with what you said.
I definitely appreciate when something “just works;” I’m also acutely aware of how lucky/“brittle” such a situation is.
See, I’m 51 years old and have tried to resurrect or preserve old software projects (being kind of a data and old-experience hoarder… I also have a 2-year-old son and want to show him some of my early computing experiences “live”) and have encountered much failure and consternation, because it turns out that old builds (and old software in general) make MANY assumptions about their environment which slowly become invalid over time (thus breaking things)… which is when something like Nix proves its value (at least in theory), because its specific design intent is to encapsulate ALL environmental variants, which it terms “inputs”.
I suggest trying the effort of old software preservation/resurrection, and then coming back here, and you might then understand my perspective a little better. In the meantime, I assure you I agree with you!
It’s very nonstandard make (probably gmake, by the look of it), which is honestly sometimes understandable, but a lot of it seems to be in service of all kinds of custom flags set by way of custom env vars rather than standard CFLAGS etc.
It is a GNU-only makefile, but it does try to care about all the standard variables like `CC`, `CFLAGS`, etc. If you set those in your environment or with `make CFLAGS=-Os` they’ll get picked up. Other than that it doesn’t really listen to env vars at all. Most of the code in the makefile is to

- add `-lacl -lattr -lcap` on Linux, but not other OSes
- provide `make release`, `make ubsan`, etc., so I’m not stuck typing `make CFLAGS="-fsanitize=memory -fsanitize-memory-track-origins -fno-sanitize-recover=all"` all the time

There are some marked disadvantages to this, which mostly boil down to trying to treat Make as a build system instead of a command runner (it is not, though you can try to fake it a bit as long as you limit yourself to GNU make specifically).
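The pattern in question looks roughly like this (my own sketch from memory, not the actual GNUmakefile; names like `bfs` and `OS` are placeholders here):

```make
# Sketch only: uname-derived defaults plus convenience targets, the shape being discussed.
OS ?= $(shell uname)            # assumes uname output is predictable
CFLAGS ?= -O2 -Wall

ifeq ($(OS),Linux)
  LDLIBS += -lacl -lattr -lcap  # Linux-only libraries, skipped on other OSes
endif

# Linked by make's built-in "%: %.o" rule, using CC/CFLAGS/LDLIBS above.
bfs: bfs.o

# Convenience targets so nobody has to retype long CFLAGS by hand.
release: CFLAGS := -O3 -flto -DNDEBUG
release: bfs

ubsan: CFLAGS := -g -fsanitize=undefined -fno-sanitize-recover=all
ubsan: bfs

.PHONY: release ubsan
```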
In particular, it doesn’t handle platform-specific defaults at all well, or even correctly – you need a lot more than just passing -lacl depending on the platform (like checking whether these platform deps are installed: it seems reasonable to assume they are on Linux, because they are usually part of the base operating system, but they aren’t always, e.g. on systems where coreutils isn’t the default). This is the main problem, because currently there’s a lot of handholding builders need to do here if they deviate from those expectations.
Assumption that `uname` output is byte-for-byte reliable…

Odd, but not critical: setting custom CFLAGS gets rid of project-specific warning flags.
I noticed yesterday the use of Makefile instead of GNUmakefile – that’s fixed now, but it still errors out by default on BSD. :)
…
I do find it a bit fascinating that people reimplement `./configure`, but non-portably, in pure GNU Make with a number of inflexible assumptions, so often. This is what I, personally, would call the precise opposite of “just works”. I could offer some suggestions that don’t involve slow things like `./configure`, though…

(Reliance on `__has_include` is quite nice though.)
Oh no, someone call the POSIX police!
Snark aside, why is this a problem? If the author of the software can support everything they want to with the tools they use then that’s great. Can we move past this strange idea that using the most rudimentary and hard to use tools is the only true way to do anything? Nobody cares about Ultrix or Solaris or HP/UX or whatever anymore. The world has moved on.
Today you’re the one dropping support of legacy systems used by others, tomorrow someone will drop a legacy system used by you.
and then I adapt, like humans always do. Sorry, but I am not buying it.
Well, so why can’t you adapt to more complicated build systems, since they’re clearly needed by the majority of people?
100% agree. Yes, this is such a breeze. I somehow dislike the trend that one needs fat and complex toolchains like meson, ant, cmake to compile just 3 files of code.
These “modern version of old unix tool” projects are nowadays all written in Rust which is painful to compile. (I don’t care about Rust the language, but as an end user I don’t want yet another heavyweight toolchain to compile some small cli tool)
Don’t you just get it compiled?
Not parent but I almost never download pre-compiled executables. If I can’t easily build it then I probably won’t be using it. And better find out sooner than later.
I almost never install software that needs to be compiled, so there’s that.
They are often easier to install user-local via cargo, which compiles from source (at least by default) IME.
I’d imagine the market of people having cargo but no Rust toolchain installed is rather small?
This is a sign of a skilled capable software engineer.
It’s a sign that I’ve wasted too much of my life waiting for `./configure` to finish, and I don’t want to inflict that pain on anyone else anymore.

This is a sign not of a skilled capable software engineer, but of most computers having a C compiler installed by default.

Wait until the people with the weird OSes start showing up – we’ll see how fast a simple Makefile meets reality.
AIX, OpenVMS, and z/OS come immediately to mind.
VMS and MVS are definitely not going to work with your Makefile; they’re not Unices (well, MVS can pretend). AIX, Solaris, etc. are going to cause heartburn as they do things you don’t expect, or differently from other Unices. Turns out `./configure` has good reason for existing - special-casing `#ifdef _AIX` for weird quirks or missing headers gets unsustainable, fast. C23’s preprocessor header checks might make this better though.

I do rely heavily on things like `__has_include()`. Much better than a separate configure step IMO.

https://github.com/tavianator/bfs/blob/3.0.1/src/config.h#L31
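The pattern is simple enough to sketch inline (a minimal illustration, not the code behind that link; the header name is just an example):

```c
#include <stdio.h>

/* Detect an optional header at preprocessing time instead of with a configure step.
   __has_include is standard in C23 and a long-standing GCC/Clang extension. */
#if defined(__has_include)
#  if __has_include(<sys/acl.h>)
#    include <sys/acl.h>
#    define HAVE_SYS_ACL_H 1
#  endif
#endif

int main(void) {
#ifdef HAVE_SYS_ACL_H
    puts("built with ACL support");
#else
    puts("built without ACL support");
#endif
    return 0;
}
```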
The overlap of people running those and the people that want to use the presented tool on those is probably 0.
Then those OSes are not supported? Not everything in the world needs to be supported by all tools. If that is not the goal of the author, why bother?
I don’t get this endless “oh but my obscure OS is not supported therefore your project is bad” talk. If you must use these OSes then you may not get all the shiny tools. Tough luck.
The point isn’t “you don’t support $OBSCURE_OS, so you’re a bad programmer”. The point is that it is the ubiquity of C compilers, and not the skill or capability of the author, that accounts for being able to download the source and compile it without additional steps. An unskilled and incapable programmer could also produce C code that can be downloaded and compiled anywhere that `cc` and `make` are installed.

The author may certainly be skilled and capable, but that would be because of the code they wrote, not because of their choice of language and toolchain.
You could probably go even further and say “likely the same C compiler” as well these days.
No, quite a few Linux systems still ship with GCC.
It appears to be broken for cross-compilation though, as it does configuration based on the output of uname on the system it’s being built on.
That’s only to set some defaults. If you’re cross-compiling you can override them, e.g.
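Something along these lines (the exact variable names here are from memory, so check the build docs for the real spelling):

```sh
# Hypothetical cross-compile invocation: override the uname-derived defaults
# and point CC at a cross compiler.
make OS=Linux ARCH=aarch64 CC=aarch64-linux-gnu-gcc
```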
(I should probably add that to the docs.)
Hypothetically I could grab that info from `$(CC) -dumpmachine` so you’d only have to set `CC`, but those target tuples are way harder to parse than `uname`. And `gcc -dumpmachine` doesn’t listen to `-m32`.

You need to install more dependencies: https://github.com/tavianator/bfs#installation
Or build without them: https://github.com/tavianator/bfs/blob/main/docs/BUILDING.md#dependencies
Yes, I read the readme file after the compilation attempt. At first I thought about arguing that a “full build toolchain” could pull the dependencies automatically, but then I lost the motivation to argue and removed my comment. But you replied before I deleted it, so thanks I guess.
I find the wording quite misleading. There is no infection. They just use a standard feature that people have used for a decade to restrict keys and run commands on remote hosts. Who would have guessed that you can do malicious things if you can run arbitrary commands…
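For anyone who hasn’t run into it, these are just the documented sshd(8) authorized_keys options; a made-up example of the legitimate use looks like this (key truncated):

```
# ~/.ssh/authorized_keys: this key can only run the backup script, no PTY, no forwarding.
command="/usr/local/bin/run-backup",no-pty,no-port-forwarding,no-agent-forwarding ssh-ed25519 AAAAC3Nza...truncated... backup@example.org
```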
yeah, it is not ideal wording. the benefit from an attacker’s perspective is that it’s a subtle way to get persistence after a successful intrusion (without needing a rootkit). I do think the post makes that fairly clear but the headline isn’t great.
This is the Cambridge-dictionary-level definition of what is going on, and it is the de facto way ‘infection’ has been used in infosec since at least the days when ‘viruses’ were being talked about (so, the mid eighties), and used broadly at that: (host, file, binary, registry, …) infected by (…). It is also the same terminology EDR tools use to this day.
Piggybacking on standard features to hide in plain sight is very much a desired trait for an attacker, and a kind of misuse to definitely consider when introducing, as you put it, a ‘standard feature’. I did not know about this property of OSSH keyfiles and consider it an anti-feature big enough that I will absolutely patch it out on the few machines I still run OSSH on.
This one is great and definitely goes into both my red and blue teaming arsenals – even more so now. High-entropy blocks of data are suspect in ‘text files’ but absolutely expected in key files. Techniques published by the likes of THC, Phrack, etc. very quickly become common practice.
Seems isopenbsdsecu.re is already on it.
Yeah, they’re obsessed with doling out poorly worded opinions about everything OpenBSD does.
Why would you call it poorly worded? It seems like a fairly level-headed assessment of OpenBSD’s security features. There’s praise and disapproval given based on the merits of each, comparing to other platforms as well.
If your takeaway from reading that website is a fairly level-headed assessment of anything then I’m not sure what to tell you. It’s my personal opinion that it’s anything but that.
The person who’s maintaining the website is one of the people who talk the talk but don’t walk the walk, i.e. a blabbermouth.
Qualys, on the other hand, is actively trying to exploit the latest OpenSSH vulnerability and found some valid shortcomings in OpenBSD’s malloc. otto@, who wrote otto-malloc, acknowledged them and is already working on an improved version.
Programmers have a long and rich history with C, and that history has taught us many lessons. The chief lesson from that history must surely be that human beings, demonstrably, cannot write C code which is reliably safe over time. So I hope nobody says C is simple! It’s akin to assembly, appropriate as a compilation target, not as an implementation language except in extreme circumstances.
Which human beings? Did history also teach us that operating a scalpel on human flesh cannot be done reliably safe over time?
Perhaps the lesson is that the barrier of entry for an engineering job was way higher 40 years ago. If you admitted surgeons to a hospital after a “become a gut-slicer in four weeks” program, I don’t think I need to detail what the result would be.
There’s nothing wrong with C, just like there’s nothing wrong with a scalpel. We might have more appropriate tools for some of its typical applications, but C is still a proven, useful tool.
Those who think their security woes will be solved by a gimmick such as changing programming language are in for a very unpleasant surprise.
Given the number of memory safety bugs that have been found in 40-year-old code, I doubt it. The late ‘90s and early 2000s exposed a load of these bugs because this C code written by skilled engineers was exposed to a network full of malicious individuals for the first time. In the CHERI project, we’ve found memory safety bugs in code going back to the original UNIX releases. The idea that there was some mythical time in the past when programmers were real men who never introduced security bugs is just plain wrong. It’s also a weird attitude: a good workman doesn’t blame his tools because a good workman chooses good tools. Given a choice between a tool that can be easily operated to produce good results and one that, if used incredibly carefully, might achieve the same results, it’s not a sign of a good engineer to choose the latter.
Back then, C programmers didn’t know about memory safety bugs and the kinds of vulnerabilities we have seen over the last two decades. Similarly, JavaScript and HTML are surely two languages which are somewhat easier to write than C and don’t suffer from the same classes of vulnerabilities. However, 20 years ago people wrote code in these two languages that suffers from XSS and other web-based vulns. Heck, XSS and SQLi are still a thing nowadays.
What I like about C is that it forces the programmer to understand the OS below. Writing C without knowing about memory management, file descriptors, and processes is doomed to fail. And this is what I miss today, and maybe what @pm hinted at in their comment. I conduct job interviews with people who consider themselves senior, and they only know the language and have little knowledge of the environment they’re working in.
Yes, and what we have now is a vast trove of projects written by very smart programmers, who do know the OS (and frequently work on it), and do know how CPUs work, and do know about memory safety problems, and yet still cannot avoid writing code that has bugs in it, and those bugs are subsequently exploitable.
Knowing how the hardware, OS (kernel and userspace), and programming language work is critical for safety or you will immediately screw up, rather than it being an eventual error.
People fail to understand that the prevalence of C/C++ and other memory-unsafe languages has a massive performance cost: ASLR, stack and heap canaries, etc., and then in hardware PAC, CFI, MTE, etc., all carry huge performance costs on modern hardware, and are all necessary solely because the platform has to mitigate the terrible safety of the code being run. That’s now all sunk cost of course: if you magically shifted all code today to something memory safe, the ASLR and canary costs would still be there unless you were super confident enough to turn ASLR off in your OS and compile canary-free, and either way the underlying hardware is permanently stuck with those costs.
Forcing the programmer to understand the OS below could (and can) happen in languages other than C. The main reason it doesn’t happen is that OS APIs, while being powerful, are also sharp objects that are easy to get wrong (I’ve fixed bugs in Janet at the OS/API level, so I have a little experience there), so many higher-level languages end up with wrappers that help encode assumptions that must not be violated.
But, a lot of those low level functions are simply the bottom layer for userland code, rather than being The Best Possible Solution as such.
Not to say that low level APIs are necessarily bad, but given the stability requirements, they accumulate cruft.
The programmer and project that I have sometimes used as a point of comparison is more recent. I’m now about the same age that Richard Hipp was when he was doing his early work on SQLite. I admire him for writing SQLite from scratch in very portable C; the “from scratch” part enabled him to make it public domain, thus eliminating all (or at least most) legal barriers to adoption. And as I mentioned, it’s very portable, certainly more portable than Rust at this point (my current main open-source project is in Rust), though I suppose C++ comes pretty close.
Do you have any data on memory safety bugs in SQLite? I especially wonder how prone it was to memory safety bugs before TH3 was developed.
I think it did. It’s just that the alternative (not doing it) is generally much much worse.
There is no alternative to the scalpel (well, except there is in many circumstances and we do use them). But there can be alternatives to C. And I say that as someone who chose to write a new cryptographic library 5 years ago in C, because that was the only way I could achieve the portability I wanted.
C does have quite a few problems, many of which could be solved with a pre-processor similar to CFront. The grammar isn’t truly context-free, and the syntax has a number of quirks we have since learned to steer clear of. `switch` falls through by default. Macros are textual instead of acting at the AST level. Everything is mutable by default. It is all too easy to read uninitialised memory. Cleanup could use some more automation, either with `defer` or destructors. Not sure about generics, but we need easy-to-use ones. There is enough undefined behaviour that we have to treat compilers like sentient adversaries now.

When used very carefully, with a stellar test suite and sanitisers all over the place, C is good enough for many things. It’s also the best I have in some circumstances. But it’s far from the end game, even on its own turf. We can do better.
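On the cleanup point: GCC and Clang already offer a rough, non-standard approximation of `defer` via the cleanup attribute (the same idiom systemd uses). A minimal sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* Called automatically when the annotated variable goes out of scope;
   receives a pointer to the variable itself. */
static void free_ptr(void *p) {
    free(*(void **)p);
}

/* Non-standard GCC/Clang extension standing in for a real `defer`. */
#define AUTO_FREE __attribute__((cleanup(free_ptr)))

int main(void) {
    AUTO_FREE char *buf = malloc(64);
    if (!buf)
        return 1;
    snprintf(buf, 64, "freed on every exit path");
    puts(buf);
    return 0;   /* buf is freed here without an explicit free() */
}
```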
I was wondering why the repo owner seemed so familiar!
I don’t think that moving from a language that e.g. permits arbitrary pointer arithmetic, or memory copy operations without bounds checking, to a language that disallows these things by construction, can be reasonably characterized as a gimmick.
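To make it concrete, here is the sort of thing C accepts without complaint and a bounds-checked language rejects or traps by construction (a deliberately buggy sketch, not anyone’s real code):

```c
#include <string.h>

/* Nothing here checks that dst is large enough: the compiler is happy either way. */
static void copy_name(char *dst, const char *src, size_t len) {
    memcpy(dst, src, len);                       /* unchecked copy */
}

int main(void) {
    char name[8];
    const char *input = "definitely longer than eight bytes";
    copy_name(name, input, strlen(input) + 1);   /* out-of-bounds write: undefined behaviour */
    return 0;
}
```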
This isn’t a great analogy, but let’s roll with it. I think it’s uncontroversial to say that neither C nor scalpels can be used at a macro scale without significant (and avoidable) negative outcomes. I don’t know if that means there is something wrong with them, but I do know that it means nobody should be reaching for them as a general or default way to solve a given problem. Relatively few problems of the human body demand a scalpel; relatively few problems in computation demand C.
That’s a poor analogy.
What we would consider “modern” surgery had a low success rate and a high straight-up fatality rate.
If we are super generous, let’s say C is a scalpel. In that case we can look at the past and see that a great many deaths were caused by people using a scalpel, long after it was established that there was a significant difference in morbidity between a plain scalpel and a sterilized one.
What we have currently is a world where we have C (and similar), which works significantly better than all the tools that preceded it, but is also very clearly less safe than any modern safe language.
There is an update from Theo which explains the whole design and its benefits in great detail: https://marc.info/?l=openbsd-tech&m=166874067828564&w=2