The GitHub issue where this was discussed offers some better context on this ordeal: https://github.com/redis-rs/redis-rs/issues/1419
Redis Inc wants control over the crate to guarantee that their customers receive fixes and new features in a timely manner (quote from Mirko Ortensi, a Redis Inc employee):
I have observed an increased interest in a Redis client library for the Rust language. Similarly to what has happened for other client libraries, besides contributing to the library itself, I’d love to offer Redis users (and customers) guarantees on the release cycle, bug fixing, fast support in escalations, and a realistic roadmap.
…but the community at large is worried that Redis Inc will hinder support for Redis-compatible databases. Support for Valkey (a direct fork of Redis owned by the Linux Foundation) is obviously not a priority for Redis Inc (quote from Madelyn Olson, a Valkey maintainer):
I’m more worried about the lack of an open governance and Redis prioritizing their functionality at the expense of others. An example is client side caching in redis-py, https://github.com/redis/redis-py/blob/3d45064bb5d0b60d0d33360edff2697297303130/redis/connection.py#L792. I’ve tested it and it works just fine on valkey 7.2, but there is a gate that checks if it’s not Redis and throws an exception. I think this is the behavior that might spread.
There is also more context on this issue in the thread.
Ultimately this seems to have been settled amicably, with the current maintainers retaining control of the crate and its releases and Redis Inc working with the rest of the community.
To me it looks like Redis Inc wants to have their cake and eat it too, with them probably not having enough manpower to maintain the client libraries but also wanting strict control over the releases so that they can provide their clients with fixes when needed. They also don’t care much about compatibility with competing open source databases, which means they might either passively or actively block PRs and hinder support for those once they’re in control of the clients.
To me it looks like Redis Inc wants to have their cake and eat it too, with them probably not having enough manpower to maintain the client libraries but also wanting strict control over the releases so that they can provide their clients with fixes when needed.
It sounds like they’re pricing their product badly if they can’t afford to develop and maintain client libraries to the standards that their customers want.
Or maybe they’re just not willing to hire, if they figure they can use labor from the open source community instead.
I spent a considerable amount of time moving my Nix config to snowfall lib. I’m not completely convinced it was a good use of my time, but these past few weeks I’ve been feeling like not much is. Tomorrow I’ll be buying a new stovetop to replace the old extremely inefficient one that came with my kitchen, biking in the city, and maybe considering what a good use of my time would look like.
Puzzling over UnRAR’s source code, probably. I just ported the filename encoding algorithm, and what a mess that is! Why isn’t it just UTF-8? My goal with this project is to make sense of the format so I’m gonna try to simplify this part, but I’m not making any promises.
Other than that, doing the laundry, seeing friends, and playing Mario Kart with my partner on the used Wii I got last week.
I’m laughing a bit at this because I remember those days. WinRAR was first released in 1993 and UTF-8 was first presented at USENIX in… 1993. There was no OS support right away. Keep in mind that desktop PC users would have been using DOS and Windows 3.1 at the time, which barely had support for any character encoding stuff at all other than globally configuring code pages and hoping for the best. It was truly the Wild West back then for internationalization work.
Yeah the older versions of the format are encoded using the global OEM code page… But this one in particular is (I think) much older than Unicode and the flag that enables parsing this format in unrar’s source code is called UNICODE!
I suppose Roshal had his reasons for doing it this way; my theory is that it’s to keep the original encoding while also making it easy to translate to Unicode. The test string I included is valid Shift-JIS up until the 0 byte, for example, and the rest of the string contains instructions on how many bytes each character takes up and what to prefix them with to turn them into valid Unicode.
That’s reminding me of the old COMPOUND_TEXT atom type in X11.
There is a bit of amusement in the author sysadminning for their father, who then also sysadmins for others. It’s also very understandable/relatable.
Being able to say something mildly contrived like “I’d like to have Firefox installed with home, pocket, et al, disabled”, and then 10-30 minutes later, you get told it’s ready for you upon next restart, no more work required. Sounds like a dream.
I do wonder if this would be a task suitable for AI in a way (I know it’s not, I just tried and it constantly gave wrong paths for everything)
You can absolutely do that with NixOS and home-manager :)
I used to be the same. However I quickly learned that I am not available enough for my users.
My partner hates maintaining her own system, but she hates it even more when a printer stops working, or when she needs to install some new software, and I’m unavailable.
So even though she does hate maintaining her own system, she still wants one that she can manage herself. While I agree with most of what is said, I wouldn’t say these non-enthusiast-friendly OSes are based on “fundamentally wrong” principles.
I maintain my position that they are based on fundamentally wrong principles, even in the case you described. An intentionally limited, non-enthusiast-oriented OS, such as the GNOME OS proposed in the blog post linked from mine, would likely not work for your partner, unless she does not wish to install software that isn’t part of said OS (nor of a third-party software store like flathub).
A general purpose operating system will have more packages, and can still remain maintainable by someone who’s not a sysadmin by trade. Limiting software availability is not helping non-enthusiasts.
Now that I can agree with. A good clarification.
On the NixOS computer I set up for my partner I enabled appimages (installed/updated through Gear Lever), Flatpak and the GNOME Software Center, so she can install and update programs without my intervention.
She definitely doesn’t want to maintain the system herself, so stuff like drivers and system software are managed by me and generally shouldn’t break because I handle the updates myself (and even if they do, she can boot from an old generation until I’m available to take a look), but she might need to install new programs or update Chrome every now and then, so I think this is a nice middle ground.
That’s clever. I like it!
I mean, that sounds like the setup still makes sense, but the user is allowed/able to fix stuff on their own, either using the same method, or just doing it and then having etckeeper or similar running to record the change and codify it afterwards.
Working on my RIIR version of unrar. The fun part is that RAR is actually three incompatible formats, and while the most recent revision is well documented, the older ones are kind of defined by the unrar source code and the second one in particular is messy and full of minor revisions and “deprecated” behavior.
I got it to parse all the archive headers, which means that I can list the files inside all three format versions!
I’m getting a Wii from marketplace tomorrow. I’ve tried emulating Wii games on Dolphin but those that require motion controls are just waaaay too fiddly to play without the proper controllers. I’m really looking forward to playing some games I’ve been missing out on because of the motion controls. I’m also considering getting a Dolphin Bar to get the best of both worlds, now that I’m gonna have some original Wii controllers to use it with.
I also started using my split ergo keyboard full time, because some of the cheap gateron blue knockoffs on my old 30 euro keyboard are not registering well anymore, and I really don’t feel like desoldering them… I think I’ll steal some ideas from the thumb cluster post that’s on the front page right now because the weird layout I hacked together is not working very well for me.
Xe switched to split keyboards years and years ago and will never look back. Hoping they work out great for you too!
Taking a break until Wednesday. I just came back from vacation with friends and I decided I’ll give myself a day of buffer between coming home from a vacation and going back to work from now on. I guess I’ll play some videogames and maybe try to write a blog post about something. I’ve been wanting to write more so I should just start at some point.
At work leadership decided to switch the teams around (again) and I’ll be starting to work on a new team which will basically pick up the slack on customer bugs/requests for other teams. I did not ask for this change of teams and focus and very much liked the previous team so I’m kind of bummed out about this whole thing. I’ve come around to thinking that maybe it’ll be nice to interact with more people and get to know the whole codebase and the product much better, but I’m worried that it’ll be very stressful.
Was expecting the obligatory shitpost, and there it is elsewhere in the comments (I’m not referring to the CVE itself).
Yes, safer languages are better. Yes, it’s desirable to rewrite / port to them from C. No, C is not a good language to target in 2024, for most purposes (/me eyes his Amstrad CPC in the corner).
However I don’t think the C shitposts add much value.
How much effort would it take to:
rewrite or port all of X.org to a safer language
ensure the safety features of that language worked in the face of any weird shit the old protocols and implementations might be doing
ensure that every popular system X.org is developed on has first class support for the safer language
ensure that every popular system X.org runs on is at least targeted by the safer language
upgrade all possible systems to the new replacement
I believe this would be a project of comparable scale to Wayland, and that’s taken 16 years and still isn’t a complete replacement for many folks, despite several vendor attempts to serve it raw.
That’s not a criticism of Wayland: my point is that software takes time (and thus money; either sponsorship or donated time with associated opportunity costs). Lots of it. From 23 years ago:
https://www.joelonsoftware.com/2001/07/21/good-software-takes-ten-years-get-used-to-it/
I’m a big Rust advocate, but… we already solved this particular problem many years ago when we realized that the X server has a large attack surface, by not running it as root. Most native-Xorg systems should be running without root these days, and every Wayland system runs XWayland as non-root. The X protocol doesn’t have any privilege separation, so if you’re trying to fully containerize X11 applications you already need to run a dedicated XWayland instance in the container to lock down its privileges anyway… which makes this whole thing basically irrelevant.
The X.Org server is practically unmaintained outside of XWayland, so I don’t think we should be spending any time worrying about real or potential CVEs which don’t affect systems that already implement the X11 protocol securely.
It’s been a long time since I’ve looked inside X.org but I have a very strong feeling that, especially due to its age, it’s going to have a lot of patterns inside it that are both correct and not-borrow-checker-compatible. One of my personal rules for “porting” is that there’s two modes:
porting, where you’re converting code from one language/framework/platform/whatever to another
working on software improvements
The only way, in my experience, that you can do a successful port is by very rigidly keeping everything as architecturally similar as possible and with feature parity. Second system effect is too strong and there’s a huge risk that you end up with a new piece of software that implements half of the old system, plus some new stuff, but doesn’t actually fully replace the original.
You didn’t say Rust, but I’m sure people are thinking it. My speculation (informed by having worked with multi-decade-old code many times) is that there would have to be either a massive re-architecture effort or vast swaths of unsafe code to move Xorg to Rust. And I’m also going to speculate that Xorg doesn’t have a formal test suite; even if it does, Hyrum’s Law applies here: every observable behaviour of Xorg is now “part of the API” that userland software somewhere is going to be relying on. If the rewritten version isn’t a drop-in replacement, there’s going to be enough users complaining about things being broken that there’s going to be a serious traction issue. Some of the code that talks to Xorg is going to be 30+ years old and while it won’t have a huge userbase… there’ll be enough for it to be a huge problem.
For my taste, you’ve just expressed precisely why it’s both obtuse and irrational (ie borderline trolling) to post little quips about x.org being implemented in the “wrong” programming language.
I think we all agree what ‘correct’ means. If the code is correct, how come it has a vulnerability? I’d posit that it is in fact not correct. It might be possible to make it correct. We have tools to help with that.
The borrow checker is just one tool people can use to improve software quality. Another is to e.g. use fuzz testing or use program extraction from a proof assistant. Why not just qualify the label ‘correct’ then? There is a difference between suspecting a program correct, and having proven it.
Allowing the term is equivalent to accepting a proof sketch as a proof. We should have higher standards by now.
I think we all agree what ‘correct’ means. If the code is correct, how come it has a vulnerability? I’d posit that it is in fact not correct. It might be possible to make it correct. We have tools to help with that.
Very fair. The point I was going for is that code can be correct while also behaving in a way that’s incompatible with e.g. Rust’s borrow checker. The borrow checker with the “many read XOR one mutable” rule is one approach for guaranteeing correctness but isn’t strictly the only way.
it’s going to have a lot of patterns inside it that are both correct and not-borrow-checker-compatible
Yup that’s exactly what I meant by “ensure the safety features of that language worked in the face of any weird shit the old protocols and implementations might be doing”.
I should have made the implication clear: I believe there’d be a lot of such weirdness in that old and complex a codebase.
You didn’t say Rust
Yup :) Very deliberately so :) There are enough flamewars over that topic that I thought I’d keep it technology-neutral.
On the topic of “how much effort would it take to X”, I think a good joke here is that it might be possible that more effort has been put into writing apologetics for memory unsafety than solutions for memory unsafety!
Seriously though, how about we all just come clean and admit that the vast majority of open source work is done on an unpaid volunteer basis, which, by necessity of the global economy under which we all live, will only ever have an incredibly small amount of energy put into it. So no matter what engineering task you’re talking about, if it is non-trivial it will probably take at least half a decade longer to complete than if it were developed professionally, with adequate resources and all that.
The Morello system under my desk isn’t running X.org, it’s running the KDE Wayland server, but the quantity of C code is similar. The entire graphics stack, from the in-kernel drivers through the userspace ones and the display server to userspace GUI apps, is memory safe. A bug of this nature would crash the display server but would not trigger privilege elevation.
The only reason that this isn’t mainstream is that CPU vendors are not seeing demand. Arm would move their post-Morello architecture to final if one major partner demanded it. With zero rewrites, we could eliminate these vulnerabilities in one upgrade cycle if the industry chose to, with existing proven technology.
Rewriting in a safe language is a much larger project than Wayland. Some Wayland compositors are written in memory-safe languages but Wayland reuses a lot of code from X.org. A lot of the low-level bits of the graphics stack are shared.
Oh! That’s news to me. I thought one of the driving factors behind Wayland was that the X.org code was a PITA to maintain.
I suspect that was largely a retroactive justification, but where it’s true is typically higher up the graphics stack. All of the KMS / DRI code, for example, is shared.
Was this in response to a different post, perhaps one that was merged and deleted? It now appears as a reply to the CVE story itself, and I’m scratching my head at the term “shitpost” being applied here.
Not exactly a reply, more of a commentary on that kind of reply. Have edited to clarify, thanks for the question.
Sorry for that. For the record I meant “obligatory” as in “yes, Xe has posted about this one too”, but I see that it could be read in a serious manner. I agree with you that it would not be desirable to rewrite the X server in a safe language; I just thought it was funny in the moment.
Obligatory https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2024-9632/
Continuing to try and get the webcam working on the Surface Pro 5. There’s some weird stuff going on in libcamera about signing shared libraries that doesn’t go very well with Nix’s build process. I think I found a fix for it and now I’m waiting for my desktop to rebuild two versions of webkitgtk so I can try it out. I really, really hope I don’t ever have to compile webkitgtk ever again in my life.
On Halloween night I’m going to a local electronic music festival to see Dean Blunt do a DJ set, which will probably be very weird, and then Mica Levi, Yaeji and some others. And then a few days of vacation with some friends. This time I took an extra day off work after I come back from the trip so I can have a good rest.
Waiting for my partner’s new (refurbished) Surface tablet to compile the linux-surface kernel, apparently… I thought this was going to take much less time.
Edit: it was taking almost 2 hours so I decided to kill it and build the kernel on my desktop PC. I should be able to make the tablet use the results by running nixos-install with the --option substituters <URL> flag and serve the store with nix-serve on the desktop.
Well, now I’m here watching my desktop recompile webkitgtk because it depends on wireplumber, and wireplumber depends on libcamera, and libcamera does not work with the default compilation flags on the Surface and causes wireplumber to segfault, and wireplumber crashing brings down sound altogether… and I’m just wondering if it wouldn’t be better to leave Windows on this damn thing.
I found that improving typing speed improved my code, not because I got the code down faster, but because I was willing to type more. When I started programming, aged 7, I typed with two fingers and had to search the keyboard to find keys. I used single-letter variable names everywhere because typing longer ones was hard. I rarely wrote comments, because writing full English sentences was hard. By the time I was an adult and I typed far faster than I ever wrote with a pen, writing a meaningful variable name cost nothing but the value when I came to read the code was immense. Cost-benefit tradeoffs are very different when the cost drops to nearly zero.
This is the first version of this argument that has sounded convincing to me. “The importance of not typing slowly”.
Absolutely, and I’m noticing this now that I’m basically relearning how to type because I assembled a split keyboard and configured it with the colemak layout. I can type about 35wpm in monkeytype with only lowercase letters and spaces at the moment, but I haven’t committed all the symbols to memory yet so typing code is very slow.
I tried doing a full day of work with the new keyboard yesterday but it didn’t go well. I found that having to actively think about where all the symbols are and making more errors while typing makes trying out different approaches much less desirable, and it constrains my ability to reason about code simply because I’m holding one more thought process in my head that wasn’t there before.
Keep with it. It’s frustrating, but worth it in the end. This is your chance to shed any bad habits you picked up the first time you learned to type.
Definitely! I’m practicing every day and I started using the new keyboard almost exclusively when I’m off work. I think the split keyboard did a lot for my motivation. I really want to switch to it because it feels very comfortable to use (especially so with colemak) and, well, it’s a shiny new toy :) And since it uses QMK firmware I can even identify uncomfortable key combos or symbol positions and iterate to improve them, which triggers the tinkerer part of my brain.
There are adults who still do this. Anyone who wants their code to be accepted and adopted by other people should absolutely learn to touch type. It’s never too late.
I’d argue that my typing speed mostly helps me in communication and research rather than coding, since I spend a lot more time thinking than I spend typing, in most of my projects. As @eterps pointed out in a different thread, that’s also an essential part of developing software, but I want to make a slightly different point.
However, touch typing is still one of the most useful computer skills I ever acquired. It certainly allows me to communicate in messaging apps effectively because the cost of typing is very low for me. After all, IRC and flamewars in FOSS channels are where I practiced touch typing most. ;)
It also helps me use different devices. My current laptop has a keyboard with a British layout, which includes some design decisions beyond my comprehension, such as the pipe character (|) only accessible via AltGr-Shift-L and a key for #/~ above the Enter key. But I don’t actually care what is written on the keys because I don’t look at those keys anyway — my layout is set to US English with | where it, arguably, should be; and #, ~, etc., where I expect them to be.
I agree with you about the importance of written communication in talking about software design. And I, too, have a non-English (Spanish in my case) laptop and changed the layout to US English, which took a little getting used to, but was fine. …And I also stand by my assertion that there are adults who I’ve seen with my own eyes use single-letter variable names everywhere because they don’t know how to touch type.
I installed Xubuntu on my partner’s laptop a while ago and they’ve been very happy with the switch, save for when the time came to update.
They’ve been using this laptop from 2013 with 4GB of RAM and a Windows 10 installation on an HDD, which was atrociously slow. It took several minutes to reach the user select screen, and after logging in it was 5 more minutes for Windows to stop spinning the disk before the system was usable.
After trying to alleviate the pain a couple times I figured it was time to move the system to an SSD and, after figuring out my partner’s needs and with their consent, I also installed Xubuntu on the laptop. Haven’t heard any complaints other than the occasional snag with sleep/hibernation, but since the laptop rebooted in a couple seconds they weren’t a big issue.
The big pain point in my experience is still the updates: the latest dist-upgrade from 22.04 to 24.04 failed partway through for some reason I haven’t yet determined, and left them with an unbootable system. I had to come in with an Ubuntu USB and chroot into the installation to make it finish the upgrade and fix some packages that were uninstalled for some reason. Now the laptop works fine but they’re experiencing random freezes (probably due to the nouveau driver or some faulty hardware). I could probably fix it but we’re kind of using it as an excuse to get a newer and less bulky laptop, so I guess that worked out in our favour :)
The next laptop will also have Linux on it, but I’m gonna install an immutable distro this time.
I also recently had an issue with my NixOS laptop after an upgrade where it would freeze on high CPU loads and didn’t come back from sleep (my fault for setting the kernel version to latest, lol), and I urgently needed it to work for an event I was attending later that evening. I really appreciated that I could boot into the previous generation that still worked and resolve the issue later.
Maybe I’m just high on the Nix pill, but I think immutable distros are a huge improvement in usability for everyone using Linux. Things still fail from time to time, and being able to temporarily switch back to a system that still works until the issue is fixed is one of the missing pieces to bringing Linux to the masses, at least in my opinion.
using mind-blowing techniques like using Uint8Arrays as bit vectors
I think this is reflective of a real culture difference. For someone who is used to systems languages, a dense bit vector is a standard tool, as unremarkable as using a hashmap. One of the reasons that optimization is so frustrating in javascript is that many other standard tools, like an array of structs, are just not possible to represent. It feels like working with one hand tied behind your back. When you’re used to being able to write simple code that produces efficient representations, it’s hard to explain to someone who isn’t used to that just how much harder their life is.
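For concreteness, the whole trick is a flat byte buffer plus manual bit addressing. A minimal sketch (TypeScript, all names my own):

```typescript
// Dense bit vector over a Uint8Array: one bit per flag instead of one
// (pointer-sized or worse) boolean per flag in a regular array.
class BitVector {
  private bits: Uint8Array;

  constructor(size: number) {
    this.bits = new Uint8Array(Math.ceil(size / 8)); // 8 flags per byte
  }

  set(i: number): void {
    this.bits[i >> 3] |= 1 << (i & 7); // byte index, bit within byte
  }

  clear(i: number): void {
    this.bits[i >> 3] &= ~(1 << (i & 7));
  }

  get(i: number): boolean {
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0;
  }
}

const seen = new BitVector(1_000_000); // ~125 KB, one contiguous allocation
seen.set(42);
console.log(seen.get(42), seen.get(43)); // true false
```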
Another reason is that the jit is very hard to reason about in a composable way. Changes in one part of your code can bubble up through heuristics to cause another, seemingly unrelated, part of your code to become slower. Some abstraction method might be cheap enough to use, except on Tuesdays when it blows your budget. Or your optimization improves performance in v8 today but tomorrow the heuristics change. These kinds of problems definitely happen in c and rust too (eg), but much less often.
Even very simple things can be hairy. Eg I often want to use a pair of integers as the key to a map. In javascript I have to convert them to a string first, which means that every insert causes a separate heap allocation and every lookup has to do unpredictable loads against several strings. Something that I would expect to be fast becomes slow for annoying reasons that are nearly impossible to work around.
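To make that concrete, here is a sketch of both versions; the packed-key workaround only holds under the bounds noted in the comment, so treat it as illustrative rather than general:

```typescript
// The JS status quo: building a string key allocates on every insert,
// and every lookup has to hash and compare heap-allocated strings.
const byString = new Map<string, number>();
const setSlow = (a: number, b: number, v: number) =>
  byString.set(`${a},${b}`, v);

// Workaround when both ints are known to be nonnegative and below 2**26:
// pack the pair into a single number. The result stays below 2**52 < 2**53,
// so it is exact in a float64 and round-trips losslessly.
const byNumber = new Map<number, number>();
const key = (a: number, b: number): number => a * 0x4000000 + b; // a * 2**26 + b

setSlow(3, 7, 42);
byNumber.set(key(3, 7), 42);
console.log(byString.get("3,7"), byNumber.get(key(3, 7))); // 42 42
```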
It’s definitely possible to write fast code in javascript by just turing-tarpitting everything into primitive types, but the bigger the project gets, the larger the programmer overhead becomes.
I definitely recognize the value of a familiar language though, and if you can get within a few X of the performance without too much tarpit then maybe that’s worth it. But I wouldn’t volunteer myself for it. Too frustrating.
That line also struck me in particular. Using a Uint8Array as a bitvector is definitely something I’ve thought of, if not even implemented, a few times in JS. As you said, JS seems to actively work against you when you’re trying to optimize your code in ways that would work in other languages, so it disincentivizes you from trying to reason about performance.
There’s a lot to be said about consistency in object representation and optimizations in “dynamic” languages. OCaml is renowned as a fast language not because it does crazy whole-program optimizations (the compiler is very fast precisely because it does no such thing) or because its memory representation of objects is particularly memory efficient (it actually allocates a lot; even int64s are heap allocated!), but because it is straightforward and consistent, which allows you to reason about performance very easily when you need it.
The thing about this is that it doesn’t even feel like “optimization” so much as non-pessimization of how one would store such data. A single block of memory seems straightforward, with the more complicated way being a bunch of separate heap allocated objects that are individually managed. That’s what ends up feeling nice to me about the languages JS is being contrasted with here: I just write the basic implementation that is a word-for-word reflection of the design and get a baseline performance right off the bat that lets me go for a long time (usually indefinitely) before having to consider “optimizing”. The productivity boost a language would have to offer in exchange for giving that up seems to me like it would have to be really, really high to be worth the tradeoff, and personally I haven’t felt like JS delivers on that.
That’s true, but there’s a big tradeoff in flexibility there. If you want to store these objects in contiguous memory now your language needs to know how big these objects are and may be in the future, which effectively means you need a type system of some sort. If you want to move parts of these objects out of them then you need to explicitly copy them and free the original object, so either you or your language needs to know where these objects are and who they belong to. If you want to stack-allocate them now your functions also need to know how big these objects are, and if you want the functions to be generic you’ll have to go through a monomorphization step, which means your compiler will be slower.
I’m not saying these properties are not valuable, but the complexity needs to lie somewhere. JS buys the ease of not having to keep track of all of this with a more complicated and “pessimized” memory model, while languages that let you have this level of control are superficially much harder to use.
Makes sense, but I’ve found that what I pay in each of those scenarios only comes up when the scenario concretely happens, and it’s usually a payment consistent with the requirement that can be made in a way pertinent to the context (at this point I tend not to address all of them upfront before they actually happen). The performance requirement, however, tends to either be always present or, even when it isn’t, be a mark of quality or discovery regardless (eg. if I can just have more entities in my game in my initial prototype I may discover kinds of gameplay I wouldn’t have otherwise).
I think I don’t consider those tradeoffs you’re listing very negatively because the type system usually comes with the language I use and I consider this to be a baseline for various sorts of productivity and implementation benefits at this point (which seems to also be demonstrated in the case of JS by how much TypeScript is a thing – it’s just that TypeScript makes you write types but doesn’t give you the performance and some other benefits that would’ve come with them elsewhere). Similar with stack allocation and value types.
I’m not seeing how the other side of that tradeoff is ‘flexibility’ specifically though, because my experience is that the type system lets me refactor to other approaches with help from the language’s tooling, and so far the discussion has been about how the JS approach is actually inflexible regarding that.
Re: ‘the complexity needs to lie somewhere’ – the point is that in the base case (eg. iterating an array of structs of numbers) there is actually less essential complexity and so it may not have to lie anywhere. It’s just that JS inserts the complexity even when it wasn’t necessary. It seems more prudent to only introduce the complexity when the circumstances that require it do arise, and to orient the design to those requirements contextually.
All that said, I’m mostly just sharing my current personal leaning on things that leads to my choice of tools. I just feel like I wasn’t actually that much more productive in JS-like environments, maybe even less so. On the whole I’m a supporter of trying various languages and tools, learning from them and using whatever one wants to.
I agree with everything you said. I’m just not sure that we should be defaulting to writing everything in Rust (as much as I like it) or some other similar language. I think I wish for some sort of middle ground.
Agreed on that. I definitely don’t think that the current languages – on any of the points in this spectrum – are clear winners yet, or that it’s a solved space. I avoided mentioning any particular languages on the ‘more control’ side for that reason.
Aside:
Lately I’ve been playing with C along with a custom code generator (which generates container types like TArray for each T and also ‘trait’ functions like TDealloc and TClone etc. that recurse on fields) of all things. Far from something I’d recommend seriously for sure, just a fun experiment for side projects. It’s been a surprising combination of ergonomics and clarity. Turns out __attribute__((cleanup)) in GCC and clang allows implementing defer as well. Would be cool to get any basic attempt at flow-sensitive analysis (like borrow-checking) working in this if possible, but I’ve been carrying on with heavy use of lsan (runs every 5 seconds throughout app lifetime) and asan during development for now.
Just chilling, going out, doing things in the real world… If I have some time to spare I’ll practice typing colemak-dh on the split keyboard (I got to letter P on keybr!) and maybe fix up the firmware for my Watchy.
Not much that I remember! I’m in the middle of rewriting the default firmware with arduino-cli but I’m mostly planning to use it as a normal watch. I’ve seen other alternative firmware with more features but afaik it’s mostly WIP. I think it would be cool to use it to get notifications from a phone with GadgetBridge, and apparently somebody’s implemented something similar for another ESP32 device, but it hasn’t really happened yet.
I was also in the middle of reimplementing the firmware and drivers in Rust, but LLVM and rustc both need significant patches to support the Xtensa arch (which is what runs on the ESP32) and trying to build that on NixOS has been challenging… so for now I shelved that project until the Xtensa patches have been merged upstream (hopefully sometime next year).
I finally got the split keyboard all set up. Got the replacement microcontroller in the mail this morning and then spent the lunch break soldering it to the right half of the keyboard (with socketed pin headers this time). I was very glad to discover that the PCB came out of this ordeal completely unscathed! So I guess I’ll spend this saturday relearning how to type and customizing my layout :)
Have a picture. (Keycap location for the right half is a WIP and does not represent the final look of the keyboard.)
At work: continuing my investigation into why it’s taking so long to upload large files into our system. I almost hope I get to do some gnarly performance work and refactoring this time because the last part turned out just to be long held database locks, and the solution was to wait for my colleague to optimize another piece of code.
In my spare time I’m making some Nushell wrappers for all the various things I interact with at work (the usual stuff: Jira, Google Cloud, Kubernetes, GitLab) because… I just prefer the command line, and very often there’s an annoying amount of slow web pages and bad autocomplete and copying and pasting I have to go through just to get to the logs of the pods I just deployed, or to go through a ticket I’m working on, and so on. So now, little by little, I’m trying to automate all that annoying stuff away by piping some APIs together.
At home: waiting for a replacement microcontroller for my split Sofle keyboard to arrive. I killed the original one while desoldering it because I realized I was reading the wrong assembly instructions for the kit that I had and the microcontroller was supposed to be assembled facing the other way. I got socketed pin headers this time. I’m planning to put it back together and test whether the PCB still works fine, and if not I get to wait another week for a (discounted) replacement kit to arrive.
I feel very sorry for all users of those tools we have to wrap with command-line scripts. But I probably enjoy scripting and the command line as much as you do.
I wish you lots of fun automating this ;-). I also feel more productive in the terminal, so I try to reach most of those tools through the terminal if possible :-) or automate things so they remain out of my sight.
Thanks! I think we definitely think alike in this regard. I also feel much more productive in the terminal, and I love nushell in particular because of how easy it is to pipe stuff that speaks JSON together.
Today I discovered that Jira’s API is much more easily accessible than I thought (you just need to generate an API token and use HTTP basic auth) and I made a simple command to view my assigned open issues in a table with some info and links. It’s not much, but I enjoyed it :)
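In case it’s useful, the shape of the call is roughly this; I did it in Nushell, but here’s a hypothetical TypeScript rendering, with the site, account, JQL and endpoint written from memory (check your instance’s REST docs before trusting any of it):

```typescript
// Jira Cloud REST search over HTTP basic auth: the username is your
// account email and the password is an API token. Everything below
// (site, email, endpoint, fields) is a placeholder/assumption.
const site = "https://example.atlassian.net";
const auth = btoa("me@example.com:" + "<api-token>");

async function myOpenIssues(): Promise<void> {
  const jql = encodeURIComponent("assignee = currentUser() AND resolution = Unresolved");
  const res = await fetch(`${site}/rest/api/2/search?jql=${jql}`, {
    headers: { Authorization: `Basic ${auth}`, Accept: "application/json" },
  });
  const data = await res.json();
  for (const issue of data.issues) {
    // One row per issue: key, summary, and a clickable link.
    console.log(issue.key, issue.fields.summary, `${site}/browse/${issue.key}`);
  }
}

myOpenIssues();
```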
speaking of, is this your first DIY split? Do you have experience with homebrew electronics in general? Asking bc I’d like to get myself a fancy split kbd but am a bit intimidated by the assembly and parts picking.
Yes it’s my first DIY split and no I don’t have much experience with homebrew electronics, I only had a bit of experience with soldering before this. I was planning to make a blog post at the end of the process but this is the gist of it:
Find a split keyboard you like (there’s a list of vendors by region on the ergo mech boards reddit wiki) and look up all the parts and options. Do some research about what all of those things mean. Take a look at the build guide before buying.
I would recommend going with hotswap parts where you can: get a board that supports Kailh hotswap sockets for the switches, and definitely get socketed pin headers for the microcontrollers! If you go with normal pin headers and you accidentally burn the microcontroller it’ll be very hard to desolder. Hotswap sockets are very easy to solder, and for pin headers here’s a useful tutorial video.
Get some equipment for soldering. You’ll need a soldering iron, soldering wire, flux, a brass sponge, desoldering wick, kapton (heat resistant) tape, and, if you want, a desoldering hand pump. You don’t need expensive equipment for this; as long as you can set the temperature of the iron it’s probably fine. Look up some videos for beginners, seeing how it’s done is very useful if you’re starting out.
Use low temperatures when soldering. I saw a guide that recommended around 325°C if you’re starting out, and that’s worked out for me.
Learn how to clean and tin the soldering iron tip. You’ll need to do this when it looks like the tip isn’t conducting the heat very well or when the solder sticks to it; don’t raise the temperature.
If you need to desolder something, douse the tip of the desoldering wick in flux first! I didn’t do this at first and was wondering why the wick wasn’t working very well.
Remember to follow the build guides for your exact keyboard model closely :)
Assembly is really not that hard as long as you do some preparation first. The hardest part for me was soldering (and then desoldering ;_;) the microcontroller, and if you get socketed pin headers you’ll make your life a lot easier. Good luck!
while we’re at it: I notice most (all?) of the split keyboards have a cable connecting the two halves. Are you aware of a variant with a wireless link? I imagine bluetooth would add some latency but I’m not a supersonic typist anyway and I would appreciate having fewer cables lying around the desk.
No idea! I personally prefer cables so I haven’t looked into it.
I suppose that since most split keyboards have one microcontroller per half and they can operate independently, you could theoretically put BT modules on both halves and connect them to the PC as two separate devices, but then if you activate a layer on one half it will only affect that half. But again, I haven’t looked into it so maybe there’s other options.
Finishing the second half of my ergonomic mechanical keyboard, with any luck! I got a Sofle keyboard kit and managed to solder and assemble half of it just fine, I tried it and it all works.
I assembled the second half as well, but managed to mess something up while soldering (and also forgot to bridge the OLED just under the microcontroller, which is a massive pain) and now I get to figure out what’s wrong with it with very little electronics knowledge, yay!
I don’t see anything obviously wrong with how I soldered it so I’m probably gonna go ask in the kit vendor’s discord and present my findings. Hopefully I can get that done before the end of the weekend.
Ok I found the issue: the microcontroller in the right half of the keyboard needs to be mounted upside down in V2 of this design, which is what I have… I ordered a desoldering pump.
I guess you might’ve read this by now, but it’s upside down because the two sides use the exact same design, and you just flipped one. That’s so that when ordering PCBs for personal use, you don’t need 2 * $min-order-size, just one batch is enough!
I have a Sofle too, though I bought it second-hand, so pre-built. I found adjusting to columnar keys harder than the split. Hope you’ll like it!
EDIT: and in case you haven’t read it yet, don’t mess with the TRRS cable while powered to avoid shorts :)
it’s upside down because the two sides use the exact same design, and you just flipped one.
Yup, I figured! The problem is the vendor only had a guide for the V1, and I didn’t bother to look up the guide for V2… When I was assembling the left half I thought the jumper pads were somehow in charge of telling the PCB which side the microcontroller was on, but as it turns out, it does that but only for the OLED screen pins!
don’t mess with the TRRS cable while powered to avoid shorts :)
Yeah, at first I diagnosed the problem to be the TRRS jack because when I tested for continuity two of the contacts were bridged, but in retrospect the one that should not have been ground must’ve been connected to the pin which, when you flip the microcontroller, becomes the second GND pin. I also read up on the jack’s contacts and all the advice about not hotplugging the TRRS jack while the keyboard is on suddenly made a lot of sense :)
Anyway, progress report: after a lot of pain I managed to pry away the microcontroller, but I must have fried it in the process because it doesn’t respond anymore when I connect it to my PC. On the other hand, I learned how to properly use flux and the desoldering wick, and I gained a new appreciation for socketed pin headers, which I ordered along with a new microcontroller. I really, really hope that the PCB still works fine.
The GitHub issue where this was discussed offers some better context on this ordeal. https://github.com/redis-rs/redis-rs/issues/1419
Redis Inc wants control over the crate to guarantee that their customers receive fixes and new features in a timely manner (quote from Mirko Ortensi, a Redis Inc employee):
…but the community at large is worried that Redis Inc will hinder support for Redis-compatible databases. Support for Valkey (a direct fork of Redis owned by the Linux Foundation) is obviously not a priority for Redis Inc (quote from Madelyn Olson, a Valkey maintainer):
There is also more context on this issue in the thread.
Ultimately this seems to have been settled amicably, with the current maintainers retaining control of the crate and its releases and Redis Inc working with the rest of the community.
To me it looks like Redis Inc wants to have their cake and eat it too, with them probably not having enough manpower to maintain the client libraries but also wanting strict control over the releases so that they can provide their clients with fixes when needed. They also don’t care much about compatibility with competing open source databases, which means they might either passively or actively block PRs and hinder support for those once they’re in control of the clients.
It sounds like they’re pricing their product badly if they can’t afford to develop and maintain client libraries to the standards that their customers want.
Or maybe they’re just not willing to hire, if they figure they can use labor from the open source community instead.
I spent a considerable amount of time moving my Nix config to snowfall lib. I’m not completely convinced it was a good use of my time, but these past few weeks I’ve been feeling like not much is. Tomorrow I’ll be buying a new stovetop to replace the old extremely inefficient one that came with my kitchen, biking in the city, and maybe considering what would a good use of my time look like.
Puzzling over UnRAR’s source code, probably. I just ported the filiename encoding algorithm, and what a mess that is! Why isn’t it just UTF-8? My goal with this project is to make sense of the format so I’m gonna try to simplify it this part, but I’m not making any promises.
Other than that, doing the laundry, seeing friends, and playing Mario Kart with my partner on the used Wii I got last week.
I’m laughing a bit at this because I remember those days. WinRAR was first released in 1993 and UTF-8 was first presented at USENIX in… 1993. There was no OS support right away. Keeping in mind that desktop PC users would have been using DOS and Windows 3.1 at the time, which barely had support for any character encoding stuff at all other than globally configuring code pages and hoping for the best. It was truly the Wild West back then for internationalization work.
Yeah the older versions of the format are encoded using the global OEM code page… But this one in particular is (I think) much older than Unicode and the flag that enables parsing this format in unrar’s source code is called
UNICODE
!I suppose Roshal had his reasons for doing it this way, my theory is that it’s to keep the original encoding while also making it easy to translate to Unicode. The test string I included is valid Shift-JIS up until the 0 byte, for example, and the rest of the string contains instructions on how many bytes each character takes up and what to prefix them with to turn them into valid Unicode.
That’s reminding me of the old COMPOUND_TEXT atom type in X11.
There is a bit of amusement in the author sysadminning for their father, who then also sysadmins for others. Its also very understandable/relatable.
Being able to say something mildly contrived like “I’d like to have Firefox installed with home, pocket, et al, disabled”, and then 10-30 minutes later, you get told its ready for you upon next restart, no more work required. Sounds like a dream.
I do wonder if this would be a task suitable for AI in a way (I know its not, I just tried and it constantly gave wrong paths for everything)
You can absolutely do that with NixOS and home-manager :)
I used to be the same. However I quickly learned that I am not available enough for my users.
My partner hates maintaining her own system, however she hates even more when a printer stops working, or when she needs to install some new software, and I’m unavailable.
So even though she does hate maintaining her own system, she still wants one that she can manage herself. While I agree with most of what is said, I wouldn’t say these non-enthusiast-friendly OSes are based on “fundamentally wrong” principles.
I maintain my position that they are based on fundamentally wrong principles, even in the case you described. An intentionally limited, non-enthusiast-oriented OS such as the GNOME OS proposed in the blog post linked from mine, would likely not work for your partner, unless she does not wish to install software that isn’t part of said OS (nor of a third-party software store like flathub).
A general purpose operating system will have more packages, and can still remain maintainable by someone who’s not a sysadmin by trade. Limiting software availability is not helping non-enthusiasts.
Now that I can agree with. A good clarification.
On the NixOS computer I set up for my partner I enabled appimages (installed/updated through Gear Lever), Flatpak and the GNOME Software Center, so she can install and update programs without my intervention.
She definitely doesn’t want to maintain the system herself, so stuff like drivers and system software are managed by me and generally shouldn’t break because I handle the updates myself (and even if they do, she can boot from an old generation until I’m available to take a look), but she might need to install new programs or update Chrome every now and then, so I think this is a nice middle ground.
That’s clever. I like it!
I mean, that sounds like the setup still makes sense, but the user is allowed/able to fix stuff on their own, either using the same method, or just doing it and then having etckeeper or similar running to record the change and codify it afterwards.
Working on my RIIR version of unrar. The fun part is that RAR is actually three incompatible formats, and while the most recent revision is well documented, the older ones are kind of defined by the unrar source code and the second one in particular is messy and full of minor revisions and “deprecated” behavior.
I got it to parse all the the archive headers, which means that I can list the files inside all three format versions!
I’m getting a Wii from marketplace tomorrow. I’ve tried emulating wii games on Dolphin but those that require motion controls are just waaaay too fiddly to play without the proper controllers. I’m really looking forward to playing some games I’ve been missing out on because of the motion controls. I’m also considering getting a Dolphin Bar to get the best of both worlds, now that I’m gonna have some original wii controllers to use it with.
I also started using my split ergo keyboard full time, because some of the cheap gateron blue knockoffs on my old 30 euro keyboard are not registering well anymore, and I really don’t feel like desoldering them… I think I’ll steal some ideas from the thumb cluster post that’s on the front page right now because the weird layout I hacked together is not working very well for me.
Xe switched to split keyboards years and years ago and will never look back. Hoping they work out great for you too!
Taking a break until Wednesday. I just came back from vacation with friends and I decided I’ll give myself a day of buffer between coming home from a vacation and going back to work from now on. I guess I’ll play some videogames and maybe try to write a blog post about something. I’ve been wanting to write more so I should just start at some point.
At work leadership decided to switch the teams around (again) and I’ll be starting to work on a new team which will basically pick up the slack on customer bugs/requests for other teams. I did not ask for this change of teams and focus and very much liked the previous team so I’m kind of bummed out about this whole thing. I’ve come around to thinking that maybe it’ll be nice to interact with more people and get to know the whole codebase and the product much better, but I’m worried that it’ll be very stressing.
Was expecting the obligatory shitpost, and there it is elsewhere in the comments (I’m not referring to the CVE itself).
Yes, safer languages are better. Yes, it’s desirable to rewrite / port to them from C. No, C is not a good language to target in 2024, for most purposes (/me eyes his Amstrad CPC in the corner).
However I don’t think the C shitposts add much value.
How much effort would it take to:
I believe this would be a project of comparable scale to Wayland, and that’s taken 16 years and still isn’t a complete replacement for many folks, despite several vendor attempts to serve it raw.
That’s not a criticism of Wayland: my point is that software takes time (and thus money; either sponsorship or donated time with associated opportunity costs). Lots of it. From 23 years ago:
https://www.joelonsoftware.com/2001/07/21/good-software-takes-ten-years-get-used-to-it/
I’m a big Rust advocate, but… we already solved this particular problem many years ago when we realized that the X server has a large attack surface, by not running it as root. Most native-Xorg systems should be running without root these days, and every Wayland system runs XWayland as non-root. The X protocol doesn’t have any privilege separation, so if you’re trying to fully containerize X11 applications you already need to run a dedicated XWayland instance in the container to lock down its privileges anyway… which makes this whole thing basically irrelevant.
The X.Org server is practically unmaintained outside of XWayland, so I don’t think we should be spending any time worrying about real or potential CVEs which don’t affect systems that already implement the X11 protocol securely.
It’s been a long time since I’ve looked inside X.org but I have a very strong feeling that, especially due to its age, it’s going to have a lot of patterns inside it that are both correct and not-borrow-checker-compatible. One of my personal rules for “porting” is that there’s two modes:
The only way, in my experience, that you can do a successful port is by very rigidly keeping everything as architecturally similar as possible and with feature parity. Second system effect is too strong and there’s a huge risk that you end up with a new piece of software that implements half of the old system, plus some new stuff, but doesn’t actually fully replace the original.
You didn’t say Rust, but I’m sure people are thinking it. My speculation (informed with having worked with multi-decade-old code many times) is that there would have to be either a massive re-architecture effort or vast swaths of
unsafe
code to move Xorg to Rust. And I’m also going to speculate that Xorg doesn’t have a formal test suite; even if it does, Hyrum’s Law applies here: every observable behaviour of Xorg is now “part of the API” that userland software somewhere is going to be relying on. If the rewritten version isn’t a drop-in replacement, there’s going to be enough users complaining about things being broken that there’s going to be a serious traction issue. Some of the code that talks to Xorg is going to be 30+ years old and while it won’t have a huge userbase… there’ll be enough for it to be a huge problem.For my taste, you’ve just expressed precisely why it’s both obtuse and irrational (ie borderline trolling) to post little quips about x.org being implemented in the “wrong” programming language.
I think we all agree what ‘correct’ means. If the code is correct, how come it has a vulnerability? I’d posit that it is in fact not correct. It might be possible to make it correct. We have tools to help with that.
The borrow checker is just one tool people can use to improve software quality. Another is to e.g. use fuzz testing or use program extraction from a proof assistant. Why not just qualify the label ‘correct’ then? There is a difference between suspecting a program correct, and having proven it.
Allowing the term is equivalent to accepting a proof sketch as a proof. We should have a higher standards by now.
Very fair. The point I was going for is that code can be correct while also behaving in a way that’s incompatible with e.g. Rust’s borrow checker. The borrow checker with the “many read XOR one mutable” rule is one approach for guaranteeing correctness but isn’t strictly the only way.
Yup that’s exactly what I meant by “ensure the safety features of that language worked in the face of any weird shit the old protocols and implementations might be doing”.
I should have made the implication clear: I believe there’d be a lot of such weirdness in that old and complex a codebase.
Yup :) Very deliberately so :) There are enough flamewars over that topic that I thought I’d keep it technology-neutral.
On the topic of “how much effort would it take to X”, I think a good joke here is that it might be possible that more effort has been put into writing apologetics for memory unsafety than solutions for memory unsafety!
Seriously though, how about we all just come clean and admit that the vast majority of open source work is done on an unpaid volunteer basis which by necessity of the global economy under which we all live, will only ever have an incredibly small amount of energy put into it at all, and so no matter what engineering task you’re talking about if it is non trivial it will probably take at least half a decade longer to complete than if it was developed professionally, with adequate resources and all that.
The Morello system under my desk isn’t running X.org, it’s running the KDE Wayland server, but the quantity of C code is similar. The entire graphics stack, from the in-kernel drivers, through the userspace ones and the display server, to userspace GUI apps, are all memory safe. A bug of this nature would crash the display server but would not trigger privilege elevation.
The only reason that this isn’t mainstream is that CPU vendors are not seeing demand. Arm would move their post-Morello architecture to final if one major partner demanded it. With zero rewrites, we could eliminate these vulnerabilities in one upgrade cycle if the industry chose to, with existing proven technology.
Rewriting in a safe language is a much larger project than Wayland. Some Wayland compositors are written in memory-safe languages but Wayland reuses a lot of code from X.org. A lot of the low-level bits of the graphics stack are shared.
Oh! That’s news to me. I thought one of the driving factors behind Wayland was that the X.org code was a PITA to maintain.
I suspect that was largely a retroactive justifications, but the places it’s true were typically higher up the graphics stack. All of the KMS / DRI code, for example, is shared.
Was this in response to a different post, perhaps one that was merged and deleted? It now appears as a reply to the CVE story itself, and I’m scratching my head at the term “shitpost” being applied here.
Not exactly a reply, more of a commentary on that kind of reply. Have edited to clarify, thanks for the question.
Sorry for that. For the record I meant “obligatory” as in “yes, Xe has posted about this one too”, but I see that it could be read in a serious manner. I agree with you that it would not be desirable to rewrite the X server in a safe language; I just thought it was funny in the moment.
Obligatory https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2024-9632/
Continuing to try to get the webcam working on the Surface Pro 5. There’s some weird stuff going on in libcamera around signing shared libraries that doesn’t play well with Nix’s build process. I think I found a fix for it, and now I’m waiting for my desktop to rebuild two versions of webkitgtk so I can try it out. I really, really hope I never have to compile webkitgtk again in my life.
On Halloween night I’m going to a local electronic music festival to see Dean Blunt do a DJ set, which will probably be very weird, and then Mica Levi, Yaeji and some others. And then a few days of vacation with some friends. This time I took an extra day off work after I come back from the trip so I can have a good rest.
Waiting for my partner’s new (refurbished) Surface tablet to compile the `linux-surface` kernel, apparently… I thought this was going to take much less time.

Edit: it was taking almost 2 hours, so I decided to kill it and build the kernel on my desktop PC. I should be able to make the tablet use the results by running `nixos-install` with the `--option substituters <URL>` flag and serving the store with `nix-serve` on the desktop.

Well, now I’m here watching my desktop recompile webkitgtk, because it depends on wireplumber, and wireplumber depends on libcamera, and libcamera does not work with the default compilation flags on the Surface and causes wireplumber to segfault, and wireplumber crashing brings down sound altogether… and I’m just wondering if it wouldn’t be better to leave Windows on this damn thing.
I found that improving typing speed improved my code, not because I got the code down faster, but because I was willing to type more. When I started programming, aged 7, I typed with two fingers and had to search the keyboard to find keys. I used single-letter variable names everywhere because typing longer ones was hard. I rarely wrote comments, because writing full English sentences was hard. By the time I was an adult and I typed far faster than I ever wrote with a pen, writing a meaningful variable name cost nothing but the value when I came to read the code was immense. Cost-benefit tradeoffs are very different when the cost drops to nearly zero.
This is the first version of this argument that has sounded convincing to me. “The importance of not typing slowly”.
Absolutely, and I’m noticing this now that I’m basically relearning how to type because I assembled a split keyboard and configured it with the colemak layout. I can type about 35wpm in monkeytype with only lowercase letters and spaces at the moment, but I haven’t committed all the symbols to memory yet so typing code is very slow.
I tried doing a full day of work with the new keyboard yesterday but it didn’t go well. I found that having to actively think about where all the symbols are and making more errors while typing makes trying out different approaches much less desirable, and it constrains my ability to reason about code simply because I’m holding one more thought process in my head that wasn’t there before.
Keep with it. It’s frustrating, but worth it in the end. This is your chance to shed any bad habits you picked up the first time you learned to type.
Definitely! I’m practicing every day and I started using the new keyboard almost exclusively when I’m off work. I think the split keyboard did a lot for my motivation. I really want to switch to it because it feels very comfortable to use (especially so with colemak) and, well, it’s a shiny new toy :) And since it uses QMK firmware I can even identify uncomfortable key combos or symbol positions and iterate to improve them, which triggers the tinkerer part of my brain.
There are adults who still do this. Anyone who wants their code to be accepted and adopted by other people should absolutely learn to touch type. It’s never too late.
I’d argue that my typing speed mostly helps me in communication and research rather than coding, since I spend a lot more time thinking than I spend typing, in most of my projects. As @eterps pointed out in a different thread, that’s also an essential part of developing software, but I want to make a slightly different point.
However, touch typing is still one of the most useful computer skills I ever acquired. It certainly allows me to communicate effectively in messaging apps, because the cost of typing is very low for me. After all, IRC and flamewars in FOSS channels are where I practiced touch typing most. ;)
It also helps me use different devices. My current laptop has a keyboard with a British layout, which includes some design decisions beyond my comprehension, such as the pipe character (`|`) only being accessible via AltGr-Shift-L and a key for `#`/`~` above the Enter key. But I don’t actually care what is written on the keys, because I don’t look at them anyway: my layout is set to US English, with `|` where it, arguably, should be, and `#`, `~`, etc. where I expect them to be.

I agree with you about the importance of written communication in talking about software design. And I, too, have a non-English (Spanish in my case) laptop and changed the layout to US English, which took a little getting used to but was fine. …And I also stand by my assertion that there are adults who I’ve seen with my own eyes use single-letter variable names everywhere because they don’t know how to touch type.
I installed Xubuntu on my partner’s laptop a while ago and they’ve been very happy with the switch, save for when the time came to update.
They’ve been using this laptop from 2013 with 4GB of RAM and a Windows 10 installation on an HDD, which was atrociously slow. It took several minutes to reach the user select screen, and after logging in it was 5 more minutes for Windows to stop spinning the disk before the system was usable.
After trying to alleviate the pain a couple of times I figured it was time to move the system to an SSD and, after figuring out my partner’s needs and with their consent, I also installed Xubuntu on the laptop. I haven’t heard any complaints other than the occasional snag with sleep/hibernation, and since the laptop now rebooted in a couple of seconds, those weren’t a big issue.
The big pain point in my experience is still updates: the latest dist-upgrade from 22.04 to 24.04 failed partway through for some reason I haven’t yet determined and left them with an unbootable system. I had to come in with an Ubuntu USB and chroot into the installation to make it finish the upgrade and fix some packages that had been uninstalled along the way. Now the laptop works fine, but they’re experiencing random freezes (probably due to the nouveau driver or some faulty hardware). I could probably fix it, but we’re kind of using it as an excuse to get a newer and less bulky laptop, so I guess that worked out in our favour :)
The next laptop will also have Linux on it, but I’m gonna install an immutable distro this time.
I also recently had an issue with my NixOS laptop after an upgrade where it would freeze on high CPU loads and didn’t come back from sleep (my fault for setting the kernel version to latest, lol), and I urgently needed it to work for an event I was attending later that evening. I really appreciated that I could boot into the previous generation that still worked and resolve the issue later.
Maybe I’m just high on the Nix pill, but I think immutable distros are a huge improvement in usability for everyone using Linux. Things still fail from time to time, and being able to temporarily switch back to a system that still works until the issue is fixed is one of the missing pieces to bringing Linux to the masses, at least in my opinion.
That kind of “roll back a bad upgrade” capability can alternatively be handled at the filesystem level with, for example, ZFS snapshots.
True, I’d just want it to be automated by default on every update.
I think this is reflective of a real culture difference. For someone who is used to systems languages, a dense bit vector is a standard tool, as unremarkable as using a hashmap. One of the reasons that optimization is so frustrating in javascript is that many other standard tools, like an array of structs, are just not possible to represent. It feels like working with one hand tied behind your back. When you’re used to being able to write simple code that produces efficient representations, it’s hard to explain to someone who isn’t used to that just how much harder their life is.
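To make the contrast concrete, here is a minimal sketch of the “array of structs” problem as I understand it; the names and sizes are made up for illustration:

```typescript
// Idiomatic JS: an "array of structs" is really an array of pointers to
// separately heap-allocated objects, which the engine may or may not
// keep contiguous in memory.
type Particle = { x: number; v: number };
const particles: Particle[] = Array.from({ length: 100_000 }, () => ({ x: 0, v: 0 }));

// The manual workaround: parallel typed arrays ("struct of arrays"),
// one dense buffer per field, at the cost of losing the object syntax.
const xs = new Float64Array(100_000);
const vs = new Float64Array(100_000);

function step(dt: number): void {
  for (let i = 0; i < xs.length; i++) {
    xs[i] += vs[i] * dt; // predictable, cache-friendly sequential loads
  }
}
```

In a systems language the first version already has the second version’s memory layout; in JS you have to dismantle your data model to get it.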
Another reason is that the JIT is very hard to reason about in a composable way. Changes in one part of your code can bubble up through heuristics and cause another, seemingly unrelated, part of your code to become slower. Some abstraction might be cheap enough to use, except on Tuesdays when it blows your budget. Or your optimization improves performance in v8 today, but tomorrow the heuristics change. These kinds of problems definitely happen in C and Rust too, but much less often.
Even very simple things can be hairy. E.g. I often want to use a pair of integers as the key to a map. In javascript I have to convert them to a string first, which means that every insert causes a separate heap allocation and every lookup has to do unpredictable loads against several strings. Something that I would expect to be fast becomes slow for annoying reasons that are nearly impossible to work around.
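A sketch of both versions; the packing trick and its 26-bit range limit are my own illustration, not anything from the parent comment:

```typescript
// What JS pushes you toward: a stringly-keyed map. Every insert and
// lookup builds a fresh key string (a heap allocation), and hashing it
// walks the string's characters.
const counts = new Map<string, number>();
function bumpStr(a: number, b: number): void {
  const key = `${a},${b}`; // allocates on every call
  counts.set(key, (counts.get(key) ?? 0) + 1);
}

// A common workaround, assuming 0 <= a, b < 2^26: pack both integers
// into one number key. a * 2^26 + b stays below 2^53, so it's exact
// within JS's safe-integer range.
const countsPacked = new Map<number, number>();
function bumpPacked(a: number, b: number): void {
  const key = a * 0x4000000 + b; // a * 2^26 + b, no allocation
  countsPacked.set(key, (countsPacked.get(key) ?? 0) + 1);
}
```

Whether the packed version actually avoids boxing the key is still up to the engine’s heuristics, which is rather the point being made above.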
It’s definitely possible to write fast code in javascript by just turing-tarpitting everything into primitive types, but the bigger the project gets, the larger the programmer overhead becomes.
I definitely recognize the value of a familiar language though, and if you can get within a few X of the performance without too much tarpit then maybe that’s worth it. But I wouldn’t volunteer myself for it. Too frustrating.
That line also struck me in particular. Using a Uint8Array as a bitvector is definitely something I’ve thought of, if not even implemented, a few times in JS. As you said, JS seems to actively work against you when you’re trying to optimize your code in ways that would work in other languages, so it disincentivizes you from trying to reason about performance.
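For what it’s worth, here is roughly what such a bitvector looks like; a minimal sketch of the idea the parent describes, nothing more:

```typescript
// A dense bit vector over a Uint8Array: one contiguous buffer,
// one bit per element.
class BitVec {
  private bits: Uint8Array;

  constructor(n: number) {
    this.bits = new Uint8Array((n + 7) >> 3); // ceil(n / 8) bytes
  }

  set(i: number): void {
    this.bits[i >> 3] |= 1 << (i & 7); // byte i/8, bit i%8
  }

  clear(i: number): void {
    this.bits[i >> 3] &= ~(1 << (i & 7));
  }

  get(i: number): boolean {
    return (this.bits[i >> 3] & (1 << (i & 7))) !== 0;
  }
}

const seen = new BitVec(1_000_000);
seen.set(42);
console.log(seen.get(42), seen.get(43)); // true false
```

In a systems language this is the unremarkable default; in JS it reads as an optimization trick, which says something about the culture gap being discussed.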
There’s a lot to be said about consistency in object representation and optimizations in “dynamic” languages. OCaml is renowned as a fast language not because it does crazy whole-program optimizations (the compiler is very fast precisely because it does no such thing), or because its memory representation of objects is particularly efficient (it actually allocates a lot; even `int64`s are heap-allocated!), but because it is straightforward and consistent, which lets you reason about performance very easily when you need to.

The thing about this is that it doesn’t even feel like “optimization” so much as non-pessimization of how one would store such data. A single block of memory seems like the straightforward choice, with the more complicated way being a bunch of separate heap-allocated objects that are individually managed. That’s what ends up feeling nice to me about the languages JS is being contrasted with here: I just write the basic implementation that is a word-for-word reflection of the design, and I get a baseline performance right off the bat that lets me go for a long time (usually indefinitely) before having to consider ‘optimizing’. The productivity boost offered in exchange for giving that up would have to be really, really high to be worth the tradeoff, and personally I haven’t felt like JS delivers on that.
That’s true, but there’s a big tradeoff in flexibility there. If you want to store these objects in contiguous memory, your language now needs to know how big they are and may be in the future, which effectively means you need a type system of some sort. If you want to move parts of these objects out of them, you need to explicitly copy those parts and free the original object, so either you or your language needs to know where these objects are and who they belong to. If you want to stack-allocate them, your functions also need to know how big they are, and if you want the functions to be generic you’ll have to go through a monomorphization step, which means your compiler will be slower.
I’m not saying these properties are not valuable, but the complexity needs to lie somewhere. JS trades a more complicated and “pessimized” memory model for the ease of not having to keep track of any of this, while languages that give you this level of control are, on the surface, much harder to use.
Makes sense, but I’ve found that the cost in each of those scenarios only comes up when the scenario concretely happens, and it’s often (usually, even) a cost that’s proportional to the requirement and can be paid in a way that’s pertinent to the context; at this point I tend not to try to address all of them upfront before they actually happen. The performance requirement, however, tends to either be always present or, even when it isn’t, be a mark of quality or discovery regardless (e.g. if I can just have more entities in my game’s initial prototype, I may discover kinds of gameplay I wouldn’t have otherwise).
I think I don’t consider those tradeoffs you’re listing very negatively because the type system usually comes with the language I use and I consider this to be a baseline for various sorts of productivity and implementation benefits at this point (which seems to also be demonstrated in the case of JS by how much TypeScript is a thing – it’s just that TypeScript makes you write types but doesn’t give you the performance and some other benefits that would’ve come with them elsewhere). Similar with stack allocation and value types.
I’m not seeing how the other side of that tradeoff is ‘flexibility’ specifically though, because my experience is that the type system lets me refactor to other approaches with help from the language’s tooling, and so far the discussion has been about how the JS approach is actually inflexible regarding that.
Re: ‘the complexity needs to lie somewhere’ – the point is that in the base case (eg. iterating an array of structs of numbers) there is actually less essential complexity and so it may not have to lie anywhere. It’s just that JS inserts the complexity even when it wasn’t necessary. It seems more prudent to only introduce the complexity when the circumstances that require it do arise, and to orient the design to those requirements contextually.
All that said, I’m mostly just sharing my current personal leaning on things that leads to my choice of tools. I just feel like I wasn’t actually that much more productive in JS-like environments, maybe even less so. On the whole I’m a supporter of trying various languages and tools, learning from them and using whatever one wants to.
I agree with everything you said. I’m just not sure that we should be defaulting to writing everything in Rust (as much as I like it) or some other similar language. I think I wish for some sort of middle ground.
Agreed on that. I definitely don’t think that the current languages – on any of the points in this spectrum – are clear winners yet, or that it’s a solved space. I avoided mentioning any particular languages on the ‘more control’ side for that reason.
Aside: lately I’ve been playing with C, of all things, along with a custom code generator that produces container types like `TArray` for each T, and also ‘trait’ functions like `TDealloc` and `TClone` etc. that recurse on fields. Far from something I’d seriously recommend, just a fun experiment for side projects, but it’s been a surprising combination of ergonomics and clarity. It turns out `__attribute__((cleanup))` in GCC and clang allows implementing `defer` as well. It would be cool to get some basic attempt at flow-sensitive analysis (like borrow checking) working in this if possible, but for now I’ve been carrying on with heavy use of lsan (running every 5 seconds throughout the app’s lifetime) and asan during development.

Just chilling, going out, doing things in the real world… If I have some time to spare I’ll practice typing colemak-dh on the split keyboard (I got to letter P on keybr!) and maybe fix up the firmware for my Watchy.
Watchy seems pretty cool! Any recommendations on articles showing off what can be done with it, preferably recent? How’s your experience with it?
Not much that I remember! I’m in the middle of rewriting the default firmware with arduino-cli but I’m mostly planning to use it as a normal watch. I’ve seen other alternative firmware with more features but afaik it’s mostly WIP. I think it would be cool to use it to get notifications from a phone with GadgetBridge, and apparently somebody’s implemented something similar for another ESP32 device, but it hasn’t really happened yet.
I was also in the middle of reimplementing the firmware and drivers in Rust, but LLVM and rustc both need significant patches to support the Xtensa arch (which is what runs on the ESP32) and trying to build that on NixOS has been challenging… so for now I shelved that project until the Xtensa patches have been merged upstream (hopefully sometime next year).
I finally got the split keyboard all set up. Got the replacement microcontroller in the mail this morning and then spent the lunch break soldering it to the right half of the keyboard (with socketed pin headers this time). I was very glad to discover that the PCB came out of this ordeal completely unscathed! So I guess I’ll spend this Saturday relearning how to type and customizing my layout :)
Have a picture. (Keycap location for the right half is a WIP and does not represent the final look of the keyboard.)
Super nice keyboard and overall aesthetic. I have a split 40% as part of my quiver of keyboards.
I’ve been spoiled by the clicky d-pad on the xbox controller so now I am very tempted to get the clicky face buttons mod…
At work: continuing my investigation into why it’s taking so long to upload large files into our system. I almost hope I get to do some gnarly performance work and refactoring this time, because the last issue turned out to be just long-held database locks, and the solution was to wait for my colleague to optimize another piece of code.
In my spare time I’m making some Nushell wrappers for all the various things I interact with at work (the usual stuff: Jira, Google Cloud, Kubernetes, GitLab) because… I just prefer the command line, and very often there’s an annoying amount of slow web pages, bad autocomplete, and copying and pasting I have to go through just to get to the logs of the pods I just deployed, or to go through a ticket I’m working on, and so on. So now, little by little, I’m trying to automate all that annoying stuff away by piping some APIs together.
At home: waiting for a replacement microcontroller for my split Sofle keyboard to arrive. I killed the original one while desoldering it because I realized I was reading the wrong assembly instructions for the kit that I had and the microcontroller was supposed to be assembled facing the other way. I got socketed pin headers this time. I’m planning to put it back together and test whether the PCB still works fine, and if not I get to wait another week for a (discounted) replacement kit to arrive.
I feel sorry for all the users of those tools who have to wrap them with command-line scripts. But I probably enjoy the scripting and the command line as much as you do. I wish you lots of fun automating this ;-) I also feel more productive in the terminal, so I try to reach most of these tools through the terminal if possible :-) or automate things so they stay out of my sight.
Thanks! I think we definitely think alike in this regard. I also feel much more productive in the terminal, and I love nushell in particular because of how easy it is to pipe stuff that speaks JSON together.
Today I discovered that Jira’s API is much more easily accessible than I thought (you just need to generate an API token and use HTTP basic auth), and I made a simple command to view my assigned open issues in a table with some info and links. It’s not much, but I enjoyed it :)
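For anyone curious, here is a rough sketch of that kind of call in TypeScript rather than Nushell. The endpoint and JQL follow Jira Cloud’s documented REST API, but `JIRA_HOST` and the environment variables are placeholders, not the commenter’s actual script:

```typescript
// List my open Jira issues via the REST search endpoint,
// authenticating with HTTP basic auth (account email + API token).
const JIRA_HOST = "https://example.atlassian.net"; // placeholder host
const auth = Buffer.from(
  `${process.env.JIRA_EMAIL}:${process.env.JIRA_TOKEN}`
).toString("base64");

async function myOpenIssues(): Promise<void> {
  const jql = encodeURIComponent(
    "assignee = currentUser() AND resolution = Unresolved"
  );
  const res = await fetch(
    `${JIRA_HOST}/rest/api/2/search?jql=${jql}&fields=summary,status`,
    { headers: { Authorization: `Basic ${auth}` } }
  );
  const body = await res.json();
  for (const issue of body.issues) {
    console.log(issue.key, issue.fields.status.name, issue.fields.summary);
  }
}

myOpenIssues();
```

The nice part, as noted above, is that the response is plain JSON, so it pipes straight into tables in Nushell or into further scripting anywhere else.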
Speaking of, is this your first DIY split? Do you have experience with homebrew electronics in general? Asking bc I’d like to get myself a fancy split kbd but I’m a bit intimidated by the assembly and parts picking.
Yes it’s my first DIY split and no I don’t have much experience with homebrew electronics, I only had a bit of experience with soldering before this. I was planning to make a blog post at the end of the process but this is the gist of it:
Assembly is really not that hard as long as you do some preparation first. The hardest part for me was soldering (and then desoldering ;_;) the microcontroller, and if you get socketed pin headers you’ll make your life a lot easier. Good luck!
Thanks a lot for all the tips!
while we’re at it: I notice most (all?) the split keyboards have a cable connecting the two halves. Are you aware of a variant with a wireless link? I imagine bluetooth would add some latency but I’m not a supersonic typist anyway and I would appreciate having fewer cables lying around the desk.
No idea! I personally prefer cables so I haven’t looked into it.
I suppose that since most split keyboards have one microcontroller per half and they can operate independently, you could theoretically put BT modules on both halves and connect them to the PC as two separate devices, but then if you activate a layer on one half it will only affect that half. But again, I haven’t looked into it so maybe there’s other options.
Finishing the second half of my ergonomic mechanical keyboard, with any luck! I got a Sofle keyboard kit and managed to solder and assemble half of it just fine; I tried it and it all works.
I assembled the second half as well, but managed to mess something up while soldering (and also forgot to bridge the OLED jumper pads just under the microcontroller, which is a massive pain), and now I get to figure out what’s wrong with it with very little electronics knowledge, yay!
I don’t see anything obviously wrong with how I soldered it so I’m probably gonna go ask in the kit vendor’s discord and present my findings. Hopefully I can get that done before the end of the weekend.
Ok I found the issue: the microcontroller in the right half of the keyboard needs to be mounted upside down in V2 of this design, which is what I have… I ordered a desoldering pump.
I guess you might’ve read this by now, but it’s upside down because the two sides use the exact same PCB design and you just flip one. That’s so that when ordering PCBs for personal use you don’t need `2 * $min-order-size` boards; just one batch is enough!

I have a Sofle too, though I bought it second-hand, so pre-built. I found adjusting to columnar keys harder than the split. Hope you’ll like it!
EDIT: and in case you haven’t read it yet, don’t mess with the TRRS cable while powered, to avoid shorts :)
Yup, I figured! The problem is the vendor only had a guide for the V1, and I didn’t bother to look up the guide for V2… When I was assembling the left half I thought the jumper pads were somehow in charge of telling the PCB which side the microcontroller was on, but as it turns out, it does that but only for the OLED screen pins!
Yeah, at first I diagnosed the problem to be the TRRS jack because when I tested for continuity two of the contacts were bridged, but in retrospect the one that should not have been ground must’ve been connected to the pin which, when you flip the microcontroller, becomes the second GND pin. I also read up on the jack’s contacts and all the advice about not hotplugging the TRRS jack while the keyboard is on suddenly made a lot of sense :)
Anyway, progress report: after a lot of pain I managed to pry away the microcontroller, but I must have fried it in the process because it doesn’t respond anymore when I connect it to my PC. On the other hand, I learned how to properly use flux and the desoldering wick, and I gained a new appreciation for socketed pin headers, which I ordered along with a new microcontroller. I really, really hope that the PCB still works fine.