1. 10

What are you doing this week? Feel free to share!

Keep in mind it’s OK to do nothing at all, too.

    1. 5

      Planning the next release of https://bupstash.io/ / https://github.com/andrewchambers/bupstash . I am way behind schedule but want to do a closed beta of managed repositories.

      Also thinking a bit about a new programming language I want to work on for fun - a take on Standard ML, drawing heavy inspiration from the https://github.com/janet-lang/janet runtime.

      I also have some ideas kicking around for my peer to peer package tree - https://github.com/andrewchambers/p2pkgs .

      So many things to do - clearly I need to avoid spending time on the less important things - I just have trouble reining in what my mind is currently obsessing over.

      1. 2

        Also thinking a bit about a new programming language I want to work on for fun - a take on Standard ML, drawing heavy inspiration from the https://github.com/janet-lang/janet runtime.

        Do you mean you want to reimplement Standard ML but on top of a Janet-like runtime? Or is there something specific to Janet which can influence the SML language itself?

        I’m myself contemplating a compile-to-LuaJIT ML-like language: the efficiency of LuaJIT and its convenient FFI + the ergonomics of ML (though I’d want to experiment with adding modular implicits to the language).

        I also have some ideas kicking around for my peer to peer package tree - https://github.com/andrewchambers/p2pkgs .

        Is this related to Hermes (sorry for lots of questions but you have so many interesting projects)? Are you still using/developing it?

        Some time ago I was working on designing and implementing esy which is a package manager + meta build system (invoking package specific build system in hermetic environments) for compiled languages (OCaml/Reason/C/C++/…). It looks like Nix but has integrated SAT solver for solving deps, we rely on package.json metadata and npm as a registry of package sources (though we can install from git repos as well).

        Personally, I think there’s a real opportunity to make a “lightweight” version of Nix/Guix which could be used widely, and Hermes seems to be aimed at this exact spot.

        1. 1

          Do you mean you want to reimplement Standard ML but on top of a Janet-like runtime? Or is there something specific to Janet which can influence the SML language itself?

          Mostly the way Janet has great C FFI, plus a few things like compile-time evaluation and its compilation model. I also enjoy how Janet can be distributed as a single amalgamated .c file, like sqlite3. My main criticism of Janet is perhaps the lack of static types - and Standard ML might be one of the simplest ‘real’ languages that incorporates a good type system, so I thought it might be a good place to start for ideas.

          I’m myself contemplating a compile-to-LuaJIT ML-like language: the efficiency of LuaJIT and its convenient FFI + the ergonomics of ML (though I’d want to experiment with adding modular implicits to the language).

          Yeah, the way it complements C is something I would love to capture. I am not familiar with modular implicits at all - but it sounds interesting!

          Is this related to Hermes (sorry for lots of questions but you have so many interesting projects)? Are you still using/developing it?

          Yes and no - p2pkgs is an experiment to answer the question: ‘what if we combined ideas from Nix with something like Homebrew in a simple way?’ I think the answer is something quite compelling, but it still needs a lot of tweaking to get right. p2pkgs uses a more traditional package model - so far less patching is needed to build packages - and it is conceptually easier to understand than Nix/Hermes, while providing a large portion (but not all) of the benefits. You could consider p2pkgs an exploratory search for ways to improve and simplify Hermes. The optional p2p part was kind of an accident that seems to work so well in practice that I feel it is also important in its own way.

          1. 1

            while providing a large portion (but not all) of the benefits

            Could you possibly elaborate on which benefits are carried over, and which are not? I’m really interested in your explorations in this area of what I see as “attempts to simplify Nix”, but in this particular case, to the extent I managed to understand the repository, it’s currently very unclear to me what it really brings over just using redo to build whatever packages. Most notably, the core benefits I see in Nix (vs. other/older package managers) seem to be “capturing complete input state” of a build (a.k.a. pure/deterministic build environment), “perfectly clean uninstalls”, and “deterministic dependencies” including the possibility of packages depending on different versions of a helper package. Does p2pkgs have any/all of those? It’s ok if not, I understand that this is just a personal exploration! Just would like to try and understand what’s going on there :)

            1. 2

              seem to be “capturing complete input state” of a build (a.k.a. pure/deterministic build environment)

              Yes it does: builds are performed in an isolated sandbox and use nothing from the host system.

              “perfectly clean uninstalls”, and “deterministic dependencies” including the possibility of packages depending on different versions of a helper package.

              Packages are currently used via something I call a venv; this is more like nix-shell, so it has clean uninstalls. Each venv can use different versions of packages, but within a single venv you cannot - this is one of the downsides.

              it’s currently very unclear to me what it really brings over just using redo to build whatever packages?

              It uses redo + isolated build sandboxes + hashing of the dependency tree in order to provide transparent build caching. This is not so far removed from NixOS, which is why I feel NixOS might be over-engineered.
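
              To sketch the caching idea (a simplified illustration - the file names, layout, and hashing scheme here are made up for this comment, not the actual p2pkgs scripts):

              ```shell
              #!/bin/sh
              # Simplified sketch (not the real p2pkgs scripts): hash a package's build
              # script together with the hashes of everything it depends on, so a change
              # anywhere in the dependency tree changes the cache key.
              set -eu

              pkg_hash() {
                pkg="$1"
                {
                  sha256sum "pkg/$pkg/build.sh"                 # the package's own build script
                  for dep in $(cat "pkg/$pkg/deps" 2>/dev/null || true); do
                    pkg_hash "$dep"                             # recurse through the dependency tree
                  done
                } | sha256sum | cut -d' ' -f1
              }

              # With such a key, a cache lookup can stand in for a build:
              #   key=$(pkg_hash vim)
              #   fetch "$CACHE/$key.pkg.tar.gz" || build_in_sandbox vim
              ```

              The real scripts of course do more, but hashing all of the build inputs transitively is the core of the cache key.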

              One thing p2pkgs does not have is atomic upgrades/rollback unless it is paired with something like docker.

              All that being said, I think I oversimplified it to the point where the UX is not as good as it should be, so I hope to shift it back a bit to look more like the Nix CLI - I think that will make things clearer.

              1. 1

                Thanks! I was confused about how redo works (I had some wrong assumptions); now I am starting to understand that the main entry point (or, core logic) seems to be in the pkg/default.pkg.tar.gz.do file. I’ll try to look more into it, though at first glance it doesn’t seem super trivial to me yet.

                As to venv vs. NixOS, does “a linux user container” mean some extra abstraction layer?

                Also, I don’t really understand “container with top level directories substituted for those in the requested packages” too well: is it some kind of overlayed or merged filesystem, where binaries running in the container see some extra stuff over the “host’s” filesystem? If yes, where can I read more about the exact semantics? If not, then what does it mean?

                Back to “input completeness”: could you help me understand how/where can I exactly see/verify that e.g. a specific Linux kernel version was used to build a particular output? similarly, that a specific set of env variables was used? that a specific hash of a source tarball was used? (Or, can clearly see that changing one of those will result in a different output?) Please note I don’t mean this as an attack; rather still trying to understand better what am I looking at, and also hoping that the “simplicity” goal would maybe mean it’s indeed simple enough that I could inspect and audit those properties myself.

                1. 1

                  As to venv vs. NixOS, does “a linux user container” mean some extra abstraction layer?

                  Like NixOS, it uses containers to build packages; they are not needed to use packages, but they are helpful.

                  Also, I don’t really understand “container with top level directories substituted for those in the requested packages” too well: is it some kind of overlayed or merged filesystem, where binaries running in the container see some extra stuff over the “host’s” filesystem? If yes, where can I read more about the exact semantics? If not, then what does it mean?

                  The build inputs are basically put into a chroot with the host system’s /dev added via a bind mount - this is quite similar to NixOS - you can see it in default.pkg.tar.gz.do

                  Back to “input completeness”: could you help me understand how/where can I exactly see/verify that e.g. a specific Linux kernel version was used to build a particular output?

                  Nixpkgs does not control the build kernel - not sure why you seem to think it does. Regardless, you can run redo pkg/.pkghash to compute the identity of a given package - which is the hash of all the build inputs, including build scripts - again, much like Nix. I suppose to control the build kernel we could use qemu instead of bwrap to perform the build. To see the inputs for a build you can also inspect the .bclosure file, which is the build closure.

                  similarly, that a specific set of env variables was used? that a specific hash of a source tarball was used?

                  Environment variables are cleared - this can be seen in the invocation of bwrap, which creates the container - much like Nixpkgs. I get the impression you might be misunderstanding the trust model of NixOS: it lets you run package builds yourself, but it still relies on signatures/HTTPS/trust for the binary package cache. You can’t go back from a given store path and work out the inputs - you can only go forward from the inputs to verify a store path.
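
                  To make the shape of that concrete (illustrative only - the real flag set is in the .do scripts; this just constructs and prints a plausible command line instead of running anything):

                  ```shell
                  #!/bin/sh
                  # Sketch only: the general shape of a bwrap sandbox for a build -
                  # cleared environment, no network, and only the declared inputs
                  # mounted. The flags are real bwrap options, but the real invocation
                  # lives in the .do scripts; this just prints the command line.
                  build_cmd() {
                    inputs="$1"
                    printf '%s ' \
                      bwrap \
                      --clearenv \
                      --unshare-net \
                      --bind "$inputs" /build \
                      --dev /dev \
                      --proc /proc \
                      -- /build/build.sh
                    printf '\n'
                  }

                  build_cmd /tmp/build-inputs
                  ```

                  The key properties are visible in the printed command: --clearenv wipes the environment, --unshare-net cuts the network, and only the declared inputs get mounted into the sandbox.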

                  also hoping that the “simplicity” goal would maybe mean it’s indeed simple enough that I could inspect and audit those properties myself.

                  The entire implementation is probably less than 700 lines of shell; I think you should be able to read them all - especially default.pkghash.do and default.pkg.tar.gz.do.

                  1. 1

                    Thank you for your patience and bearing with me! I certainly might misunderstand some things about NixOS/Nixpkgs - I guess it’s part of the allure of p2pkgs that its simplicity may make those things easier to understand :) Though part of my problem is also that I’m having trouble expressing some of the things I’m thinking about in precise terms, so I’d be super grateful if you’d have some more patience with me as I keep searching for a more precise way to express them! And sorry if they’re still confused or not precise enough…

                    Like NixOS, it uses containers to build packages; they are not needed to use packages, but they are helpful.

                    Hm; so does it mean I can run a p2pkgs build output outside venv? In Nixpkgs, AFAIU, this typically requires patchelf to have been run & things like make-wrapper (or what’s the name, I seem to never be able to remember it correctly). (How) does p2pkgs solve/approach this? Or did I misunderstand your answer here?

                    The build inputs are basically put into a chroot with the host system /dev/ added with a bind mount - this is quite similar to nixos - You can see it in default.pkg.tar.do

                    What I was asking about here was the “Running packages in venv” section - that’s where the “container with top level directories substituted (…)” sentence is used in the p2pkgs readme. In other words: I’m trying to understand how, at runtime, any “runtime filesystem dependencies” (shared libraries, etc.; IIRC that’d be buildInputs in nixpkgs parlance) are merged with the “host filesystem”. I tried reading bwrap’s docs in their repo, but either I couldn’t find the ultimate reference manual, or they’re just heavily underdocumented as to precise details, or they operate on some implicit assumptions (vs. chroot? or what?) that I don’t have.

                    In other words: IIUC (do I?), p2pkgs puts various FHS files in the final .pkg.tar.gz, which do then get substituted in the chroot when run with venv (that’s the way buildInputs would be made available to the final build output binary, no?). For some directory $X present in .pkg.tar.gz, what would happen if I wanted to use the output binary (say, vim), run via venv, to read and write a file in $X on host machine? How does the mechanism work that would decide whether a read(3) sees bytes from $X/foo/bar packed in .pkg.tar.gz vs. $X/foo/baz on host machine’s filesystem? Or, where would bytes passed to write(3) land? I didn’t manage to find answer to such question in bwrap’s docs that I found till now.

                    Do I still misunderstand something or miss some crucial information here?

                    Nixpkgs does not control the build kernel, not sure why you seem to think it does. (…)

                    Right. I now realize that actually in theory the Linux kernel ABI is stable, so I believe what I’m actually interested here in is libc. I now presume I can be sure of that, because the seed image contains gcc and musl (which I currently need to trust you on, yes?), is that so?

                    Env variables are cleared (…)

                    Ah, right: and then any explicitly set env vars result in build script changes, and then because it’s hashed for bclosure (or closure, don’t remember now), which is also included in (b?)closures of all dependees, the final (b?)closure depends on env vars. Cool, thanks!!

                    1. 1

                      Hm; so does it mean I can run a p2pkgs build output outside venv? In Nixpkgs, AFAIU, this typically requires patchelf to have been run & things like make-wrapper (or what’s the name, I seem to never be able to remember it correctly). (How) does p2pkgs solve/approach this? Or did I misunderstand your answer here?

                      It replaces /bin and /lib but keeps the rest of the host filesystem when you run the equivalent of a nix-shell. This seems to work fine and lets you run programs against the host filesystem. It works because on modern Linux kernels you can create containers and do bind mounts without root.
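
                      Roughly like this (illustrative - the venv root path and exact flags are made up for the example, and the command is printed rather than executed):

                      ```shell
                      #!/bin/sh
                      # Sketch only: keep the host filesystem, but bind the package
                      # tree's own /bin and /lib over the host's. bwrap applies binds
                      # in order, so later binds layer over earlier ones and everything
                      # else still resolves to the host. Printed, not executed.
                      venv_cmd() {
                        root="$1"; shift
                        printf '%s ' \
                          bwrap \
                          --bind / / \
                          --bind "$root/bin" /bin \
                          --bind "$root/lib" /lib \
                          -- "$@"
                        printf '\n'
                      }

                      venv_cmd "$HOME/.p2pkgs-venv" vim
                      ```

                      So reads and writes outside the substituted directories go straight to the host filesystem; only lookups under the substituted top-level directories resolve to the package tree.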

                      If we designed a package installer tool (and a distro?), it would also be possible to just install them like an alpine linux package.

                      I now presume I can be sure of that, because the seed image contains gcc and musl (which I currently need to trust you on, yes?), is that so?

                      You can rebuild the seed image using the package tree itself; the seed image is reproducible, so you can check that the output seed is the same as the input seed. You do need to trust me initially, though, before you produce your own seed.

                      Ah, right: and then any explicitly set env vars result in build script changes, and then because it’s hashed for bclosure (or closure, don’t remember now), which is also included in (b?)closures of all dependees, the final (b?)closure depends on env vars. Cool, thanks!!

                      That’s right :).

        2. 1

          Some time ago I was working on designing and implementing esy which is a package manager + meta build system (invoking package specific build system in hermetic environments) for compiled languages (OCaml/Reason/C/C++/…). It looks like Nix but has integrated SAT solver for solving deps, we rely on package.json metadata and npm as a registry of package sources (though we can install from git repos as well).

          I feel like it should be possible to use the same solver ideas, or something like Go’s MVS, in order to make a distributed package tree - this is another idea I really want to try to integrate into something simpler than Nix. I agree that it seems like a great thing and I definitely want it to be built.

          edit: I will investigate esy more - it definitely has most of what I want - The big difference seems to be how p2pkgs simply overrides / using user containers and installs them using DESTDIR.

          1. 1

            I feel like it should be possible to use the same solver ideas, or something like Go’s MVS, in order to make a distributed package tree

            The depsolver is an interesting beast. I’m not satisfied with how it ended up in esy (though we had constraints to operate within, see below) — the feature I miss the most is the ability to create a separate “dependency graph” for packages which only expose executables (you don’t link them into some other apps) — dependencies from those packages shouldn’t impose constraints outside their own “dependency graphs”.

            Ideally there should be some “calculus of package dependencies” developed which could be used as an interface between depsolver and a “metabuildsystem”. That way the same depsolver could be used with nix/hermes/esy/… Not sure how doable it is though — people don’t like specifying dependencies properly but then they don’t like to have their builds broken either!

            edit: I will investigate esy more - it definitely has most of what I want - The big difference seems to be how p2pkgs simply overrides / using user containers and installs them using DESTDIR.

            Keep in mind that we had our own set of constraints/goals to meet:

            • esy is usable on Linux/macOS/Windows (for example we ship Cygwin on Windows, this is transparent to the users)
            • esy uses npm as a primary package registry (appeal to people who know how to publish things to npm, an open world approach to managing packages)
            • esy doesn’t require administrator access to be installed/used (the built artefacts are inside the home directory)
            • esy strives to be compatible with OCaml ecosystem thus the depsolver is compatible with opam constraints and esy can install packages from opam
            1. 1

              The depsolver is an interesting beast. I’m not satisfied with how it ended up in esy (though we had constraints to operate within, see below) — the feature I miss the most is the ability to create a separate “dependency graph” for packages which only expose executables (you don’t link them into some other apps) — dependencies from those packages shouldn’t impose constraints outside their own “dependency graphs”.

              This is much like in a general package tree - statically linked programs really don’t care - but some programs don’t support static linking, or provide dynamic libraries.

              Another challenge is when you don’t have a monolithic repository of all packages you now have double the versioning problems to tackle - Each version is really two - the package version and the packaged software version.

              My goals for a general package tree are currently:

              • Linux only (it worked for docker).
              • Doesn’t require administrator for building or using packages.
              • Allows ‘out of tree packages’ or combining multiple package trees.
              • Transparent global build caching from trusted sources (like nixos, I don’t think esy has this).
    2. 4

      Move all my servers to NixOS, tweak my Emacs from scratch setup, work on my personal assistant (written in Rust). Excited for all of it!

    3. 3

      Every month I pour half a day into ‘the devil’s threesome’ project - a laptop with nvidia-intel bumblebee and an amdgpu eGPU - and work on the code for transparently switching between the three, migrating GL contexts in the process. With November beginning, it is that time again.

      1. 2

        That sounds really cool and super interesting. Is the code side of your work open source?

        1. 1

          Most of the code for that part of the project goes into this monster: (platform/egl-dri).

    4. 2

      Finally getting back to working on Octobox, I’m building a browser extension for GitHub that will enable closer integration and faster triaging of issues and pull requests: https://github.com/octobox/extension

    5. 2

      Paternity leave over, back in the busy day to day. Side-wise, I hope I’ll manage to figure out a docker/nyxt issue with video crashing, that is incredibly out of my comfort zone.

    6. 2

      Festival (Diwali) week here and Reaper (Cradle 10) releasing tomorrow, so I won’t get much done work wise.

    7. 2

      I’m in Toronto for work!

      1. 1

        I love Toronto, such a surprisingly good city for food!

    8. 1

      Finding a new job where I wouldn’t be working on a team of outsourced Indians. Seriously, it’s one of the worst aspects of doing any software development for living.

    9. 1

      For a client, continuing to port a custom query engine on a key-value store for metrics to ClickHouse.

      Otherwise finishing up support for hosted dashboards and email exports in DataStation.

    10. 1

      Finished reading The Denial of Death and now I’m diving into Freud and, hopefully soon, Otto Rank. Also need to bone up on CIDR, just because it’s something that still confuses me. But thankfully we have W. Richard Stevens’ unparalleled work to reference :)

    11. 1

      Having my first in-person meeting with my SHOP coworkers. I’m also on call, so I get to learn a whole bunch about our pipelines and whatnot that as a manager I’ve been able to gloss over. That’s good, even if I don’t want to do it.

    12. 1

      Getting back to work after some time off to recover from surgery!

    13. 1

      Work stuff continues apace.

      On the side, I plan on trying to port my old game jam games in Love2D to use love.js. So far, I have pretty solid efforts on two of them. The end goal is to have a collection of games on itch.io so I can show friends and co-workers. That being said, their game-jam-rawness is really evident, lol. I will be putting disclaimers on them, lol.

    14. 1

      I made good progress on my Lisp bindings to H3 over the weekend, but there’s still a lot to do. I’m currently porting tests from the Java and Python bindings into Common Lisp, and cleaning up, refactoring, and documenting. I also need to set up CI with GitHub Actions.

      Also keeping an eye out for interesting jobs to apply for.

    15. 1

      Finally restarted a blog for professional purposes, hopefully I’ll get to write more!

    16. 1

      Working on a new blog post and developing new features for telescope-repo.nvim, based on recent user discussion.

    17. 1

      This week I’m doing PagerDuty incident commander training, and working with a colleague on a proposal to remove an unbound queue as $service’s primary interface, because queues don’t fix overload.

      Outside of work it’s much the same as always: playing guitar, reading (currently Martha Wells’ Murderbot Diaries), exercising, & keeping my 9yo son alive.

    18. 1
      • Going to see Tim Minchin live for the first time in over a decade (for me, not him)
      • Landing some stuff at work that’s been a few weeks in the making
      • Reinstalling NixOS so it’s striped across all the disks in the server, not just the two I had plugged in last time I installed it (oops)
      • Putting an actual workload on said server so it’s not just freeloading electric off me