i usually post random stuff i’m struggling with into random slacks i’m in in relevant channels but yeah, it doesn’t feel as rewarding as when working on a shared goal/project
The sqlite cinematic universe is so fascinating to me. It’s undeniably very good software engineering, the test suite for sqlite alone is a marvel. But then… that test suite is written in TCL. The sqlite library code is incredibly portable and used on more devices than almost anything. And yet.. the build system is this weird amalgamation thing no one else does. The code management seems fine. But they use this unique home-grown DVCS.
I’m not trying to say all this stuff is wrong. Clearly it’s working great for the sqlite product. But then why doesn’t anyone else do the same thing?
If your project reaches sqlite levels of success, I say just keep doing what you’re doing. How many projects using “normal” build/distribution methods are as portable and ubiquitous as sqlite? When git started, it was a unique home-grown DVCS too. (Inspired by Bitkeeper, sure, but not identical.)
fossil’s initial release is pretty close to git’s (2006)
I think this page has some pretty good reasoning behind their choices: https://fossil-scm.org/home/doc/c44d9e4d/www/fossil-v-git.wiki
I really don’t want to be this grumpy old man, but.. can we just stop comparing these two completely unrelated languages? Where are all the Zig vs Haskell posts? It would make just as much sense. Go is simply not a low-level language, and it was never meant to occupy the same niches as Rust, which makes deliberate design tradeoffs to be able to target those use cases.
With that said, I am all for managed languages (though I would be lying if I said that I like Go - it would probably be my last choice for almost anything), and thankfully they can be used for almost every conceivable purpose (even for an OS, see Microsoft’s attempts); the GC is no problem for the vast majority of tasks. But we should definitely be thankful for Rust, as it really is a unique language for the less often targeted non-GC niche.
i would read a zig vs haskell post
You should understand the context a little bit: Shuttle is a company focused on deploying backend applications in Rust. Backend applications (server programs) are precisely the main use case for Go. It absolutely makes sense for them to compare the two in this context.
Java is an even more common choice for backend development, but fair enough.
My gripe is actually with the common, hand-in-hand usage of this “rust’n’go” term, which I’m afraid more often than not depends on a misconception around Go’s low-levelness, as it was sort of marketed as a systems language.
It is a “systems language”… by the original definition where low-levelness was a side-effect of the times and the primary focus was long-term maintainability and suitability for building infrastructural components.
(Which, in the era of network microservices, is exactly what Go was designed for. Java, Go, and Rust are three different expressions of the classical definition of a “systems language”.)
That said, there has been a lot of misconception because it’s commonly repeated that Go was created “to replace C++ as Google uses it”… which basically means “to replace Python and Ruby in contexts where they can’t scale enough, so you used C++ instead”.
Great link, thanks!
BTW Rob Pike is cited in it where he regrets calling Go a systems language, but that’s probably because he used it in the original sense, and the meaning has drifted.
Go was originally designed as a successor to C. The fact that it’s been most successful as a Python replacement was largely accidental. It does (via the unsafe package) support low-level things but, like Rust, tries to make you opt in for the small bits of code that actually need them.
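A minimal sketch of that opt-in (the helper name here is mine, not from any of the posts): the unsafe package has to be imported explicitly, so the unsafety stays visible and localized, much like an unsafe block in Rust.

```go
// Sketch of Go's opt-in low-level escape hatch: the unsafe package
// must be imported by name, so the unsafety is visible at the import
// site and confined to the small bits of code that need it.
package main

import (
	"fmt"
	"unsafe"
)

// firstByte reinterprets the in-memory bytes of an int32 and returns
// the first one — the result depends on the machine's endianness.
func firstByte(x int32) byte {
	return (*(*[4]byte)(unsafe.Pointer(&x)))[0]
}

func main() {
	if firstByte(1) == 1 {
		fmt.Println("little-endian")
	} else {
		fmt.Println("big-endian")
	}
}
```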
Go had three explicit, publicized design goals from the start; none of them were “be a successor to C”.
They were written relative to the three languages in use at Google:
Expressive like C++/Python
Fast like Java/C++
Fast feedback cycles (no slow compile step) like Java/Python
Rob Pike was pretty explicit that it was intended as a C successor. He said this several times in various things that I heard and read while writing the Go Phrasebook. His goal was to move C programmers to a language with a slightly higher level of abstraction. He sold it to Google relative to their current language usage.
In that case, I suppose I must defer to your closer experience!
They really dropped the ball on that expressivity part. It’s as expressive as C, which is.. not a good thing.
The article is about a DB-backed microservice. I’m curious whether you feel rust or go is inappropriate for this domain.
giving the new Queens of the Stone Age album a listen
I think the worst part of the surprise came from the intersection of Docker and ufw. I’ve gotten used to using ufw on Ubuntu hosts. And the way that Docker changed the firewall rules did not trigger any report in the ufw status verbose command.
Also, I’d have caught this in my own testing before I invited anyone to use my web app. Because I always do check my database and app server from a machine that shouldn’t be able to connect to them. But I’m impressed that the internet found its way in before I got ’round to checking that.
Yeah, you have to explicitly set 127.0.0.1:8000:8000 (or some other available IP on the host) in order to limit it.
Always love hearing about folks’ ideas of unexpected behavior though!
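In Compose terms that loopback binding looks like this (the service name and image are just placeholders) — publishing on 127.0.0.1 instead of the default 0.0.0.0 is what keeps Docker from exposing the port past the host firewall:

```yaml
# docker-compose.yml — sketch; "web" and its image are hypothetical
services:
  web:
    image: myapp:latest
    ports:
      - "127.0.0.1:8000:8000"   # bind to loopback only, not 0.0.0.0
```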
Good for them, but this is like… the exact opposite of what I want. XD
And I’m no expert, but considering that margins are low and competition is high at the low end, this seems like a poor strategic move. Hopefully Google is paying them well for it.
This is like making a ceramic Solo cup. Who wants a high-end Chromebook? Who wants a Chromebook that can be serviced easily? The whole point of Chromebooks is that they are disposable and have no local state.
I imagine there are people who like Chromebooks and do not think computers should be treated as disposable. I don’t know that there are many, but that’s because I don’t get Chromebooks.
I think it’s not about “disposable”, it’s about “replaceable”. I had one of the first Chromebooks, and for non-dev work I loved it. I could use all my favourite websites, read all the articles, do most of the stuff I do on a computer when I’m not working. And when coupled with the (then unofficial) SSH extension, I could even do some light work in vim. Not real dev work, but some tests, trying things out, and fiddling with the servers.
Think of it as an advanced tablet. It’s not useless, but it’s not usable for everything, at least not as a primary computing device. I’ve almost never had a tablet, but I liked my Chromebook.
And as such, why not aim for a high-quality product and the high-end market? People buy iPads and Surfaces and I never would, and I don’t think Microsoft or Apple or their customers are stupid.
That was certainly the original sales pitch, but there’s a concerted effort away from that in the past few years. The new pitch is that Chromebooks allow you to give an always-up-to-date, easy-to-lock-down tool to everyone in your company/school that, when damaged, loses no data. Throw in their ability to run Wayland and Linux, and I’ve seen a couple of full-on web dev companies that give all their employees powerful Chromebooks, rather than MacBooks.
Do I want that? Hell no. But apparently some people do—and since “some people” seems to overwhelmingly be large school districts and massive companies, there is money to be had there.
I’m not seeing those types of places wanting to buy a trivially hackable laptop, though…
I’m also baffled by this. Who are these people that will pay a premium for full control over their hardware but don’t care at all about controlling their software?
You can have Free software, or Free hardware, but apparently not both at the same time?
I would not classify the Framework laptop as “Free Hardware,” but would say that some people want to be able to lifecycle their machines forever, no matter the OS. I further doubt it would be hard to install Linux on the mainboards on these Frameworks.
they make ceramic solo cups and they’re kinda cool :D
Putting on my tinfoil hat, this was part of Google’s long-term bet of getting Chromebooks into the education system. Now, with children growing up indoctrinated into the Google ecosystem, they’ll find this just a hardware transition, lessening the burden of a software transition too—especially with all their data in the cloud. This is probably why I see a lot of younger folks here just using iPads, unsure how to handle it when there isn’t an app for that (I sent an AsciiDoc file and there was no way to ‘open’ it).
This is another reason why FxOS was too ahead of its time. Boot2Gecko being in the hands of children would make me feel a lot safer than its Google alternative.
amazingly, this chromebook has enough local storage to actually be usable
This is a nice writeup. I never took the time to capture things this way when I was doing more ops/debugging work. Will definitely keep it as an example for myself and others :D
What a crappy writeup… the amount of time given to Fossil does it no justice. I’ve been using it locally for a few months now, hosting multiple repos, and I’ve had such a good experience. It’s so easy to run from a jail on my TrueNAS, and even to add offsite backups to, because all the projects sit in the same directory with a single file each. For open source I think you could easily do some sort of “fossil localdev to public git” flow. The focus on Windows/WSL is also annoying, but I suppose it allows the whole post to be dismissed by folks who use neither. Hopefully the mention of all the different projects sparks folks’ interest. I think it’s fun to tinker with different VCS tools.
The focus on windows/WSL is also annoying but I suppose it allows the whole post to be dismissed for folks who use neither.
Windows compatibility is really interesting: it’s important for a lot of users, but not something a lot of developers have the computers, interest, or even awareness to match. Anything that wants to seriously compete with Git would need to run natively on Windows without WSL.
But Fossil does (it’s a single drop-in executable via either winget or scoop), and Git not only runs fine on Windows; it’s the default SCM that Microsoft uses internally these days. This would be like evaluating Subversion on Linux by running TortoiseSVN on WINE.
Git runs on Windows but it’s not “fine” - it’s painfully slow and a little finicky to set up.
Personally it doesn’t bother me enough to run through WSL, but I’ve heard people suggest it.
It’s slow enough that for big operations I’ll occasionally switch over to the shell and manually run git commands instead of using Magit, because Magit will often run multiple git commands to get all of the information it needs, and it just slows down too much.
Context for this response: I did Windows dev professionally from 2005 to 2014, and as a hobbyist since then. Half the professional time was on an SCM that supported both Mercurial and Git on Windows.
Git runs on Windows but it’s not “fine” - it’s painfully slow and a little finicky to setup.
Setup is literally winget git (or scoop git, if you’re weird like me), aaaand you’re done–or at least in the same git config --global hell everyone on Linux is in. Performance has been solid for roughly four years if you’re on an up-to-date Git. It’s not up to Linux standards, because Git relies on certain key aspects of Unix file and directory semantics, but there are even official Microsoft tools to handle that (largely through the Windows equivalent of FUSE).
Running anything through WSL on files will perform atrociously: WSL1 is slower than native, but mostly okay, while WSL2 runs on p9, so you’re doing loopback network requests for any file ops. I do run Git in WSL2, but only when working on Linux software, where it’s running natively on Btrfs. You’re trying to lick a live propeller if you use WSL2 Git on Windows files.
I have zero experience with magit on Windows because Emacs on Windows is, in my opinion, way too painful to deal with. I love Emacs on *nix systems! Keep it! It’s awesome! This is just about Windows. And in that context, things like e.g. Emacs assuming it’s cheap to fork a new process–which it is on Linux, but not on Windows–can make a lot of Emacs stuff slow that doesn’t need to be. That said: if you’re using native Git, and not e.g. Cygwin or WSL1 Git, it should perform well out-of-the-box.
To clarify, most of the finicky setup on Windows was related to SSH keys, because Windows doesn’t support SSH well. Eventually I ended up getting it working with Putty, IIRC.
I have the opposite experience with Emacs on Windows. It more or less “just works” for me, and it’s really the only way I can tolerate using Windows for development. Some things are slower (basically anything that uses a lot of fork/exec, like find-grep), but for the most part it’s the same as on Linux and OS X, just a version or two behind :-/
I suspect we just have different expectations as far as Git performance, though. I’m using the latest version (as of a couple months ago) from https://gitforwindows.org/, have “core.fscache” turned on, and other “tricks” I found via StackOverflow (a lot of people think it’s slow on Windows) to speed things up, and it’s still noticeably slower than on Linux - especially for large commits with big diffs.
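For what it’s worth, the usual speed knobs on Windows can live in ~/.gitconfig; a sketch (core.fscache is specific to Git for Windows, and the builtin fsmonitor needs a reasonably recent Git, 2.37 or later):

```ini
[core]
	# Git for Windows only: cache lstat/attribute lookups
	fscache = true
	# builtin filesystem watcher; speeds up `git status` on big repos
	fsmonitor = true
	# cache results of untracked-file scans
	untrackedCache = true
```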
As a reminder, this article was about Git versus other SCMs. That said:
To clarify, most of the finicky setup on Windows was related to SSH keys, because Windows doesn’t support SSH well
SSH and ssh-agent are built in since at least Windows 10. They come directly from Microsoft, require no third-party dependencies, and integrate directly with the Windows crypt store and the Windows Services manager.
I suspect we just have different expectations as far as Git performance, though
Git on Windows does perform meaningfully worse than on Linux. Two reasons are generic (sort of) to Windows, one to Git. On the Windows front: the virus scanner (Windows Defender) slows things down by a factor of 4x to 10x, so I would disable it on your source directories; and second, NTFS stores the file list directly in the directory files. This hurts any SCM-style edit operation, but it’s particularly bad with Git, which assumes it’s cheap. That last one is the one that’s partially Git-specific.
In the context of this article, though, Git should be performing on par with the other SCMs for local ops. That’s a separate issue from the (legitimate!) issues Git has on Windows.
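As a sketch, that built-in agent can be switched on from an elevated PowerShell (the key path below is only an example):

```powershell
# Enable and start the Microsoft-provided OpenSSH agent service
Set-Service -Name ssh-agent -StartupType Automatic
Start-Service -Name ssh-agent
# Load a key into the agent; Git can be pointed at the Windows OpenSSH
# client via core.sshCommand if it isn't picked up by default
ssh-add $env:USERPROFILE\.ssh\id_ed25519
```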
That’s a tiny bit misleading. Windows 10 now includes them but it certainly did not include them when it shipped in 2015.
Eh, that’s fair; Microsoft’s decision to call over half a decade of Windows updates “Windows 10” leads to a lot of confusion. But in this case, the SSH bits I’m talking about were added in 2018—five years ago. That’s before React Hooks were public, or three Ubuntu LTS versions ago, if you want a fencepost.
That’s definitely a bit older than I thought. If I had to answer when it shipped in mainstream builds without looking it up, I’d have said it was a 20H1 feature. At any rate, I wasn’t calling it new so much as saying that “at least Windows 10” reads as “2015” to me.
Same. Emacs on Windows is good. I use it for most of my C# development. If you want to not be a version or two behind, let me point out these unofficial trunk builds: https://github.com/kiennq/emacs-build/releases
Git runs on Windows but it’s not “fine” - it’s painfully slow and a little finicky to setup.
I’d say that setup is not finicky if you use the official installer — these days it sets up the options you absolutely need. It’s still painfully slow, though, even with the recently-ish-improved file watcher. It’s generally OK to use from the command line, but it’s not fast enough to make magit pleasant, and it’s slow to have git status in your PowerShell prompt.
Thanks for pointing out the package managers for Windows. I saw brew is supposed to work as well, but I have no context other than a cursory search.
Her use of WSL1 would be curious even in 2021; I just don’t get why one would do that.
TL;DR: Makes Fossil and Pijul interesting. Makes the fair point that git isn’t the be-all and end-all, but is also currently unsurpassed overall.
Fossil is a ton of fun. Highly recommend playing around with it.
hiding from the wildfire smoke and watching the kiddo. maybe will get a chance to work on a tiny rust project I’ve been playing with :D
Hiking around the Mt. Baker area with family visiting. Hope the temps stay nice :)
I’ve bought a bunch of odroid SBCs in the past on ameridroid.com
Bought a skateboard last week to get back into skateboarding after 10 years. Local park has a nice bowl. Excited to be outside and doing something healthy.
Break a leg!
This seems super helpful as a tool for brainstorming and not designing purely based on experience. Is TLA+ widely used at other companies as well?