Some things about Podman I have not seen written in other comments:
Development is very active and recent versions add a lot of new features and fix bugs. For example, Quadlet (the feature that makes it possible to generate systemd units by writing files which define your resources with a syntax close to systemd itself) got new features in 5.3.0.
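For a concrete picture of what a Quadlet file looks like, here is a minimal sketch; the file name, image and port are made up, and the point is just that a plain systemd-style unit under ~/.config/containers/systemd/ gets turned into a service:

    # ~/.config/containers/systemd/myapp.container (hypothetical example)
    [Unit]
    Description=Example web app managed by Quadlet

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Install]
    WantedBy=default.target

After a systemctl --user daemon-reload, Quadlet generates a myapp.service unit that you can start and inspect like any other systemd service.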
This means you would rather use a Linux distribution that ships recent versions of Podman, but be prepared for breakage if the version is not pinned and you don’t test upgrades. Fedora ships up-to-date versions, unlike Ubuntu, for example.
If you use Podman as root, you won’t need to deal with iptables/netfilter, as published ports are bound to addresses just like for any other process, unlike Docker, which usually bypasses your firewall rules. (Apparently Podman can use iptables under the hood in some situations, though.)
Podman supports rootless setups, with user and network namespaces. It means that you can create separate users for your apps: “root” in a container will have the UID/GID of your user on the host, and other UIDs/GIDs are mapped to unique ranges defined for each host user (in /etc/subuid and /etc/subgid). Though, as unprivileged users can’t directly bind to ports 80 and 443, you’ll have to find a solution based on a reverse proxy, iptables, CAP_NET_BIND_SERVICE or some magical networking thing (as root on the host you can theoretically access all user network namespaces…).
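As a sketch of what that mapping looks like, /etc/subuid and /etc/subgid assign each host user its own range (the user names and ranges below are made up):

    # /etc/subuid (and the same layout in /etc/subgid)
    alice:100000:65536
    bob:165536:65536

Inside one of alice’s rootless containers, UID 0 maps to alice’s own UID on the host, while container UID 1 maps to the first ID of her range (100000), and so on.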
You can use Docker Compose with Podman rootless just fine, but you need this:
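The exact snippet the comment refers to isn’t included above; the usual rootless setup is to enable the user-level Podman API socket and point DOCKER_HOST at it, roughly:

    # Enable the Podman API socket for your user
    systemctl --user enable --now podman.socket

    # Make Docker Compose (and the docker CLI) talk to it
    export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock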
I use this setup on my work laptop and I haven’t had any issue. You can even use the Docker CLI with this environment variable (I sometimes do so by mistake, as Docker got installed along with Docker Compose). Podman implements most of the Docker HTTP API for compatibility reasons, but it has its own HTTP endpoints for the Podman CLI.
The default network created by Podman only supports IPv4; you need to create your own network to also get IPv6.
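If I recall correctly, podman network create has an --ipv6 flag for this; something along these lines, with an arbitrary network name:

    # Create a network with IPv6 enabled (dual stack)
    podman network create --ipv6 mynet6
    podman run --rm --network mynet6 docker.io/library/alpine ip -6 addr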
If the interpreted language allows for parallel execution when launching the executable in Tier 1 (which may not be the case for JavaScript without workarounds, as other comments suggest), it will introduce a potential denial-of-service vulnerability through PID exhaustion.
Arch is a distro that leaves a lot of work to the user. But this can make it a good choice for somebody building their own distro, like SteamOS. Same idea with Gentoo and Chrome OS.
Having been an Arch Linux user for about 15 years, I have to say Arch Linux is the best base for a distro targeting the consumer market. Everything in Arch is simple, straightforward and consistent enough.
I switched over my PC from Ubuntu a few years ago when I realized I was out of touch with changes in core system tools and I was always headed to the Arch wiki to debug anything anyways. I’ve been really happy with it. If I have to rebuild the Lobsters VPSs it’s tempting to move to Arch. I’ve had hassles with backups because versions of Ruby and mariadb-dump in Ubuntu LTS are well behind what’s convenient. We’re simply not operating at a scale where that pace is valuable to us.
I ran Arch in an Internet-facing production environment for a while years ago (circa 2013 or so), and I strongly advise against it. I totally get it - in fact, I came to the idea the same way you did (I ran Arch on my laptop and thought the simplicity and the fact that I knew exactly how everything was set up would be valuable in production - and indeed they were).
The problem is security updates. Arch updates sometimes require manual intervention (whether that be because of Pacman shenanigans or because a package upgraded a major version), and there isn’t a good way to tell beforehand what’s a security update and what’s not. Because of that, if your Arch host is internet-facing, you’re signing up to SSH in every few days and upgrade packages and babysit the machine, in perpetuity. Even if you’re on vacation, or tired.
Major package upgrades are also an issue. You have to take them, because partial upgrades are unsupported, but they can be really disruptive. I got a hard lesson in this when I went to apply security updates and all of a sudden unexpectedly had to sit there for 2-3 hours learning/rewriting configs because Arch had upgraded from Apache httpd 2.2 to 2.4.
Thanks for these experience reports (ping sibling replies @sunng and @Exagone313). This sounds like moving to Arch would be a significant maintenance burden in the form of surprise breakages. It’d probably be fine if we ran enough servers to green/blue or have a staging env, but not with our current setup. I guess a better plan would be to take a smaller step from Ubuntu LTS to Ubuntu Interim, which would mean less churn in ansible for mostly-current versions of packages.
Yeah non-LTS Ubuntu was going to be my suggestion. Fedora Server is potentially another option if you want really new stuff, but I haven’t done it myself so I’m not sure what other issues that approach has. (I’m interested in it for reasons that Lobsters wouldn’t be - FreeIPA looks less annoying to set up on it, etc.)
When I say Arch is good for the consumer market, I mean it’s not a good idea to run Arch on your server.
With Arch, you must upgrade frequently, but that’s not how servers are usually treated. I have my VPS running Arch and I only update every few months; it runs into a lot of issues when you don’t update frequently.
In addition to what strugee has written, I had some issues with running Arch Linux servers (though I still do for some specific use cases). Note that I don’t do this in professional production, so I don’t run tests before running upgrades on those servers.
A few years ago, a NodeJS upgrade broke all features relying on OpenSSL. I had to downgrade the package. Package upgrades are not always tested.
When running software written by third parties, I often end up with incompatible (too recent) versions of dependencies, such as PostgreSQL.
It’s possible to run postgresql-old-upgrade (after editing the systemd unit) but it can break, as it’s only packaged for running pg_upgrade.
I used rbenv, for Ruby, and nvm, for NodeJS, when I needed specific versions of those tools, but it requires self-building and tracking upgrades yourself. (You have packages for NodeJS LTS versions now, but sometimes it’s not enough.)
Running containers (with Docker or Podman) can fix the issue, but making sure that images are maintained properly (or making your own) and upgrading the containers can be complex.
I often skip kernel upgrades (pacman -Syu --ignore linux-lts) because it would require a reboot about every week, even with linux-lts (upgrading removes kernel module files, those could be needed).
I don’t expose my Arch Linux servers directly on the internet anymore. My internet-facing servers are running Debian or Ubuntu with unattended-upgrades. I still have to reboot for kernel upgrades, but it happens way less often than for Arch Linux.
You can use the linux-keep-modules package to save the running kernel’s modules and remove them after you reboot.
I migrated from Arch to NixOS relatively recently.
Arch is still better for some use cases. For example, the Arch Build System is really simple compared to patching packages using Nix tooling, which is not so well documented and forces you to fight upstream as it deliberately diverges from the FHS.
But NixOS is surprisingly easy to install and maintain. I think it’s the easiest and most robust distro ever in that regard. In particular, you can make dramatic changes with no fear. NixOS also shines if you want to keep lots of services running, as you have some centralized options, compared to the need to maintain conf files.
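To illustrate those centralized options, enabling a service is usually a couple of type-checked lines in configuration.nix rather than a hand-maintained config file (the services chosen here are arbitrary examples):

    # /etc/nixos/configuration.nix (fragment)
    {
      services.openssh.enable = true;
      services.postgresql.enable = true;
    }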
Many batteries-included distros have two disadvantages that come to my mind:
They are heavy, coming with lots and lots of stuff pre-installed by default that is not really needed in specialized distros like SteamOS. I cannot imagine Steam requiring LibreOffice, for instance, so they could either go with a lightweight base like Arch or Alpine, or take Ubuntu and strip it down to the bones - building on Arch is likely much easier.
They often have a lengthy release cadence - maybe with the exception of Fedora. Arch Linux follows a “move fast and break things” philosophy. I left Ubuntu / Linux Mint exactly because I couldn’t stand having to work around Bluetooth driver problems for half a year before they would finally ship the kernel versions that got rid of them.
It is based on Rustls, which uses an OpenSSL fork under the hood (aws-lc). Until Rustls fully replaces OpenSSL with a Rust solution, I don’t see the point of using pyrtls.
OpenSSL is two things, libcrypto for the primitives and libssl for TLS. (I can’t remember where the x.509 nonsense lives.) There’s far too much overcomplicated indirection in the OpenSSL APIs, and it isn’t getting better: in OpenSSL 3 they changed the hashing API which introduced a significant performance regression for things like SHA-256.
aws-lc is “a secure libcrypto that is compatible with software and applications used at AWS” so it still has a lot of the OpenSSL nastiness.
tone-deaf post, elastic obviously didn’t anticipate Amazon’s fork - this represents back-pedaling on a bad decision. idk why companies can’t just own up to that.
This is not back-pedaling, as the original license is Apache 2.0, which OpenSearch still uses.
Back-pedaling would be putting it back under Apache 2.0, so that both projects could merge, though I’d understand if they kept some corporate-specific tools around it under AGPL (or optionally a proprietary license).
Anyway, they will keep a CLA so they can change licenses at any point in the future, so I would never use their software. OpenSearch doesn’t have a CLA, only a DCO.
I wonder how many web crawlers are getting stuck recursively on ubuntu right now.
EDIT: oh, it stops at /ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu/ubuntu and after it’s forbidden
I can’t recall the site, but there was an obscure complaint on… I think it was the cloudflare forum… where this guy had infinite subdomains and some crawler was visiting them all. I think it was probably actually designed as a crawler honeypot, and the frustration was feigned. I’ll try to find it.
I’m curious which of these Yuzu forks gets anyone technical on them. So far what I’ve seen is just rebranding and websites, but not much in the way of other activity.
Copyright relies on three things (this also goes for all laws):
You are able to somehow find out that someone is breaking the copyright license
You hold sufficient institutional or financial power to be able to pursue them in a court of law
You are able to doxx this person enough to be able to hold them to legal justice
So, how many times are all three of these points true? Pragmatically, we can see that this issue is not actually an issue to the owner of the repository, because Copyright only matters in context of a lawsuit and legal action. What are the original authors going to do, sue them?
Fundamentally, “laws” and “justice” only matter if the victim has the ability to see them enforced. License term enforcement is fundamentally broken because the people who can afford to enforce it are often not the creators. The system is, and always has been, set up such that people with money or institutional power can afford to ignore laws, while everyone else bends the knee.
Pragmatically, we can see that this issue is not actually an issue to the owner of the repository, because Copyright only matters in context of a lawsuit and legal action. What are the original authors going to do, sue them?
the parent comment was saying that it discredits the owners, which it does, regardless of the pragmatic implications. none of this justifies or explains why they would swap out the LICENSE file. it’s an indication of incompetence.
I don’t think anything I’ve said disagrees with anything that you have said. In this case the only thing that held the author to the copyright was social accountability.
The fact that a social side-channel was what held the author to account does not disagree with my claim that copyright is functionally worthless in the majority of cases. In this case, the author made a decision to assent to social pressure, which is a side-channel to the intended function of copyright.
right but I wasn’t talking about your claim that copyright is functionally worthless, rather your claim that “this issue is not actually an issue to the owner of the repository.” this appears in contradiction with my comment and the parent comment, saying that it actually does discredit the owners.
To me it sounds like the very reason to use and prefer GPL whenever possible, to subvert this system. It’s a weird argument to make in defense of someone trying to ignore GPL.
Assuming that a release is final is a tough choice to make.
I used HexChat for many years, because it’s the one whose interface I favoured (first on Windows, then on Linux).
It has some issues (for example, weird handling of copy/paste) but it offers some good features (random colors for users, themes, command shortcuts…). I pair it with the ZNC bouncer.
I don’t know how it would compare with more recent clients/bouncers, such as emersion’s suite of IRC tools (soju + gamja/goguma). I think that console clients are impossible to use (irssi/weechat) - unless that’s all I have.
Nowadays Android (mobile) is my main platform for instant messaging and apparently goguma is suited for the platform.
I’m surprised to see these put together. WeeChat has about as much mouse support as any true GUI IRC client I’ve tried — one can even drag-and-swipe on a nick in the nick list to /kick someone (which personally I find rather a silly and hazardous feature).
If it provides packages that take a lot of time to build (rust, llvm) it could really be interesting. I tried Gentoo for some time, but having to rebuild packages that take 20-30 minutes on each update was tough.
FreeBSD now has a nice way of doing this kind of hybrid. If you build packages with Poudriere (the recommended way), you can tell it to fetch any packages where your local revision and options match the official repo. This lets you build the ones that you want to do custom things with, but not rebuild ones where you will just get the same as the official repo.
For Make to be able to rebuild your program when headers change, without defining a dependency on each header file by hand, you can add -MMD -MP to your CPPFLAGS (unlike CPPFLAGS, CFLAGS is also added to the linking arguments).
The C compiler will then create partial Makefiles that can be included.
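A minimal sketch of the setup, reusing the foo.c/utils.c/utils.h file names from the example that originally accompanied this comment (the rest is assumed):

    # Ask the compiler to emit foo.d and utils.d next to the object files
    CPPFLAGS += -MMD -MP

    # Built-in rules compile and link; no header dependencies written by hand
    foo: foo.o utils.o

    # Pull in the generated dependency fragments (silently skipped on the first run)
    -include foo.d utils.d

On the next build, make reads foo.d and knows to rebuild foo.o when utils.h changes.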
What about server-side rendering/SSR? Some components can produce HTML and CSS and provide no interaction, at least as a fallback. You still need technology that will inline the component into raw HTML.
I said this in my other comment, but the way it typically works in the news industry is you make a micro-site with a traditional framework (Svelte was literally created to do this), upload it to an S3 bucket, and then inline it on an article page with an iframe and some JS to resize the iframe to be the right height. The thing that makes this sustainable is once publication is over, you essentially never touch the project again. Maybe one or two tweaks in the post-launch window, but nothing ongoing.
A different approach is some variation on “shortcodes”. There you have content saved in the database along with markers (“shortcodes”) that the CMS knows how to inflate into a component of some sort. Let’s say you have [[related-articles]] to put in an inline list of related articles or <sidebar story="url"> for a sidebar link to a story or {{ adbreak }} or <gallery id="123"> or whatever. The advantage of these is that they aren’t one-and-done. You can change the layout for your related articles and sidebars and change ad vendors and whatnot. But the downside is they tend to be pretty much tied into your CMS, and you basically can’t reuse them if you ever want to go from one CMS to another. Typically the best migration path is to just strip them out when you move to the new CMS, even if it does make some of the old articles look funky and weird.
Yeah, I considered and rejected shortcodes (or rather, Astro components in MDX files, which is the Astro equivalent) for exactly this reason. Very convenient in the short term, much more painful when migrating.
XML is just not used for the same thing; the two are not comparable.
First things first: not throwing an error when the type of an attribute is wrong when parsing a YAML schema is a mistake. A version number must be a string, not a number, so version: 1.20 shouldn’t be allowed. For example, if I remember correctly, Kubernetes is strict about data types when parsing YAML, and it works well. If <version>1.20</version> works as you would expect, it’s only because everything is a string in XML (you can use a schema to validate a string, but it is still a string).
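To make that concrete, a plain scalar like 1.20 is read as a number by the default YAML schema, while quoting keeps it a string (two single-key documents shown for comparison):

    version: 1.20     # plain scalar: parsed as the float 1.2, the trailing zero is lost
    ---
    version: "1.20"   # quoted: stays the string "1.20"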
About XML, it is a language used for documents. Tags and attributes are used to provide more data around the text contained in the document. You can read a good article on XML here: XML is almost always misused (2019, Lobsters).
YAML, on the other hand, is a language for storing data structures. It has its own limitations, but also some features that are not widely known, such as anchors.
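As an illustration of anchors, you can define a mapping once and reuse it elsewhere; the merge key used here is widely, though not universally, supported, and the keys and values are made up:

    defaults: &defaults
      adapter: postgres
      host: localhost

    development:
      <<: *defaults
      database: dev_db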
I would keep most things as-is; I don’t see the point of a full renaming, it just needs some clarity added on the homepage and in the documentation.
There is the Oil Shell project that groups everything together (I think “Oils for UNIX” distances itself from what the project is about in the first place).
${PREFIX}/bin/oil is the Oil Shell main binary, which runs the Oil Shell with the Oil Language.
${PREFIX}/bin/osh is the compatibility binary, which runs the Oil Shell with the Oil Compatibility Language, a POSIX implementation with some additions (like aiming to be compatible with Bash).
I would stop using the uppercase name OSH, since it is not clear what it refers to. OSH the language would effectively be renamed to “Oil Compatibility Language”. Referring to the binary would use the lowercase form osh, preferably explicitly as the “osh binary”. Since “Oil Compatibility Language” is too long to write, you could also use “osh language” as a shortcut, as long as you stay explicit about whether you are referring to the language or the binary.
As for why the Oil Compatibility Language’s binary would be named “osh”: it is closer to the names of commonly used shells, which all use the “sh” suffix, while “oil” is a new thing and doesn’t even use this archaic suffix.
PS: If the Oil Shell project starts to aim for the reimplementation of a larger part of what a POSIX operating system is, using the name Oils for UNIX would start being more relevant.
Also bin/oil and bin/osh are symlinks – we would still need a name for the binary they point to. (oils-for-unix is my suggestion. In the old Python tarball it’s actually oil.ovm, though nobody really knows that.)
Scribble.io has ads? I have spent too many years behind uBlock…
Actually looks like there are no more ads! Never knew.
It still has ads.
Uh … not sure why I had none then :D
I already stumbled upon this, but I didn’t know it was developed in Go despite the Rusty name.
haha, yeah, I assume many tech-savvy people might think it is Rust.
I initially found the domain scribble.rs and thought it was fitting, so that’s what I named the project. Sadly I failed to pay for the domain and some scalper got it. I don’t see myself paying hundreds to get it back though.
I was suspicious about the AGPL to MIT relicense but prior contributions are probably not legally significant changes (IANAL).
I am truly baffled why Valve uses Arch Linux, but any investment into open source I support!
My theory is that Ubuntu/Debian move too slowly for Valve. Previous releases of SteamOS were based on Ubuntu, but since Valve is pushing a lot of changes to the Linux graphics stack and probably doesn’t want to diverge from upstream too much, they need their changes to trickle back to them quickly, which is something Arch is really good at.
No need to theorize. That is the reason.
Debian Sid would’ve also fit those requirements. But cool nonetheless!
Having used Sid for a few years before switching to Arch, I find Arch infinitely more stable. With Sid there’s no expectation that, after upgrading a random package, dpkg won’t be irremediably broken or that your computer will even still boot (and that happened to me more than a few times); it’s more akin to Arch’s testing repos.
I would theorize that Sid is not as widely used compared to default Arch. They’re essentially getting more test users for free as a result.
Not during the lead-up to a Debian release, and not during a binary transition.
Don’t forget the package formats :)
I’ll provide 24/7 support if you do :)
EDIT: And despite comments, I’d like to point out that all Arch infra runs on Arch. All transparently managed with ansible. https://gitlab.archlinux.org/archlinux/infrastructure
The Steam Deck is Arch-based afaik. I guess the devs were familiar with it.
I was equally surprised when I learned that ChromeOS is based on Gentoo.
ChromeOS isn’t based on Gentoo. That’s a common misconception.
ChromeOS uses Gentoo’s package manager Portage as part of its (extremely convoluted) build system.
Would it be fair to say that Chrome OS started as a Gentoo distro? And then with time it became its own full-fledged distribution?
I’d say it started as a Chromium build and got enough added on to be an OS.
ChromeOS was never based on Gentoo. Early builds, I believe, were actually Ubuntu-based.
https://www.chromium.org/chromium-os/packages/
They are calling Gentoo the upstream. That tells me that it is the upstream.
I moved to using uuid instead. And if I want to sort rows by creation date I add a column containing this date (as timestamptz). I don’t have to deal with collisions on my scale though, but if I had I wouldn’t use globally incremented sequences but rather snowflake IDs.
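A sketch of that layout in PostgreSQL; the table and column names are made up, and gen_random_uuid() is built in since PostgreSQL 13 (earlier versions need pgcrypto):

    CREATE TABLE items (
        id         uuid        PRIMARY KEY DEFAULT gen_random_uuid(),
        created_at timestamptz NOT NULL DEFAULT now(),
        payload    text
    );

    -- Sorting by creation date uses the explicit column, not the primary key
    SELECT id FROM items ORDER BY created_at DESC;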
You can build Rustls today with experimental pure-Rust crypto providers: https://docs.rs/rustls/latest/rustls/#third-party-providers
Be cautious that exec can fail in the shell script for various reasons, so it is better to add set -e or to exit with an error message after exec.
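A sketch of the exit-with-a-message option; the binary path is hypothetical, and the alternative is simply a set -e near the top of the script:

    #!/bin/sh
    # ... environment setup ...
    exec /usr/local/bin/myapp "$@"
    # Only reached if exec itself failed (binary missing, not executable, ...)
    echo "error: failed to exec /usr/local/bin/myapp" >&2
    exit 1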
http://w.h.a.t.e.v.e.r.archive.ubuntu.com/ubuntu/
Are you thinking of https://www.web.sp.am/? GPTBot was causing problems there a few months ago.
Yes, that’s it! Incredible memory you have
It was fairly normal to see this on Linux mirrors back in the day, though.
I don’t know how much faith I have in Suyu in particular.
The first issue they got on their repository was someone correctly pointing out that you can’t relicense from GPL to MIT by just swapping the license file.
I think “this issue is not actually an issue to the owner of the repository” disagrees with “it discredits the owners.” No?
In this case the maintainer switched back from MIT to GPL so the issue is moot.
I mean this specific issue? Sure.
Programmers not understanding how Copyright works? That’s probably an eternal problem.
Considering the stated goal of the project, playing fast and loose with copyright is on brand!
But they didn’t rewrite the history, which is regrettable. They relicensed from GPLv2+ to GPLv3 or GPLv3+; it’s not clear which.
which other forks have you seen? I haven’t really been paying attention, so this is the first one I’ve seen
There’s also Nuzu.
And Luzu!
The name is misleading. This is what httpd 2 is: https://httpd.apache.org/
In case anyone else is curious about this, here is the PR that adds this feature.
I want to get away from the “compatibility” framing with regard to shell, see this comment:
https://lobste.rs/s/plmk9r/new_names_for_oil_project_oil_shell#c_tolgcm
I thought this was going to talk about the different ways to send HTML or plaintext emails.