I’m in the “knows enough to be dangerous” camp of programming: it’s not my day job, but I do enough of it as a hobby to get an intuitive sense for what’s good and what’s bad. Interesting that I managed to reach largely the same conclusions on these topics as someone who has been doing it far more professionally than me!
The same opportunity exists regardless of what Oracle does. The problem is that there isn’t a better name than JavaScript, because that is what everyone has called it for the last quarter century.
I feel like there is a FAR better chance of stopping the use of “SSL” in favor of “TLS.” We’ve had TLS for 25 years and all versions of SSL have been formally deprecated for 10. Aside from sheer habit and the names of some software and libraries that pre-date TLS, there is no reason for people to keep saying SSL when they mean TLS.
And yet, it likely will never happen. When we look at JavaScript, the situation is thornier by an order of magnitude or more. It is a MUCH deeper and more entrenched name. As any middle-schooler will attest, names tend to stick, even when we don’t want them to.
This is kind of beside the point, but I do not understand why the IETF went through a phase of renaming protocols that they adopted, e.g., SSL -> TLS, Jabber -> XMPP, BXXP -> BEEP (probably others I forget). At least they seem to have renamed things less in recent years.
My son’s school has a greenhouse and he wanted to be able to monitor the outside and inside temperatures. So we are currently working on an ESP8266 with two thermometers connected which will talk to a Raspberry Pi running Home Assistant OS. I remember dabbling with this stuff years back but thanks to the hard-working hackers behind this stuff, now everything is just dead simple and it all Just Works. If I didn’t have a thousand other projects in the queue already, I would love to go whole-hog on home automation. (It’s something I’ve been dreaming about for decades.)
Honestly, I recommend not going whole-hog and instead dipping your toe in.
It’s the perfect hobby to have on the side over the long term and only dabble with here and there as time permits. My journey has basically been:
Get the Docker version of HomeAssistant running (sketch below)
Connect whatever devices get auto-detected / build a crappy dashboard
Install some community integrations for devices that didn’t work out of the box
Switch over to HAOS
Set up Matter integration / buy first Matter device
Create crappy voice assistant
???
It’s taken me about a year and a half to get to step 7. I probably average ~15-30 minutes / week chipping away at it. As a parent with almost no free time, it’s been a great little hobby!
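For anyone wanting to try step 1, it’s roughly a one-liner these days. A minimal sketch; the config path and timezone are placeholders to adjust:

```sh
# Run the official Home Assistant container with host networking so
# auto-discovery (mDNS/SSDP) can find devices on your LAN, and keep
# the configuration on the host so it survives container upgrades.
docker run -d \
  --name homeassistant \
  --restart=unless-stopped \
  --network=host \
  -e TZ=Europe/Berlin \
  -v /opt/homeassistant:/config \
  ghcr.io/home-assistant/home-assistant:stable
```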
Does anyone have any insight into this, and what’s the latest going on at VMware after the acquisition? Is this a good thing for Fusion/Workstation users, or just the first step on a long road of gradual decay toward an eventual unsupported dusty death?
My suspicion is that Broadcom is winding down these products but has to maintain them for at least a few years yet due to existing support contracts. Someone in middle management thought it would improve the company’s image a little bit to release them for “free” while they are still supported.
(The company I work for has a lot of vSphere, which was already eye-wateringly expensive before VMware was purchased by private equity. Earlier this year when it came time to renew the support contracts, they literally tripled the price. Our company said, “no thanks,” and we are now running thousands of vSphere hosts and a bunch of vCenters with zero support while whole teams scramble to transition our services to a mix of OpenStack and Kubernetes.)
I’m not a heavy user of either product, but Broadcom previously added some kind of non-commercial license for both that was useful for me for playing with retro operating systems and checking/improving various FreeBSD emulated device drivers. From a casual perspective it seems like they are still working on both, and neither seemed to be the main focus of VMware before the acquisition, so there’s been no perceptible change in terms of quality (which is merely acceptable).
It’s a little crazy this didn’t happen a long time ago, under VMware, to try to keep some level of relevance for the underlying hypervisor and device model. People seem to think Broadcom is the only greedy company, but VMware was always a very greedy company.
I really don’t like the terminology “soft link” and “hard link” because it obfuscates what is going on and that ends up becoming a source of confusion, as the author demonstrated by the need to write the article.
A “symbolic link” is a file that is just a reference to a different filename. It’s no more complicated than that. A “hard link” doesn’t actually exist as a thing, it’s just what we happen to call the situation where two different filenames point to the same inode. Once you know these two things, it’s easy to reason about when you can (or should) use one or the other.
For extra credit, learn about reflinks, which are similar to “hard links” except that modifying one of the filenames creates a new copy of the file instead of modifying the data referenced by the inode. (Sadly they are still not well supported by many applications that would benefit from their use, such as rsync.)
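To make all three concrete, a quick shell illustration (filenames invented; reflinks additionally need filesystem support, e.g. btrfs or XFS):

```sh
ln -s target.txt sym.txt       # symbolic link: a tiny file containing the path "target.txt"
ln target.txt hard.txt         # "hard link": a second directory entry for the same inode
ls -i target.txt hard.txt      # both names show the same inode number
cp --reflink=always target.txt cow.txt   # reflink: shares data blocks until either side is written
```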
The 7th Edition manual (which was before symbolic links were a thing) says “A link is a directory entry referring to a file; the same file […] may have several links to it.”
When symbolic links were introduced, the man pages for ln(1) and link(2) were changed to describe non-symbolic links as “hard links”. The “hard” term sort of makes sense because the link can’t be broken without being removed entirely, unlike a symbolic link.
It’s reasonable to dislike the terminology, but “hard” is the standard terminology and I don’t think there are any alternatives.
I tried Alpine out on some of my servers, and still run it on some of them, but the lack of unattended upgrades limits where I use it. No unattended upgrades is fine if you have other ways of automatically handling security updates, but at home I’m not always going to manually apply them.
I’m not super familiar with Alpine, but wouldn’t unattended upgrades simply be a matter of a cron job that runs a shell script that checks for package updates, installs them, and then reboots the host?
Sure, but when you’re new to an OS there are details that are hard to get right there. How do you disable input prompts for the package manager? How do you only reboot for security updates? Can you just schedule the reboot instead of doing it automatically? What happens if you have an error in the script and it’s not actually updating, ever?
There’s a reason it’s built-in to other operating systems.
I use apk upgrade --available. There are all kinds of automatic ways to do it. I just have a cron job that does it and reboots the servers every night.
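In sketch form (untested, and the nightly reboot is a policy choice, not a requirement), something like this dropped into Alpine’s periodic cron directory does the job:

```sh
#!/bin/sh
# /etc/periodic/daily/auto-upgrade -- note: no ".sh" extension, since
# Alpine's run-parts skips scripts that have one; make it executable.
set -eu
apk update                 # refresh the package indexes
apk upgrade --available    # sync all installed packages with the repos
reboot                     # crude; schedule or condition this to taste
```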
I have a few questions for those who have been experimenting with self-hosting their own LLMs.
To set the context (hurr): I am someone who uses LLMs a few times a day. I bounce around between chatgpt.com and ddg.co/chat depending on my mood. I generally use an LLM as a substitute for Google (et al) because web search engines have become borderline useless over the last decade or so due to the natural incentives of an ad-based business model. I find that the LLMs are correct often enough to offset the amount of time I spend chasing a non-existent made-up rabbit hole. I treat them like Wikipedia: good as a starting point, but fatal as a primary source.
But I still don’t know much about a lot of the concepts and terms used in the article. I know that the bigger a model is, the “better” it is. But I don’t know what’s actually inside a model. I only sort of get the concept of context, and I have no idea what quantization means outside of the common definition. This is not meant as a critique of the article, just to state my level of knowledge with regard to AI technology. (Very little!)
That said, hypothetically let’s say that the most powerful machine I have on-hand is a four-year-old laptop with 6 CPU cores (12 hyperthreads), 64 GB of RAM, and no discrete GPU. It already runs Linux. Is there a way I can just download and run one of these self-hosted LLMs on-demand via docker or inside a VM? If so, which one and where do I get it? And would it be a reasonable substitute for any of the free LLMs that I currently use in a private window without a login? Will it work okay to generate boilerplate or template code for programming/HTML/YAML, or do you need a different model for those?
I have heard that running an LLM on a CPU means the answers take longer to write themselves out. Which is okay, up to a point… waiting up to about a minute or two for a likely correct and useful answer would be workable but anything longer than that would be useless as I will just get impatient and jump to ddg.co/chat.
One way to think of a model is that it’s effectively a big pile of huge floating point matrices (“layers”), and when you run a prompt you are running a HUGE set of matrix multiplication operations - that’s why GPUs are useful, they’re really fast at running that kind of thing in parallel.
A simplified way to think about quantization is that it’s about dropping the number of decimals in those floating point numbers - it turns out you can still get useful results even if you drop their size quite a bit.
I suggest trying out a model using a llamafile - it’s a deviously clever trick where you download a multi-GB binary file and treat it as an executable - it bundles the model and the software needed to run it (as a web server) and, weirdly, that same binary can run on Windows and Mac and Linux.
Is there a way I can just download and run one of these self-hosted LLMs on-demand via docker or inside a VM? If so, which one and where do I get it? And would it be a reasonable substitute for any of the free LLMs that I currently use in a private window without a login?
I’ve used a couple of projects to run local models; both of them work on my Ryzen CPU and on my Radeon GPU.
With ollama, there are a few web UIs similar to ChatGPT, but you can also pipe text to it from the CLI. Ollama integrates with editors such as Zed, so you can use a local model for your coding tasks.
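A rough sketch of the docker route (the model name is just an example; on CPU-only hardware you’d pick a small quantized model):

```sh
# start the ollama server, keeping downloaded models in a named volume
docker run -d --name ollama -v ollama:/root/.ollama -p 11434:11434 ollama/ollama

# fetch a small model, then pipe a prompt to it straight from the CLI
docker exec -it ollama ollama pull llama3.2
echo "Write a minimal docker-compose.yml for nginx" | docker exec -i ollama ollama run llama3.2
```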
I’d say most public channels on OFTC and Libera are worth checking out if the channel name lines up with stuff you’re into. Some channels aren’t as active as they used to be back in the old freenode days, but the ones with like 200+ users are usually still pretty active.
Ugh, more advocacy of non-root SSH (why?) and Fail2Ban (ewwwwwwwww).
Yes, disabling password SSH is a good idea.
Yes, firewalls can be a healthy part of a balanced security breakfast, although I wouldn’t recommend them until you get to the scale of hiring employees, having security compliance standards, that kind of thing.
But seriously, I think the risk of giving up on ever using linux because the permissions are too frustrating is way more significant for new users than the risk associated with driving around as root all the time.
Also, I’d like to echo the sentiment that SSH’s entire security model is predicated on the ASSUMPTION that the user has verified the server’s SSH host public keys ahead of time (i.e., it’s up to you whether “trust on first use” means “verify on first use” or “hope on first use”).
This is definitely a bit of a tinfoil hat thing, but IMO it’s worth pointing out in an article like this.
OK, but do you really think that anyone is going to make their username by hexdump-ing /dev/urandom?
And SSH should be sealed off against credential stuffing anyways, password SSH should not be used.
Non-root SSH is an issue in itself because it significantly complicates the usability of the CLI. And considering that this article seems to be aimed at new-ish Unixy CLI users, usability challenges are a much bigger security risk!!
I.e., with non-root login, the user is more likely to fail at editing /etc/ssh/sshd_config and never be able to disable password SSH login because they can’t navigate the permissions issues. Having their first name or screen name as their username is probably not much of a credential stuffing defense compared to disabling password SSH in the first place.
Basically what I’m saying is, this whole article should really just be about disabling password SSH, complaining that cloud providers don’t always provide the SSH host public keys and hashes, and encrypting the SSH private key on your client machine with a passphrase.
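For reference, that core advice boils down to a few lines of sshd_config (a sketch; option names per current OpenSSH, and check with sshd -t before restarting):

```
# /etc/ssh/sshd_config
PasswordAuthentication no           # keys only
KbdInteractiveAuthentication no     # no keyboard-interactive fallback
PermitRootLogin prohibit-password   # root with keys only; "yes" or "no"
                                    # depending on your side of this thread
```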
I’ve never heard anyone describe the permissions model of using sudo to perform administrative tasks as complicated.
If someone is struggling with that, the advice is to learn about it not to disable it. It’s useful for auditing your own activity if you have to consciously type sudo to perform certain tasks.
There are people who also seem to advise just running sudo for everything, but if you don’t understand why you’re using sudo then you should be pausing to think and asking someone.
Why should we normalise bad practices because people are struggling to bother to learn the good practices?
For the same reason we normalized using a GUI: It’s easier. It takes less time to learn, and it takes less time to do stuff once you have learned how to do it.
I’m not even convinced it’s a bad practice. It depends. If all I do is log in, edit docker-compose.yml or some systemd service unit file and then restart a service, why does it matter whether I’m logging in as root or not?
“user could more easily accidentally delete /lib/ or something if they’re always root.”
I can see that, but when we think about “security” as managing risk, for a personal server IMO the primary risk comes from the admin themselves. I think security and usability are two sides of the same coin there. A system which is easier to use will be less risky because the user will have more accurate & relevant information about what’s going on. They will be less likely to be deceived or overwhelmed with information that’s not relevant to what they’re trying to achieve. They’ll probably be more confident and in a better mood, too.
Mode switches in CLIs are notoriously hard for new CLI users to cope with (How to exit vim?? How to exit the pager??). sudo is another mode switch, and it takes mental effort (and critically, experience) to fully understand it and not be surprised by aspects of it. (Why can I run program x as my user but when I try to run it under sudo, it doesn’t work or can’t be found? I used sudo -i, now why is half of my shell history missing???)
So mostly what I’m saying is, at least for my imagined use-case of installing server apps on a VPS and then using that software over the network (rather than logging into the server and using its CLI environment to mess around and do development), the place where users and permissions matter is in how the apps run, not how the user logs in.
So if we can get a massive usability win from logging in as root (removing an entire mode, PLUS mostly removing permissions issues) it’s definitely worth the marginally increased risk of accidental rm -rf /.
and it takes less time to do stuff once you have learned how to do it.
For most tasks I don’t think there is a GUI which is more time efficient than a terminal.
I’m not even convinced it’s a bad practice. It depends. If all I do is log in, edit docker-compose.yml or some systemd service unit file and then restart a service, why does it matter whether I’m logging in as root or not?
If that’s all you are using a system to do, get someone else to manage things and expose a docker-compose.yml file upload and a “restart service” button.
The solution to letting people do things on Linux when they don’t want to learn Linux is not to teach them how to disable security features. It’s to present them with limited tools which enable them to do what they need to do.
I can see that, but when we think about “security” as managing risk, for a personal server IMO the primary risk comes from the admin themselves. I think security and usability are two sides of the same coin there. A system which is easier to use will be less risky because the user will have more accurate & relevant information about what’s going on. They will be less likely to be deceived or overwhelmed with information that’s not relevant to what they’re trying to achieve. They’ll probably be more confident and in a better mood, too.
If someone gets in a bad mood because they have to run sudo to maintain their server then they have no business managing a server at that low a level. Maybe give them something like cpanel but more modern.
Mode switches in CLIs are notoriously hard for new CLI users to cope with (How to exit vim?? How to exit the pager??). sudo is another mode switch, and it takes mental effort (and critically, experience) to fully understand it and not be surprised by aspects of it. (Why can I run program x as my user but when I try to run it under sudo, it doesn’t work or can’t be found? I used sudo -i, now why is half of my shell history missing???)
Maybe they are difficult; I can’t remember struggling with them. Regardless, it’s a one-time learning cost. You’re advocating in favour of permanent security weaknesses to mitigate a one-off learning obstacle.
So if we can get a massive usability win from logging in as root (removing an entire mode, PLUS mostly removing permissions issues) it’s definitely worth the marginally increased risk of accidental rm -rf /.
Every time I’ve seen someone actually use a machine as root, they end up messing up permissions in various places and weakening things such that for example a remote code execution as a lower privileged user could be abused to escalate to root. This is a real world security weakness which can be easily introduced by someone logged in as root without a real understanding of the permission model.
Moreover, every time I’ve seen someone use a machine as root without the necessary awareness of why this is generally considered a bad idea, they’ve never really learned the permission model. They still reached for “sudo” to run any failing command. It’s not a good idea to cater for people like that, because if they’re unwilling to learn then making it so that they won’t have to learn won’t make things better.
Some gates are best kept locked if you’re too impatient to read the instructions on where to find the key.
Bad practice implies some kind of consensus. One guy’s blog post isn’t equivalent to consensus.
Yes, sudo is over-complicated, that’s why I personally don’t use it in favour of doas (which the author misrepresents as something which “still implements most of a sudo-style rules language” which is so far from reality that I have to assume the author doesn’t actually know just how horrific the sudo rule language is).
But bad practice? No, definitely not. In your informal environment sudo is probably configured poorly and the auditing capability it offers isn’t actually fed into anything which could analyse it. But in any non-amateur setup it’s common to actually use the auditing and the permissions rules of tools such as doas and sudo to implement proper access control.
I don’t understand what disallowing logging in as root is supposed to achieve.
What’s the real difference between cracking into a sudo-enabled account vs root? If you have SSH password login enabled, you’re screwed either way, because if you know the password, you can log in and use sudo.
This blog post rightly suggests disabling password auth, so that’s not a threat in our case.
So if you disable logging in as root, you will now need both an SSH key and the password to conduct administrative actions.
But is cracking an SSH key even a practical threat to worry about in the first place? (Assuming your private key is encrypted with a passphrase.)
I don’t understand what disallowing logging in as root is supposed to achieve.
[…]
But is cracking an SSH key even a practical threat to worry about in the first place?
Also said this in another comment, but if there’s an OpenSSH vuln (or backdoor like xz) the attacker doesn’t instantly get root. ssh user followed by su root can also be seen as 2FA for root.
You should always log into a host with the least amount of privileges possible and only escalate to superuser when you actually need it. I quite often log into my personal hosts just to check something and don’t need root privileges to do it. If you are root all the time, you needlessly increase the “blast radius” if you accidentally do something stupid.
In a production environment, all actions should be logged for auditing purposes and no one should be allowed to become root without an emergency break-glass procedure that sends out an alert. Instead, users always log in with their own unique username and use sudo (or something like it) to run commands with escalated privileges.
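As a sketch of the mechanics (the group name and log path are invented; real break-glass alerting would live in your monitoring stack, not in sudoers):

```
# /etc/sudoers.d/admins -- always edit with visudo
%admins ALL=(ALL:ALL) ALL
Defaults logfile=/var/log/sudo.log    # per-command log
Defaults log_input, log_output        # full session I/O, replayable with sudoreplay
```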
all actions should be logged for auditing purposes and no one should be allowed to become root without an emergency break-glass procedure that sends out an alert. Instead, users always log in with their own unique username
Are you thinking of a more “enterprise” context than an individual’s “$5 VPS … with a budget VPS provider”?
Well maybe the key is not encrypted with a passphrase, or maybe it gets leaked and the passphrase guessed, or maybe you misconfigured something and accidentally allowed password logins so the root password can be brute-forced, or anything else.
There’s no disadvantage to disabling root logins in the majority of circumstances - most times I’ve wished for it have been oops moments where I’ve screwed something up and need to rescue, and there have always been better alternatives such as using the provider’s console.
I already want that separate user (so that I don’t make a mess by accident). Then, since I’m already there, I’ll just also flip PermitRootLogin to no, so that a script-kiddy doesn’t get lucky by accident.
I know the author touches on this, but for the sake of my own soapbox: “One process per container” was never an actual rule, just dogma perpetuated by people with little real-world devops experience. Yes, MOST containers need only run one process (an application server, typically), but occasionally you need to run something where it is either impossible or needlessly fragile to break it up into multiple containers. In particular, if your application might spawn its own processes and does not contain code to do typical PID 1 things (reaping zombies, forwarding signals), it should be managed by something that can, like tini, or this shell script.
The actual rule is, “a container should do only one thing.” This is entirely analogous to modularity in code, where a function (ideally) only does one thing but occasionally it makes more sense to just make it do two when the alternative is torturous design.
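For illustration, here’s what the tini route looks like in a Dockerfile (a sketch; myapp is a hypothetical binary that spawns child processes):

```dockerfile
FROM alpine:3.20
RUN apk add --no-cache tini
COPY myapp /usr/local/bin/myapp       # hypothetical app that forks children
# tini becomes PID 1: it reaps zombies and forwards signals to the app
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/usr/local/bin/myapp"]
```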
I think 20 years ago the Linux Desktop worked better than today. KDE 3 was really nice. Audio still worked. Wifi wasn’t as much of a must-have yet. There were some companies porting games to Linux. Distros weren’t constantly tracking and trying to monetize you. There was no in-fighting regarding init systems. You didn’t have that mess of package manager + snap + flatpak. Third-party stuff usually compiled by default with ./configure && make && make install. Even Skype just worked. The community was a lot less made up of self-promoters. Instead you got quick, competent help, just like as a Windows user at the time.
People’s main complaint was that there wasn’t any good video editing and graphics software. Wine worked okay-ish. (for some stuff one used commercial offerings)
The only thing that was a bit messy, depending on the distribution, was NVIDIA drivers. Maybe Flash videos, but I think those actually worked.
It was even so good that, without much technical background or even English skills, one could not just use Linux, but also get an old computer, install NetBSD, and happily use it on the desktop.
I think the average today (let’s say Ubuntu and Manjaro, for example) is that you’ll spend months gluing something together that you can maybe live with. I think parts of the Linux desktop are using design principles that are technically nice but give users a hard time. An example is that creating something like a shortcut used to be easy in desktop environments. Today there is a standard for applications, which is nice, but it bugs the user who just wants to create a shortcut. I am not sure what happened to GUIs for creating .desktop files.
I don’t know if it’s fair to say that it was better. Lots of modern problems have their back-in-the-day equivalents. You didn’t have to fight with package manager + snap + flatpak but pre-yum RPM was a pain. Lots of third-party stuff was compiled with imake and haha good luck running it on anything except Red Hat Linux 9.0 or whatever. Skype just worked insofar as the microphone worked, which wasn’t always a given.
What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows. We are, at best, about 10-15 years into the lifetime of current desktop technologies, which is why, adjusting for the significantly increased complexity, we’re not much further in terms of stability and capabilities than where we were in 2007 or so.
I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.
20-25 years ago, cool kids dreamed of writing a better window manager, or file manager, or browser, because that’s what was hot. 10-15 years ago, cool kids were writing phone apps; if they used Linux, they wrote web apps. So there weren’t as many fresh ideas (and fresh heads) going into desktop development. That made desktop development slower, as platform complexity grew in every aspect, from font rendering to hardware-accelerated drawing, and much more quickly than spare development time did; it also made it more divisive.
What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows.
It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.
Lots of third-party stuff was compiled with imake and haha good luck running it on anything except Red Hat Linux 9.0 or whatever.
Huh? What software?
Skype just worked insofar as the microphone worked, which wasn’t always a given.
As mentioned, never had a problem with that, ever. I mean it. I met my girlfriend online during that time. Skype calls, even very long ones, never had a problem, and she was the first person I used Skype with. Since that’s how I got to know my girlfriend, I remember that time vividly. Meanwhile I constantly run into oddities with Discord, Slack and Teams, if they even work at all. Again, not even using Bluetooth. Just an audio jack, so the setup should be simple.
I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.
Not intending to burn existing technology. I have no reason to. Just stating that things that used to work don’t work anymore. That can have lots of (potentially good) reasons. I just also think that the idea that software is magically better today is wrong. And a lot of claims of “you just remember wrong” are very, very wrong. Same thing with games, by the way. I put that to the test. Claims like “you are just older and as a child things are more exciting” are simply wrong in my case. Just like going back to old utilities. I have no interest in putting new software down in any way. I hate that old vs. new software farce. It’s simply whether stuff works or doesn’t.
I’d argue that there is currently not much going on on the Linux desktop side. There are good reasons. People don’t use desktops as much anymore, and desktops aren’t their main focus. People have phones, apps, smart TVs, etc. Lots of people who would have run Linux back in the day now run macOS. It is a known fact that fewer people work on desktop environments, and when stuff gets more complex, one needs to support a lot more things. On top of that, all the development moved into the browser over the last two decades. People don’t really create desktop applications, and by extension desktop environments, anymore.
So of course if developer focus shifts, other stuff will be better in the open source world. Non-tech people can run LLMs on their home PCs when they invest 15 minutes. People share their video collections. There is open source social media that is actually used. Graphics software is a ton better. With Godot there is a really great game engine. People create programming languages. LLVM is amazing. There are finally hobby OSs (Serenity, etc.) again. All of these are really great.
Just that my personal desktop experience was better back then and I think a really big reason for that is that the focus of a desktop was more narrow.
It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.
You can always not use Snap or Flatpak today. Doesn’t mean no one does, just like it didn’t mean no one used RPM back in the day, too. I don’t use either and I’m definitely happier with my packaging experience than back in 2004-2006-ish (which I’m guessing is the period you mainly have in mind, based on Skype?). (Edit:) I didn’t use RPM-based distros back then, either, so it’s not because of yum :-).
Huh? What software?
The ones I remember most vividly are Maya, Matlab, and… pretty much any GIS software at the time. Same as above – maybe you didn’t use them, doesn’t mean no one needed them. Any desktop will work flawlessly if you only pick applications that happen to work flawlessly :-).
That makes sense. You are right, but you brought up RPM, that’s why my response was about how I never understood it. :-)
Probably I got lucky with software then. I wasn’t using any of this. I got into GIS a bit later, so it looks like I just avoided that, or package managers abstracted it away for me.
I think this ritual burning of all existing technology …
Not intending to burn existing technology.
I think x64k was referring to the desktop environment developers as “burning … existing technology” by rewriting their components rather than polishing the old, working ones that you liked.
For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release,
The interesting thing is that macOS has still been using the same core tech for the desktop environment during that time period (Quartz, Cocoa, etc.), though recently things like SwiftUI were introduced. I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their resumés.
Though I think that an important difference between KDE and GNOME is that KDE development is also more driven by Qt. At some point the owner (I think it was still Troll Tech at the time) of Qt said: Qt Widgets are now legacy and won’t see new development, and KDE had to rebase on QML/Quick, resulting in Plasma.
I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their resumés.
It’s a combination of factors, really. Nautilus, for example, was literally written by ex-Apple people, and had many similarities both to file managers of that era and to Finder in particular. So there was certainly no shortage of people to get it right the first time.
IMHO a bigger factor, especially in the last 10-15 years, is that FOSS desktop development is a lot more open-loop. People come up with designs and they just run with them, which is a lot easier to do when you don’t have to worry about supporting multiple install bases, paying customers ditching you, and so on.
That’s very useful in many ways but one unfortunate side-effect is that, for all their combative attitude towards closed platforms, major FOSS desktop projects relentlessly chase every fad in closed platforms, too, including the ones that just don’t make sense, or in interpretations that just don’t make sense, like app stores (and I’m not referring to Flathub here; ironically, I think that’s the only platform of this sort that actually makes some sense, at least there’s a special packaging and distribution technology behind it; I’m thinking more of things like the Ubuntu Software Center). App store-like platforms could fulfill a very useful social role, serving e.g. as platforms for donations, bug bounties, feedback etc. – but instead they act just like their closed-source counterparts, with nothing but votes and ratings, to the point where they’re just Synaptic with votes and weird package management bugs.
~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines, but I still had to manually write my touchpad config. For audio, ALSA was the thing, but also ARTS was a thing, and also ESD was a thing, and OSS was still a thing, and audio worked if you only had one program or sound server having exclusive control of your sound device because anything else was a slow descent into madness.
My goggles are absolutely not rose-tinted, and I’ll recommend a current-day bog-standard Ubuntu install any day of the week because of the sheer size of the user base and the amount of info/software targeted towards that ecosystem. And if you are tired of Canonical going their own way every other year, Debian is a fine replacement that mostly works the same way.
~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines
Oh, yeah, that was a major inflection point in the community as well, as after that point there was no point in running xf86config/xorgconfig, so it suddenly became impossible to shame the noobs who used that instead of editing XF86Config by hand like Real Men.
For audio, ALSA was the thing, but also ARTS was a thing, and also ESD was a thing, and OSS was still a thing, and audio worked if you only had one program or sound server having exclusive control of your sound device because anything else was a slow descent into madness.
ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling bug for newbies.
If ARTS couldn’t claim exclusive control over the soundcard – because, say, XMMS had exclusive control over it through its ALSA output plugin – it didn’t play anything, but it did continue to buffer whatever you sent to it, and would begin to play it as soon as it could claim control over the soundcard. I learned that when I began using kopete (which, for a while, had the best Yahoo! Messenger support) on a fresh install. I hadn’t changed XMMS’ output plugin to ARTS, so none of the “Ping!“s made it to the sound card…
…until I stopped XMMS, at which point ARTS faithfully played every single Kopete alert it had received in the last hour or so (or however long its ring buffer was).
This was actually worse than it sounds – in this case it was just a particular configuration quirk, but the real problem was that not all software supported all sound servers, or at least not very well. E.g. gAIM (which later became Pidgin) supported ARTS but it was a little crashy, and in any case, ARTS support became mainstream among non-KDE software relatively late in the 3.x cycle. Even as late as 3.2, I think, it was just more or less a fact of life that you had one or two applications (especially games) where you kind of lived with the fact that they’d take over your soundcard and silence everything else for a while. Truly a golden age of desktop software :-).
Some things the younger generation of Linux users may not remember:
Wine ran a bunch of games surprisingly well, but installing games that came on several CDs (I remember Jedi Academy) involved an interesting gimmick. I don’t recall if this was in the CD-ROM (ever seen one of those :-)?) drivers or at the VFS layer, but in any case, pre-2.6 kernels took real care of data integrity so, uh, you couldn’t eject the CD-ROM tray if the CD was mounted. And you couldn’t unmount it because the installer process was using it. At some point there was a userspace tool that took care of that (I forgot the name, this was 20 years ago after all). Before that, yep, you had to compile your own kernel with some cool patches. If you had the hard-drive space it was easier to rip the CDs (that was kind of magic; you could just dd or cat from /dev/cdrom to a file) and mount all of them (see the sketch after this list), but not everyone had that kind of space.
If you had a winmodem, you were usually doomed. However, “real” modems were really expensive and, due to the enormous success of winmodems, they got kind of difficult to find by the early ’00s.
Hardware support lag was a lot more substantial than today. People really underestimate how important community growth turned out to be. When I finally got a “real” computer I ran it with the hard drive in compatibility mode because it took forever for Linux to get both proper S-ATA support and support for… I think ICH5 it was?
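(The CD trick mentioned above looked roughly like this, from memory:)

```sh
dd if=/dev/cdrom of=disc1.iso        # rip the disc to an image file
mkdir -p /mnt/disc1
mount -o loop disc1.iso /mnt/disc1   # loop-mount the image; repeat per disc
```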
Actually, I loved that behavior and would use it deliberately to queue things. I resisted moving to ALSA for quite some time because I liked how things blocked. Oh well.
(I started with Linux in 2004 btw, not before. It was solidly OK; I think I dodged much of the pain people talk about.)
Well, sure, it was fun if it was the IM client that blocked. Not as fun when it was an actually important alert, or when the browser queued some earbleed noise from a Flash intro, or when you couldn’t listen to music or watch a movie until you quit the offending app.
ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling bug for newbies.
ALSA softmixing STILL doesn’t work. I tried running a server-less setup a few years ago. Applications just took exclusive control anyway.
It works just fine, I still use it today. You might need to configure it though; distro configs based on PulseAudio tend not to enable it since they assume PA is doing it anyway.
/etc/asound.conf will define things based on dmix (for playback) and dsnoop (for recording) if ALSA mixing is enabled, and pcm.!default will refer to that pseudo-device instead of the hardware.
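Something along these lines, if anyone wants to try it (card 0 and the ipc_key values are arbitrary assumptions):

```
# /etc/asound.conf -- software mixing without a sound server
pcm.!default {
    type asym
    playback.pcm "dmixer"
    capture.pcm "dsnooper"
}
pcm.dmixer {
    type dmix
    ipc_key 1024
    slave.pcm "hw:0,0"
}
pcm.dsnooper {
    type dsnoop
    ipc_key 1025
    slave.pcm "hw:0,0"
}
```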
It’s been a while so I don’t remember the details, but I think what happened was that apulse wasn’t really working, so I used the built in ALSA support in Firefox (code is still there last I checked, just disabled in the default build config) which took exclusive control.
I don’t know how that’s handled post-PulseAudio or how well it works. But I am 100% sure it worked. The only reason I stopped using it was that PulseAudio became a dependency pretty much everywhere so yanking it out and dealing with the fallout was about as much trouble as using it.
I find this to be just not true. While Linux today is undoubtedly composed of more complex subsystems like PipeWire and systemd, it allows you to effortlessly use Bluetooth headsets, play high-end games with comparable performance, and even (and this was unthinkable back then) do music production with incredibly full-featured software. Maybe the simplicity of yore was enjoyable, but Linux today is a lot more capable.
I think you have a pretty skewed picture and I’m happy it worked that well for you back then.
I certainly had linux on the desktop in the late 90s, but it just wasn’t great. Our shared computer pool at university (I started in 2003) worked perfectly fine, but it was curated hardware and some people put in some real effort for it.
I bought my first laptop in 2004; a friend had the predecessor model, which is why I knew it was relatively OK for Linux… and yet I moved to FreeBSD because of a couple of things (one of them wifi). It just wasn’t great if you wanted to have it “just work”[tm].
Compare to today, people were kinda surprised when I said that I have no sound on my desktop, although games run at 120 FPS out of the box with the 3070. Turns out it’s a commonly known problem with this exact mainboard chipset, and plugging in the only USB sound card I ever owned… it just works. All I’m saying is that I have not had proper problems (ones that weren’t solved easily) for about 10 years - but earlier than like 15 years ago, everything took a lot of time to get running smoothly… that’s my experience.
FWIW, I agree much more with your original post than with the comment I replied to.
I guess my main point is that while I’m not averse to configuring stuff, I’ve always held the view that you should be able to do it in a reasonable time with a modest amount of knowledge. And very often the drivers simply weren’t there, so without switching hardware you were just out of luck, and it was not rare.
And what makes you think that? Sounds a bit like an idle claim. ;)
I certainly had linux on the desktop in the late 90s
20 years ago was 2004, not the 90s. I used Linux as my main OS back then. Was shortly before I had a long run of NetBSD. Double checked some old emails, messages.
but it was curated hardware
Mine was a system from a local hardware store’s own brand. That NetBSD was installed on a computer from a government organization sale.
Compare to today, people were kinda surprised when I said that I have no sound on my desktop
I do. Today audio burns through CPU cycles, is wonky, has buffer underruns out of the box on some systems, randomly kills YouTube, and comes out of my speakers rather than my headphones (audio jack, so not even Bluetooth) when I reboot. I never used to have any audio problems back then. Not with games, not with Skype.
games run at 120 FPS
Meanwhile my games (and I played a lot of games back then, being young) need stuff like gamemoderun to be playable. Back then Enemy Territory (with so many mods), Majesty, Neverwinter Nights, etc. worked out of the box.
Of course it’s my experience, and my view was shared by about basically everyone I know who ran Linux on the desktop back then. I didn’t say it was terrible or your overall point is wrong. I just don’t believe that many people thought that everything was just fine and worked by default for the majority.
Maybe I’m focusing too much on getting stuff to run at all (which sucks if anyone changed anything in the kernel or in general upstream), and you’re focusing too much on problems today. It’s never perfect ;)
Now that KDE is stable again, I think we’re just about back to where we were 20 years ago. Only it’s Zoom instead of Skype. And nVidia drivers are still buggy. :)
I agree a bit, but I feel like Linux back then was generally more work - if nothing else, it required a lot more understanding of how it worked.
Very few people back then were running Linux in a VM, and they certainly weren’t using WSL or a container - most people had to install it on real hardware, and there was usually a bit of learning curve just getting the system to boot into Linux for the first time and getting drivers setup.
I’m currently running Debian on an old MacBook Pro, and it reminds me a lot of using Linux 20 years ago. Everything’s working - I can video chat with Microsoft Teams, I have accelerated 3D graphics (with NVidia until a few months ago), etc., but it was work to get it up and running. Proprietary drivers had to be tracked down, some special kernel modules had to be built from source, some magic incantations had to be added to the kernel command line, etc.
Nowadays when I have to do that, it’s a real inconvenience - 20 years ago it was just expected.
Audio works better today than it ever has on Linux.
There was no in-fighting regarding init systems.
There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.
Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?
There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.
That makes sense. I think the reason is the same as for RPM was back then though. There is that stuff that big companies introduce and the smaller players kind of have to adapt, because being able to pay means you can outpace them. Some people aren’t happy about that. It might be why they switch away from Windows for example. While I think there is a lot of people that fight some ideological fight and systemd is the target, I’d argue that even the “losers” will give you a reason. Whether it’s a good reason or not is a different question of course.
Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?
Audio and video are my primary ones. I am mainly on Arch, and I assumed I had made mistakes, only to find out that when I get a work/client laptop with Ubuntu, etc., it also has issues, even though I am not sure they are related. Different OS, different issues.
Most recent: Manjaro. Thinkpad T14s. Out of the box. My stuff moves to a different screen simply because my monitor goes to sleep. Sometimes playing videos on YouTube freezes the screen it’s playing on for a couple of minutes. Switching my audio output works sometimes, sometimes not.
I have had the freezing on Ubuntu (which was the standard by the company) before. Instead of stuff moving to the other screen I had instances, where I had graphical artifacts. And instead of audio output not switching when explicitly selecting it I had issues with it not switching when I start the system with headphones already plugged in.
20 years ago I was able to get stuff done without such issues.
I also don’t have issues on other OSs, not on Windows, not on OpenBSD. I checked during debugging.
I am not the only one with those issues; however, the fixes don’t work. No specific errors in journalctl/dmesg. People have been reporting these issues, of course. Some had other causes. Some changed window manager, some switched hardware, some switched between wayland/xorg (both ways actually), etc.
I have hopes that these will be fixed eventually, but the whole point of the above was that for my use cases 20 years ago the average Linux distribution of the time did a better job of what I expect. Of course the story might be different for other people, but I don’t think I need to mention that.
Tired of being an unpaid Microsoft support technician, I offered to install Linux on people’s computers, with my full support, or else they could never talk with me about their computer any more.
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
I’m using the word ‘we’ here because obviously I also had this approach at the time (admittedly a few years later, being a bit younger). But I’m a bit ashamed of the approach I had back then, and today I deeply reject this way of behaving towards a public who often use IT tools for specific needs and who shouldn’t become dependent on a certain type of IT support that isn’t necessarily available elsewhere.
Who are we to put so much pressure on people to change almost their entire digital environment? Even more so at a time when tools were not as widely available online as they are today.
In short, I’m quite fascinated by those who* are proud to have done this at the time, and still are today, even in the name of ‘liberating’ (often in spite of themselves) users who don’t really understand the ins and outs of such a migration.
[*] To be clear, given the tone of the blog post, I’m not convinced that its author is one of them!
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
Can we please stop throwing around the word “toxic” for things that are totally normal human interactions? Nobody is obliged to do free work for a product they neither bought themselves, nor use, nor like.
The “or never talk to me about your computer anymore” if you don’t run it the way I tell you to part, is, IMO, not normal or nice. I’m not sure I’d have called it toxic, but I’d have called it unpleasant and insensitive.
Of course nobody is obliged to do free work for a product they don’t purchase, use or like. That’s normal. But you can express sympathy to your friends and family who are struggling with a choice they made for reasons that seemed good or necessary to them, even if you don’t agree that it was a good choice. It’s normal to listen to them talk about their challenges, etc., without needing to solve them yourself. You can even gently remind them that if they did things a different way, you could help, but that you don’t understand their system choice well enough to help them meet their goals with it.
The problem is telling a friend or loved one not to talk to you about their struggles. Declining to work on a system you don’t purchase, use or like, is of course normal and not a problem.
I’ve used Linux on my own machines exclusively since 1999, and when I get asked to deal with computer problems (that aren’t related to hardware, networking or “basic computer literacy” skills) I can’t help with, I’ll usually say something along the lines of “you know, you actually probably know more about running a Windows machine than I do” - which doesn’t usually get interpreted as uncaring or insulting, and is also generally true.
If you buy a car that needs constant repairs you get rid of it and buy something else that does not require it. There is no need to sit with family/friends and discuss their emotional journey working with a Windows computer. It is a thing. If it is broken, have it repaired or buy something else.
If you buy a car that needs constant repairs you get rid of it and buy something else that does not require it.
You might. Or you might think that even though you’ve had to fix the door-closing sensor on that minivan’s automatic door 6 times now, no other style of vehicle meets your family’s current needs, and while those sensors are known to be problematic across the entire industry, there’s not a better move for you right now. And the automatic door-closing function is useful to you the 75+% of the time that it works.
And you still might vent to your friend who’s a car guy about how annoying the low quality of the sensor is, or about the high cost of getting it replaced each time it fails.
Your friend telling you “don’t talk to me about this unless you suck it up and get a truck instead” would be insensitive, unpleasant and might even be considered by some to be toxic.
It’s not an emotional journey. You’re not asking your friend to fix it. You’re venting. A normal response from the friend would be “yeah, it’s no fun to deal with that.” Or “I’d know how to fix that on a truck, but I have no idea about a minivan.”
–
edit to add: For those who aren’t familiar with modern minivans, they have error prone sensors on the rear doors that are intended to prevent them from closing on small fingers. To close the doors when those fail, it’s a cumbersome process that involves disabling the automatic door function from the driver’s area with the car started, then getting out and closing the door manually. It’s a pain, and if your sensor fails and your family is such that you use the rear seats regularly, you’ll fix it if you value your sanity.
As an erstwhile unpaid support technician, I vehemently disagree.
I fully admit that sometimes I stepped into that unpaid support technician role when I could have totally, in a kind, socially acceptable way, said “Wow, it’s miserable that your computer broke. You should talk to {people you bought it from}. I can tell you a lot about computing in general, but they’ll know a lot more about Windows than I would.”
And it would’ve been OK, because the people telling me about their problems were mostly venting, not really looking for a solution from me.
But as a problem solver, I’m conditioned to think that someone telling me about an issue is looking for a solution from me. That’s not so; it’s my bias and orientation toward fixing this kind of thing that makes me think so.
Thank you, you’ve put into much better words what I wanted to say than the adjective ‘toxic’, which was the only one I had to hand when I wanted to describe all this.
How on earth could it be considered toxic to refuse to support something which is against your values, which requires a lot of work from you, and which is unpaid, while still offering to provide a solution to the initial problem?
All the people I’ve converted to Linux were really happy for at least several years (because, of course, I was not migrating someone without a lot of explanations and without studying their real needs).
The only people who had problems afterward were people who had another “unpaid Microsoft technician” doing stuff behind my back. I mean, I had been called by an old lady because her Linux was not working as she expected, only to find out that one of her grandchildren had deleted the Linux partition and done a whole new Windows XP install without any explanation.
First of all it is obviously your choice whether you want to give support for a system you don’t enjoy and may not have as much experience with. Especially when you could expect the vendors of that system to help, instead of you.
But the second part is how you express this: You are - after all - the expert getting asked about providing support. And so your answer might lead them down a route where they choose linux, even though it is a far worse experience for the requirements of the person asking for help.
The last point comes from the second: You have to accept that installing Linux is, for those people, not something they can support on their own. If they couldn’t fix their Windows problems, installing Linux will at best keep things at the same level. Realistically they now have n+1 problems. And now they are 100% reliant upon you - the single Linux expert they actually know for their distribution. And if you’re not there, they are royally fucked with getting their damn printer running again. Or their nVidia GPU freezing the browser. Or Teams not working as well with their camera. In another context you could say you secured your job. If only because updates on Windows at least always happen, which is just not true on Linux.
I have seen people with a similar attitude installing rolling releases for other people while disabling updates over more than 6 months, because they didn’t have the time to care about all the regular breakage. And yes that includes the browser.
And the harsh truth is that for many people that printer driver, MS Office, Teams + Zoom and Camera is the reason they have this computer in the first place. So accepting their needs can include “Sorry I am not able to help with that” while also accepting that even mentioning linux to them is a bad idea.
Because it’s a nice thing to do for your family and friends and they’ll likely reciprocate if you need help with something different. Half of the time when I get a “tech support” call from my aunt or grandparents, it’s really just to provide reassurance with something and have a nice excuse to catch up.
Mine was of wasting hours trying to deal with issues with a commercial OS because, despite paying for it, support was nonexistent.
One example: Dell or Microsoft (unsure of the guilty party) pushed a driver update that enabled power saving on WiFi idle by default. That combined with a known bug in my MIL’s WiFi chipset, where it wouldn’t come out of power-saving mode. The end result was the symptom “the Internet stops working after a while but comes back if I reboot”.
Guess how much support she got from the retailer who sold her the laptop? Zip, zero, zilch, nada.
You’re not doing free technical support for your relatives, really: you’re doing free technical support for Dell, and Microsoft, and $BIG_RETAILER.
When Windows 11 comes around (her laptop won’t support it) I’m going to upgrade the system to Mint like the rest of my family :) If I’m going to donate my time I’d rather it be to a good cause.
Yes, that was never my experience, and if it had been I would be inclined to agree with you. These days I hear more of “why did I run out of iCloud storage again” or “did this extortion spammer actually hack my email,” which I find less frustrating to answer :)
Yeah it doesn’t matter for generic tech support, in my experience, what OS they’re running.
It’s just the rabbit holes where it’s soul destroying.
Another example was my wife’s laptop. She was a Dell XPS fan for years, and ran Windows. Once again a bad driver got pushed, and her machine took to blue-screening every few minutes. We narrowed it down to the specific Dell driver update. Fixed it by installing Mint :)
Edit: … and she’s now a happy Ryzen Framework 13 user. First non-XPS she’s owned since 2007.
Ugh. It’s not “toxic” to inform people of your real-world limitations.
My brother-in-law is a very experienced mechanic. But there are certain car brands he won’t touch because he doesn’t have the knowledge, equipment, or parts suppliers needed to do any kind of non-trivial work on them. If you were looking at buying a 10-year-old BMW in good shape that just needs a bit of work to be road-worthy, he would say, “Sorry, I can’t help you with that, I just don’t work on those. But if you end up with a Lexus or Acura, maybe we could talk.” He knows from prior experience that ANY time spent working on a car he has no training on would likely either result in wasted time or painting himself into an expensive corner, and everyone involved getting frustrated.
Similarly, my kids would prefer to have Windows laptops, so that they could play all the video games their peers are playing. However, I simply don’t know how to work on Windows. I don’t have the skills or tools. I haven’t touched Windows in 20 years and have forgotten most of what I knew back then. I don’t know how to install software (does it have an app store or other repository these days?), I don’t know how to do backups, I don’t know how to keep their data safe, and I don’t know how to fix a broken file system or shared library.
But I can do all of these things on Linux, so they have Linux laptops and get along just fine with them.
Edit: To color this, when I was in my 20s, I tried very hard to be “the computer guy” to everyone I knew, figuring that it would open doors for me somehow. What happened instead was that I found myself spending large amounts of my own free time trying to fix virus-laden, underpowered Celerons, and either getting nowhere or breaking their systems further because they were already on the edge. Inevitably, the end result was strained (or broken) relationships. Now, when I do someone a favor, I make sure it is something I know I can actually handle.
But he didn’t force anyone; he clearly says that if those people didn’t want his help, he could just leave things the way they were. To me that’s reasonable: you want my help, sure, but don’t make me do something I’m personally against. It’s like being asked, while working in a restaurant, to prepare meat dishes when you’re a vegetarian, except that my example is about work and his story is about helping someone, so there’s even less reason to do it against his own beliefs.
From my experience being an unpaid support technician for friends and family, that’s the only reasonable approach. I’ve had multiple situations where people called me to fix the result of someone else’s “work” and expected me to do it for free. It doesn’t work that way. Either I do it for free on my own terms, or you pay me the market rate.
Some examples I remember offhand. In one instance, I tried to teach a person with a malware-infested Windows some basic security practices, created an unprivileged account, and told them how to run things as administrator if they needed to install programs and so on. A few weeks later I was called to find the computer malware-infested again, because they asked someone else to help and he told them that creating a separate administrator account was “nonsense” and gave the user account administrator rights. Well, either you trust me and live more or less malware-free or you trust that guy and live with malware.
In another instance, I installed Linux for someone and put quite some effort into setting things up the way the person wanted. Some time later, they wanted some game but called someone else instead of me to help install it (I almost certainly would be able to make it run in Wine). That someone wiped out all my work and installed Windows to install that game.
People expecting you to be their personal IT team for free just because you “know computers” is just as disrespectful. I don’t think it’s unfair to tell people, “No, if you want help with your Windows system, you need to pay someone who actually deals with such things.”
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
This is looking at things in the current context. Windows nowadays is much more secure, and you can basically leave a Windows installation to a normal user and not expect it to explode.
However, at the time, Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware. Most users ran with admin accounts, and it was really easy to get malware installed by installing a random program, because things like binary signatures didn’t exist yet. There was also no anti-malware installed by default in Windows, so unless you had some third-party anti-malware installed, your computer could quickly become infested. And you couldn’t just refresh your installation by clicking one button; you would need to actually format and reinstall everything (which was annoying because drivers were much less likely to be included in the installation media, so you would need another computer with an internet connection, since the freshly installed Windows wouldn’t have any way to connect to the internet).
At that time, it made much more sense to try to convince users to switch to Linux. I did this with my mom, for example, switching her computer to Linux since most of what she did was access the internet. Migrating her to Linux reduced the amount of support I had to do from once a week to once a month (and instead of having to fix something, in most cases it was just updating the system).
It should be added that if you helped someone once with their Windows computer, you were considered responsible for every single problem happening on that computer afterward.
In some cases, it was an even more serious problem. (I remember a computer infected by malware that dialed a very expensive phone line all the time. That family had a completely crazy phone bill and no idea why. Let me assure you that they were really happy with Linux for the next 3 or 4 years.)
It should be added that if you helped someone once with their Windows computer, you were considered responsible for every single problem happening on that computer afterward.
Very much that. It was never the user’s fault; even if you left the computer in pristine condition, if they had an issue the same week, it was your fault and you would need to fix it.
However, at the time, Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware.
At the same time, however, it was also much more likely that you needed to deal with an application that would only run on windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on windows (remember winmodems? scanners sucked, too, and many printers were windows GDI only), etc.
So convincing someone to use Linux was more likely to cause them a different kind of pain.
Today, most hardware works reasonably with Linux. Printers need to work with iPhones and iPads, and that moved them off the GDI specific things that made them hard to support under Linux. Modems are no longer a thing for most people’s PCs. Proton makes a great many current games work with Linux. Linux browsers are first class. And Linux software handles most common file formats, even in a round trip, very well. So while there’s less need to switch someone to Linux, they’re also less likely to suffer if you do.
That said, I got married in 2002. Right after I got married, I got sent on a contract 2500 miles away from home on a temporary basis. My wife uses computers for office software, calendar, email, web browsing and not much else. She’s a competent user, but not able to troubleshoot very deeply on her own. Since she was working a job she considered temporary (and not career-track) at home, she decided to travel for that contract with me, and we lived in corporate housing. Her home computer at the time was an iMac. It wasn’t practical to bring that and we didn’t want to ship it.
The only spare laptop I had to bring with us, so she had something to use for web browsing and job hunting on the road, didn’t have a Windows license current enough to be trustworthy, so I installed Red Hat 7.3 (not enterprise!) on there for her. She didn’t have any trouble. She’d rather have had a Mac, but we couldn’t reasonably have afforded one at the time. It went fine, but I’d never have dared to try that with someone who didn’t live with me.
At the same time, however, it was also much more likely that you needed to deal with an application that would only run on windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on windows (remember winmodems? scanners sucked, too, and many printers were windows GDI only), etc.
Yes, but it really depends on the kind of user. I wouldn’t just recommend Linux unless I knew that every one of the user’s needs would be met by Linux. For example, for my mom: we had broadband Ethernet at the time, our printer worked better on Linux than Windows (thanks, CUPS!), and the rest of her tasks were basically done via the web browser.
It went fine, but I’d never have dared to try that with someone who didn’t live with me.
It also helped that she lived with me, for sure ;).
Try to avoid running two processes in the same pod. There are lightweight images that take only a few MBs to run. You won’t need a full-fledged init system.
The Node.js docs suggest tini for running Node in containers. Tini is a minimal init for containers: it runs your process as its child, forwards signals to it, and reaps zombie processes, so your application can spawn subprocesses without you having to handle signal propagation and reaping yourself.
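For illustration, a minimal sketch of what that looks like in a Dockerfile (the base image tag and the app’s entry point are assumptions, not anything from the parent comment):

    FROM node:20-alpine
    # tini is tiny; on Alpine it lands in /sbin/tini
    RUN apk add --no-cache tini
    WORKDIR /app
    COPY . .
    # tini runs as PID 1: it forwards signals to node and reaps any zombies
    ENTRYPOINT ["/sbin/tini", "--"]
    CMD ["node", "server.js"]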
Some applications are split in multiple processes and still need to be in the same PID namespace or even on the same filesystem to work.
For example, I have an NGINX docker image that runs:
NGINX (well duh)
NGINX Prometheus Exporter
A program that receives configuration from our CMDB via webhook, uses it to regenerate the nginx.conf, and then calls nginx -s reload
These are 3 tightly coupled processes. While it is possible to put them in separate Docker images, that would make orchestrating them overly complex.
The Docker image is a single “deployment unit”. Some “units” are made of more than one process.
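To illustrate the parent’s setup, an entrypoint for such an image might look roughly like this (a sketch only; the exporter and webhook-listener binary names and flags are made up, not the actual setup):

    #!/bin/bash
    # Start all three tightly coupled processes in one container.
    nginx -g 'daemon off;' &
    nginx-prometheus-exporter --nginx.scrape-uri=http://127.0.0.1/stub_status &
    cmdb-webhook-listener &   # regenerates nginx.conf, then runs `nginx -s reload`

    # If any of the three dies, exit so the orchestrator restarts the whole unit.
    wait -n
    exit $?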
The reason I care about running multiple processes in a single container is that there are numerous hosting providers these days that charge on a per-container basis.
Google Cloud Run and https://fly.io/ are two examples that I use a lot myself already.
They’re not exactly the same as regular Docker containers - they implement patterns like scale-to-zero, and under the hood they may be using some custom platform such as Firecracker - but the interface they provide to you asks you to build a container to run on them using a Dockerfile.
Very annoying that hosting providers have decided that “container” means the same thing as “VM”, when it’s much more about running a process in a restricted environment, leading to people having to optimize for this.
Granted, Docker also makes “just use one container per process” a whole thing. I should be able to define, in a single Dockerfile, a whole process tree (I believe it’s at least possible to get multiple images out of one Dockerfile now, via multi-stage targets; see the sketch below). Docker Compose being its own separate thing, instead of a core part of Docker, makes the whole thing oriented toward making life more legible for infra providers, rather than being a nice layer over cgroups etc.
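For what it’s worth, a sketch of the multiple-images-from-one-Dockerfile approach using multi-stage targets (stage names and files here are hypothetical):

    FROM nginx:alpine AS web
    COPY site/ /usr/share/nginx/html/

    FROM alpine:3 AS helper
    COPY regen-config.sh /usr/local/bin/
    ENTRYPOINT ["/usr/local/bin/regen-config.sh"]

Each image is then built separately with docker build --target web -t myapp-web . and docker build --target helper -t myapp-helper . — which still leaves you with two images to orchestrate rather than one process tree.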
I presume you mean multiple processes in the same container, not a pod? A pod running multiple containers, and thus processes, seems to be perfectly normal.
I was wondering why the author was putting multiple processes in one container as well. I respect “reasons” as a valid reason, and it was a cool article, but the precise “why” would also be nice.
A recent example: I’ve been making a Dockerfile for MTProxy, Telegram’s official implementation of a Telegram proxy. This proxy needs to be restarted often with fresh configuration downloaded from Telegram’s servers, and that is not done automatically.
This could probably be done by patching the C implementation to do it in-process directly, but the appeal of the Dockerfile is that it doesn’t change anything about the official implementation; it just compiles it and uses it. That makes it much easier to verify that I haven’t tampered with the proxy: a git diff against upstream shows that the only changes are the Dockerfile and the script that manages the restarts and configuration updates.
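The restart script isn’t quoted here, but its shape is roughly this (a sketch; the flags and endpoints follow MTProxy’s README, while the interval and secret handling are my assumptions):

    #!/bin/sh
    # Periodically refresh Telegram's proxy config, restarting the proxy each time.
    while true; do
        curl -sf https://core.telegram.org/getProxySecret -o proxy-secret
        curl -sf https://core.telegram.org/getProxyConfig -o proxy-multi.conf
        mtproto-proxy -u nobody -p 8888 -H 443 \
            -S "$MTPROXY_SECRET" --aes-pwd proxy-secret proxy-multi.conf &
        pid=$!
        sleep 86400                 # run for a day with this config
        kill "$pid"; wait "$pid" 2>/dev/null
    done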
There are many more examples, so, “reasons” is a good summary :)
https-portal is a good example: it’s an HTTPS proxy that uses s6-overlay (mentioned in the article as an alternative solution) to run nginx plus a cron job that automatically renews certificates.
s6-overlay itself explains the motivation for this some more.
I used and loved KDE in the 3.x days for its sheer power and flexibility. At version 4, they decided to abandon 3.x and spend several years rewriting KDE more or less from scratch. This forced me over to MATE (the GNOME 2 fork) and other GTK-based DEs for a good long while, but I would still give KDE an honest try once every year or two. And once every year or two, I would be disappointed by missing functionality or show-stopping bugs.
In the last few releases of the 5.x, I am happy to report that things have improved dramatically. I’ve been running whatever shipped with Debian 12 since before it was officially released and it’s been nothing but solid and a joy to use. Multi-monitor setups, bluetooth device management, weird networking configs, it’s happy to do it all.
I particularly love that KDE has learned its lesson about trying to reinvent the basic desktop computing paradigm. Instead, they doubled down on it and have (finally) embraced iterative improvements. EVERY other Linux DE that has gone down the path of reimagining how people are going to use their computers has either failed, or ended up with a DE so simple only a child can use it.
If any KDE devs are reading this, know that your core user base is with KDE precisely because you have not given in to the flat, button-less, whitespace-everywhere, mobile-all-the-things mentality that is so very trendy these days.
Every time I try KDE, I hit showstopper bugs, usually random crashes or weird hangs of the UI requiring a restart of the desktop session. It doesn’t really matter how polished and featureful the thing is if it isn’t usable.
This sounds a bit to me like graphics driver problems. I had all of that and more with an Nvidia card (different but equally critical issues on both X and Wayland), but with an AMD one for the last ~6 months I’ve not had one glitch.
edit: plasma6 on NixOS in both cases; Wayland working fine for me with AMD so haven’t had to try X.
I haven’t hit any major crashes, but every time I’ve used it I could rack up a laundry list of papercut bugs in fifteen minutes of attempting to customize the desktop. In mid 2022 on openSUSE Tumbleweed I was able to get the taskbar to resize a few pixels just by opening a customization menu, get widgets stuck under other widgets, spawn secondary bars that couldn’t be moved, etc.
I find it really depends on the combination of distribution, X11 vs. Wayland, and KDE version. I’ve had good luck with Debian – an older version of KDE (5.27), but quite stable. I tried Plasma 6 on both Fedora and Alpine and still found it a bit buggy.
I’m looking forward to trying out cosmic desktop once it is stable.
One more thought on this: a lot of people say that less config is more user-friendly, and… I think it is really more support-friendly. I know a few people who barely do computers at all, and one of them said he just wanted YouTube on this thing, so I put Kubuntu on it; YouTube works. I came back a couple months later and saw he had changed all kinds of stuff and thought it was cool. Wallpapers, sizes, colors, widgets.
He enjoyed fooling around with the config options. Took me a sec to get oriented, though, when I had to look at it again!
I have a family member of 80+ years whom I migrated to Kubuntu before Windows 7 support ended. The downside is that she likes to configure her desktop into oblivion. It isn’t the first time I’ve simply deleted everything and reset it after a year, so that I get back a working home menu.
But overall she is happy, and I don’t have to worry about random .exe and .xlsx files. In hindsight I could have bought her a MacBook instead, but there was no M1 back then and the UI is far too different from Windows. I don’t think it would have survived very long, either. Meanwhile, incremental (and locked-down) backups have been keeping my sanity at least once per year.
That reminds me… I have children and one of the benefits of living in my household is that children get free computers! But the catch is that they have to run Linux because I don’t know how to effectively manage Windows and Macs are too expensive to risk it. My daughter is in high school and is comfortable with technology but is not what you’d call “a computer person.” Even so, she’s been running KDE on Debian for 1.5 years with practically zero assistance from me.
I have one sister who got used to Linux because that’s what I gave her. She now has macOS, Windows, and Linux available, but apparently she just feels more comfortable with Linux.
The Mac OS X-like dock at the bottom of the screen has been my preferred way of opening and managing running applications since I first got an iBook G4 more than 20 years ago, and to this day I find it far superior to any alternatives.
a few paragraphs later
Another feature coming straight from my days using Mac OS X is KDE’s equivalent of Exposé, called Overview, without which I wouldn’t know how to find a window if my life depended on it.
This is what taskbars are for!
In any case, I’m not a KDE user anymore (I made all my own stuff nowadays, with the old Blackbox window manager as a base), but I feel much the same way as this author about the alternatives. When you have your own way of doing things, the others keep wanting you to do it their way… and the worst part is that their way changes almost randomly on you. Some update comes down, things change, and you’re just kinda stuck with it. Freedom from that is very appealing, and it’s the main reason why I use Linux at all.
I feel exactly the same way. It’s one thing for a developer to say, “I designed this software to work the best I know how, contributions or forks are welcome,” and quite another to say, “I’ve designed the objectively best UI and everyone else is wrong.” The latter bit of snobbery unfortunately seems to be the trend these days.
KDE stands out among desktop environments for being willing to meet users where they are instead of demanding that users adapt themselves to a specific arbitrary workflow.
https://biodigitaljazz.net is my attempt at a pure hacker site. I.e. I have absolutely no goals for this site other than to create interesting things and blog about things that are interesting to my hacker brain.
I’m in the “knows enough to be dangerous” camp of programming: it’s not my day job, but I do enough of it as a hobby to get an intuitive sense for what’s good and what’s bad. Interesting that I managed to reach largely the same conclusions on these topics as someone who has been doing it far more professionally than me!
If Oracle do defend it and win, it could be an opportunity to move to a better name than JavaScript.
The same opportunity exists regardless of what Oracle does. The problem is that there isn’t a better name than JavaScript, because that is what everyone has called it for the last quarter century.
Of course we already have another name, although I don’t think many would call it better. It is kind of yecky.
At least ECMAScript is a little better than HATEOAS! /shrug
A bit tongue in cheek, but we’ve already got another name: “It’s just Scheme with curly brace syntax” — Douglas Crockford
I feel like there is a FAR better chance of stopping the use of “SSL” in favor of “TLS.” We’ve had TLS for 25 years and all versions of SSL have been formally deprecated for 10. Other than sheer habit, there is no reason for people to keep saying SSL when they mean TLS, other than in the names of some software and libraries that pre-dated TLS.
And yet, it likely will never happen. When we look at JavaScript, the situation is thornier by an order of magnitude or more. It is a MUCH deeper and entrenched name. As any middle-schooler will attest to, names tend to stick, even when we don’t want them to.
This is kind of beside the point, but I do not understand why the IETF went through a phase of renaming protocols that they adopted, eg, SSL -> TLS, Jabber -> XMPP, BEEP -> BXXP (probably others I forget). At least they seem to have renamed things less in recent years.
SSL to TLS was for the sake of MS’s ego, apparently: https://tim.dierks.org/2014/05/security-standards-and-name-changes-in.html?m=1
Jabber to XMPP was a trademark thing if I recall correctly. (EDIT: apparently I did, Cisco owns the trademark but sublicenses it to the foundation: https://xmpp.org/about/xsf/jabber-trademark/background/)
Don’t have any knowledge on BEEP
My son’s school has a greenhouse and he wanted to be able to monitor the outside and inside temperatures. So we are currently working on an ESP8266 with two themometers connected which will talk to a Rasberry Pi running Home Assistant OS. I remember dabbling with this stuff years back but thanks to the hard-working hackers behind this stuff, now everything is just dead simple and it all Just Works. If I didn’t have a thousand other projects in the queue already, I would love to go whole-hog on home automation. (It’s something I’ve been dreaming about for decades.)
Honestly, I recommend not going whole-hog and instead dipping your toe in.
It’s the perfect hobby to have on the side over the long term and only dabble with here and there as time permits. My journey has basically been:
It’s taken me about a year and a half to get to step 7. I probably average ~15-30 minutes / week chipping away at it. As a parent with almost no free time, it’s been a great little hobby!
Note that you must have a Broadcom customer account in order to download this. There’s no direct download link that I could find.
The macOS/Linux download URLs can be found here:
https://github.com/Homebrew/homebrew-cask/blob/5ed183027f71e684e73538995160bc724ab42ff4/Casks/v/vmware-fusion.rb#L5 https://aur.archlinux.org/cgit/aur.git/tree/PKGBUILD?h=vmware-workstation&id=140ce4d9cbffc2044cdf2c1008f5d8f4987fd613#n63
Windows? 🤷
Does anyone have any insight into this, & what latest is going on at VMWare after the acquisition? Is this a good thing for Fusion/Workstation users, or just the first step on a long road of gradual decay toward an eventual unsupported dusty death?
My suspicion is that Broadcom is winding down these products but have to maintain them for at least a few years yet due to existing support contracts. Someone in middle management thought it would improve the company’s image a little bit to release these for “free” while they are still supported.
(The company I work for has a lot of vSphere, which was already eye-wateringly expensive before VMware was purchased by private equity. Earlier this year when it came time to renew the support contracts, they literally tripled the price. Our company said, “no thanks,” and we are now running thousands of vSphere hosts and a bunch of vCenters with zero support while whole teams scramble to transition our services to a mix of OpenStack and Kubernetes.
“May you live in interesting times.”
I don’t know. Could be @icefox’s Tenth Law.
Oooh, what’s that?
https://lobste.rs/s/u3t4sg/xmpp_forgotten_gem_instant_messaging#c_rawvsq has it as
“@icefox’s Tenth Law: Never attribute to anything else what can be explained by embrace-extend-extinguish.”
It’s something I named in this other thread:
https://lobste.rs/s/4ll6vo/vmware_fusion_workstation_now_free_for#c_zfwmi4
(I may have been wrong about it that time.)
I’m not a heavy user of either product, but Broadcom previously added a kind of non-commercial license for both that was useful to me for playing with retro operating systems and checking/improving various FreeBSD emulated-device drivers. From a casual perspective it seems like they are still working on both, and neither seems to have been the main focus of VMware before the acquisition, so there’s been no perceptible change in quality (which is merely acceptable).
It’s a little crazy this didn’t happen a long time ago, under VMware, to try to keep some level of relevance for the underlying hypervisor and device model. People seem to think Broadcom is the only greedy company, but VMware was always a very greedy company.
I really don’t like the terminology “soft link” and “hard link” because it obfuscates what is going on and that ends up becoming a source of confusion, as the author demonstrated by the need to write the article.
A “symbolic link” is a file that is just a reference to a different filename. It’s no more complicated than that. A “hard link” doesn’t actually exist as a thing, it’s just what we happen to call the situation where two different filenames point to the same inode. Once you know these two things, it’s easy to reason about when you can (or should) use one or the other.
For extra credit, learn about reflinks, which are similar to “hard links” except that modifying one of the filenames creates a new copy of the file instead of modifying the data referenced by the inode. (Sadly, they are still not well supported by many applications that would benefit from them, such as rsync.)
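A quick way to see all three side by side (GNU coreutils; reflinks need a filesystem such as Btrfs or XFS):

    ln -s original.txt sym.txt                   # symlink: a file containing the target's name
    ln original.txt hard.txt                     # hard link: second directory entry, same inode
    cp --reflink=always original.txt ref.txt     # reflink: shared blocks, copy-on-write
    ls -li original.txt sym.txt hard.txt ref.txt # hard.txt shares original's inode number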
The 7th Edition manual (which was before symbolic links were a thing) says “A link is a directory entry referring to a file; the same file […] may have several links to it.”
When symbolic links were introduced, the man pages for ln(1) and link(2) were changed to describe non-symbolic links as “hard links”. The “hard” term sort of makes sense because the link can’t be broken without being removed entirely, unlike a symbolic link.
It’s reasonable to dislike the terminology, but “hard” is the standard terminology and I don’t think there are any alternatives.
I tried Alpine out on some of my servers, and still run it on some of them, but the lack of unattended upgrades limits where I use it. No unattended upgrades is fine if you have other ways of automatically handling security updates, but at home I’m not always going to manually apply them.
Super great OS though!
I’m not super familiar with Alpine, but wouldn’t unattended upgrades simply be a matter of a cron job that runs a shell script that checks for package updates, installs them, and then reboots the host?
Sure but when you’re new to an OS there are details that are hard to get right there. How do you disable input prompts for the packager? How do you only reboot for security updates? Can you just schedule the reboot instead of doing it automatically? What happens if you have an error in the script and it’s not actually updating ever?
There’s a reason it’s built-in to other operating systems.
I use apk upgrade --available. There are all kinds of automatic ways to do it; I just have a cron job that reboots the servers and does it every night.
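For anyone wanting to replicate that, a rough sketch of such a job on Alpine (untested; assumes the default busybox crond, and the unconditional nightly reboot is exactly the blunt instrument the earlier comments warn about):

    #!/bin/sh
    # /etc/periodic/daily/auto-upgrade (make it executable; no file extension)
    apk update && apk upgrade --available && reboot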
I have a few questions for those who have been experimenting with self-hosting their own LLMs.
To set the context (hurr): I am someone who uses LLMs a few times a day. I bounce around between chatgpt.com and ddg.co/chat depending on my mood. I generally use an LLM as a substitute for Google (et al) because web search engines have become borderline useless over the last decade or so due to the natural incentives of an ad-based business model. I find that the LLMs are correct often enough to offset the amount of time I spend chasing a non-existent made-up rabbit hole. I treat them like Wikipedia: good as a starting point, but fatal as a primary source.
But I still don’t know much about a lot of the concepts and terms used in the article. I know that the bigger a model is, the “better” it is. But I don’t know what’s actually inside a model. I only sort of get the concept of context, and I have no idea what quantization means outside of the common definition. This is not meant as a critique of the article, just a statement of my level of knowledge with regard to AI technology. (Very little!)
That said, hypothetically, let’s say that the most powerful machine I have on hand is a four-year-old laptop with 6 CPU cores (12 hyperthreads), 64 GB of RAM, and no discrete GPU. It already runs Linux. Is there a way I can just download and run one of these self-hosted LLMs on demand via Docker or inside a VM? If so, which one, and where do I get it? And would it be a reasonable substitute for any of the free LLMs that I currently use in a private window without a login? Will it work okay to generate boilerplate or template code for programming/HTML/YAML, or do you need a different model for those?
I have heard that running an LLM on a CPU means the answers take longer to write themselves out. Which is okay, up to a point… waiting up to about a minute or two for a likely correct and useful answer would be workable but anything longer than that would be useless as I will just get impatient and jump to ddg.co/chat.
One way to think of a model is that it’s effectively a big pile of huge floating-point matrices (“layers”), and when you run a prompt you are running a HUGE number of matrix multiplication operations - that’s why GPUs are useful: they’re really fast at running that kind of thing in parallel.
A simplified way to think about quantization is that it’s about dropping the number of decimals in those floating point numbers - it turns out you can still get useful results even if you drop their size quite a bit.
I suggest trying out a model using a llamafile - it’s a deviously clever trick where you download a multi-GB binary file and treat it as an executable - it bundles the model and the software needed to run it (as a web server) and, weirdly, that same binary can run on Windows and Mac and Linux.
I wrote more about these when they first came out last year: https://simonwillison.net/2023/Nov/29/llamafile/
I’d suggest trying one of the llama 3.1 ones from https://huggingface.co/Mozilla/Meta-Llama-3.1-8B-llamafile/tree/main - should work fine on CPU.
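Getting started really is just a download; something like the following, where the exact file name is illustrative (check the repo listing for current names):

    # download one of the .llamafile files from the repository above, then:
    chmod +x Meta-Llama-3.1-8B-Instruct.Q6_K.llamafile
    ./Meta-Llama-3.1-8B-Instruct.Q6_K.llamafile
    # it starts a local web server (default http://127.0.0.1:8080) with a chat UI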
I’ve used a few projects to run local models. They work both on my Ryzen CPU and on my Radeon GPU:
With ollama, there are a few web UIs similar to ChatGPT, but you can also pipe text to it from the CLI. Ollama integrates with editors such as Zed, so you can use a local model for your coding tasks.
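For example, with ollama installed, a CLI session might look like this (the model tag is an assumption; check their model library for current names):

    ollama pull llama3.1:8b
    echo "Write a minimal docker-compose.yml for nginx" | ollama run llama3.1:8b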
I haven’t been a regular user on IRC in about 2 decades. What are some channels worth hanging out in these days?
I’d say most public channels on OFTC and Libera are worth checking out if the channel name lines up with stuff you’re into. Some channels aren’t as active as they used to be back in the old freenode days, but the ones with like 200+ users are usually still pretty active.
Ugh, more advocacy of non-root SSH (why?) and Fail2Ban (ewwwwwwwww).
Yes, disabling password SSH is a good idea. Yes, firewalls can be a healthy part of a balanced security breakfast, although I wouldn’t recommend them until you get to the scale of hiring employees, having security compliance standards, that kind of thing.
But seriously, I think the risk of giving up on ever using linux because the permissions are too frustrating is way more significant for new users than the risk associated with driving around as root all the time.
Also, I’d like to echo the sentiment that SSH’s entire security model is predicated on the ASSUMPTION that the user has verified the server’s SSH host public keys ahead of time (i.e., it’s up to you whether “trust on first use” means “verify on first use” or “hope on first use”).
This is definitely a bit of a tinfoil-hat thing, but IMO it’s worth pointing out in an article like this.
If there’s a backdoor/vulnerability in OpenSSH, the attacker only gains local user access, not immediately root.
If TOFU bothers you so much, that’s what SSH certificates are for. :)
why no non-root ssh? Attack surface, a guessable login name.
OK, but do you really think that anyone is going to make their username by hexdump-ing /dev/urandom? And SSH should be sealed off against credential stuffing anyway; password SSH should not be used.
Non-root SSH is an issue in itself because it significantly complicates the usability of the CLI. And considering that this article seems to be aimed at new-ish Unixy CLI users, usability challenges are a much bigger security risk!
I.e., with non-root login, the user is more likely to fail to edit /etc/ssh/sshd_config and never be able to disable password SSH login, because they can’t navigate the permissions issues. Having their first name or screen name as their username is probably not much of a credential-stuffing defense compared to disabling password SSH in the first place.
Basically what I’m saying is, this whole article should really just be about disabling password SSH, complaining that cloud providers don’t always provide the SSH host public keys and hashes, and enabling symmetric encryption on the SSH key on your client machine.
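Concretely, the whole argument is about two or three lines of /etc/ssh/sshd_config (standard OpenSSH directives; pick your own PermitRootLogin stance):

    PasswordAuthentication no
    PubkeyAuthentication yes
    # what this thread is really arguing about; prohibit-password allows
    # root login with keys only, and is also the OpenSSH default
    PermitRootLogin prohibit-password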
I’ve never heard anyone describe the permissions model of using sudo to perform administrative tasks as complicated.
If someone is struggling with that, the advice is to learn about it not to disable it. It’s useful for auditing your own activity if you have to consciously type sudo to perform certain tasks.
There are people who seem to advise just running sudo for everything, but if you don’t understand why you’re using sudo, then you should be pausing to think and asking someone.
Why should we normalise bad practices because people are struggling to bother to learn the good practices?
For the same reason we normalized using a GUI: It’s easier. It takes less time to learn, and it takes less time to do stuff once you have learned how to do it.
I’m not even convinced it’s a bad practice. It depends. If all I do is log in, edit docker-compose.yml or some systemd service unit file, and then restart a service, why does it matter whether I’m logging in as root or not?
I can see that, but when we think about “security” as managing risk, for a personal server IMO the primary risk comes from the admin themselves. I think security and usability are two sides of the same coin there. A system which is easier to use will be less risky, because the user will have more accurate and relevant information about what’s going on. They will be less likely to be deceived or overwhelmed by information that’s not relevant to what they’re trying to achieve. They’ll probably be more confident and in a better mood, too.
Mode switches in CLIs are notoriously hard for new CLI users to cope with (How to exit vim?? How to exit the pager??). sudo is another mode switch, and it takes mental effort (and, critically, experience) to fully understand it and not be surprised by aspects of it. (Why can I run program x as my user, but when I try to run it under sudo it doesn’t work or can’t be found? I used sudo -i; now why is half of my shell history missing???)
So mostly what I’m saying is: at least for my imagined use case of installing server apps on a VPS and then using that software over the network (rather than logging into the server and using its CLI environment to mess around and do development), the place where users and permissions matter is in how the apps run, not how the user logs in.
So if we can get a massive usability win from logging in as root (removing an entire mode, PLUS mostly removing permissions issues), it’s definitely worth the marginally increased risk of an accidental rm -rf /.
For most tasks I don’t think there is a GUI which is more time-efficient than a terminal.
If that’s all you are using a system to do, get someone else to manage things and expose a docker-compose.yml file upload and a “restart service” button.
The solution to letting people do things on Linux when they don’t want to learn Linux is not to teach them how to disable security features. It’s to present them with limited tools which enable them to do what they need to do.
If someone gets in a bad mood because they have to run sudo to maintain their server then they have no business managing a server at that low a level. Maybe give them something like cpanel but more modern.
Maybe they are difficult; I can’t remember struggling with them. Regardless, it’s a one-time learning cost. You’re advocating in favour of permanent security weaknesses to mitigate a one-off learning obstacle.
Every time I’ve seen someone actually use a machine as root, they end up messing up permissions in various places and weakening things such that, for example, remote code execution as a lower-privileged user could be abused to escalate to root. This is a real-world security weakness which can easily be introduced by someone logged in as root without a real understanding of the permission model.
Moreover, every time I’ve seen someone use a machine as root without the necessary awareness of why this is generally considered a bad idea, they’ve never really learned the permission model. They still reached for “sudo” to run any failing command. It’s not a good idea to cater for people like that, because if they’re unwilling to learn then making it so that they won’t have to learn won’t make things better.
Some gates are best kept locked if you’re too impatient to read the instructions on where to find the key.
sudo is a bad practice https://lobste.rs/s/ldkfdg/sudon_t
Bad practice implies some kind of consensus. One guy’s blog post isn’t equivalent to consensus.
Yes, sudo is over-complicated, that’s why I personally don’t use it in favour of doas (which the author misrepresents as something which “still implements most of a sudo-style rules language” which is so far from reality that I have to assume the author doesn’t actually know just how horrific the sudo rule language is).
But bad practice? No, definitely not. In your informal environment sudo is probably configured poorly and the auditing capability it offers isn’t actually fed into anything which could analyse it. But in any non-amateur setup it’s common to actually use the auditing and the permissions rules of tools such as doas and sudo to implement proper access control.
It doesn’t take /dev/random for a username to be harder to guess than ‘root’.
If you think the article should be different, write it and show it.
I don’t understand what disallowing logging in as root is supposed to achieve.
What’s the real difference between cracking into a sudo-enabled account vs. root? If you have SSH password login enabled, you’re screwed either way, because if you know the password, you can log in and use sudo. This blog post rightly suggests disabling password auth, so that’s not a threat in our case.
So if you disable logging in as root, you will now need both an SSH key and the password to conduct administrative actions.
But is cracking an SSH key even a practical threat to worry about in the first place? (Assuming your private key is encrypted with a passphrase.)
It’s a low hanging fruit to reduce attack surface a bit.
And having a key and a password (both your own, I hope, to sudo) seems natural to me. root is available mostly everywhere; what your own user is called, less so. A little security by obscurity.
Also said this in another comment, but if there’s an OpenSSH vuln (or a backdoor like xz), the attacker doesn’t instantly get root.
ssh user followed by su root can also be seen as 2FA for root.
Assuming the vulnerability/backdoor allows only limited authentication bypass (e.g., it didn’t help at all against the xz backdoor, AFAIK).
There are a couple of reasons:
Are you thinking of a more “enterprise” context than an individual’s “$5 VPS … with a budget VPS provider”?
Today’s hobbyist can be tomorrow’s sysadmin.
Well, maybe the key is not encrypted with a passphrase, or maybe it gets leaked and the passphrase guessed, or maybe you misconfigured something and accidentally allowed password logins so the root password can be brute-forced, or anything else.
There’s no disadvantage to disabling root logins in the majority of circumstances - most times I’ve wished for it have been oops moments where I’ve screwed something up and need to rescue, and there have always been better alternatives such as using the provider’s console.
So it’s defence in depth.
I believe with me this is mostly habit.
I already want that separate user (so that I don’t make a mess by accident). Then since I’m already there, I’ll just also flip root to no, so that a script-kiddy doesn’t get lucky by accident.
I know the author touches on this, but for the sake of my own soapbox: “one process per container” was never an actual rule, just dogma perpetuated by people with little real-world devops experience. Yes, MOST containers need only run one process (an application server, typically), but occasionally you need to run something where it is either impossible or needlessly fragile to break it up into multiple containers. Especially if your application spawns its own processes and does not contain code to do typical PID 1 things, it should be managed by something that can, like tini, or this shell script.
The actual rule is, “a container should do only one thing.” This is entirely analogous to modularity in code, where a function (ideally) only does one thing but occasionally it makes more sense to just make it do two when the alternative is torturous design.
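For reference, the “typical PID 1 things” amount to roughly this hedged sketch (illustrative only; a real init like tini handles more edge cases, such as TTY handling and exit-code mapping):

    #!/bin/sh
    # Minimal init: run the workload, forward termination signals, reap children.
    "$@" &
    child=$!
    trap 'kill -TERM "$child" 2>/dev/null' TERM INT
    # As PID 1, the shell also reaps orphaned grandchildren re-parented to it.
    wait "$child"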
I think 20 years ago the Linux desktop worked better than today. KDE 3 was really nice. Audio still worked. Wifi wasn’t as much of a must-have yet. There were some companies porting games to Linux. Distros weren’t constantly tracking you and trying to monetize you. There was no in-fighting regarding init systems. You didn’t have that mess of package manager + snap + flatpak. Third-party stuff usually compiled by default with ./configure && make && make install. Even Skype just worked. The community was made up far less of self-promoters; instead you got quick, competent help, just as you did as a Windows user at the time.
People’s main complaint was that there wasn’t any good video editing and graphics software. Wine worked okay-ish. (For some stuff one used commercial offerings.)
The only thing that was a bit messy depending on the distribution were NVIDIA drivers. Maybe Flash Videos, but I think they actually worked.
It even was so good that without much technical background or even English skills one could not just use Linux, but get an old computer, install NetBSD and happily use it on the desktop.
I think the average experience today (let’s say Ubuntu and Manjaro, for example) is that you’ll spend months gluing together something you can maybe live with. I think parts of the Linux desktop use design principles that are technically nice but give users a hard time. An example: creating something like a shortcut used to be easy in desktop environments. Today there is a standard for applications, which is nice, but it bugs the user who just wants to create a shortcut. I am not sure what happened to GUIs for creating .desktop files?
I don’t know if it’s fair to say that it was better. Lots of modern problems have their back-in-the-day equivalents. You didn’t have to fight with package manager + snap + flatpak, but pre-yum RPM was a pain. Lots of third-party stuff was compiled with imake, and haha, good luck running it on anything except Red Hat Linux 9.0 or whatever. Skype just worked insofar as the microphone worked, which wasn’t always a given.
What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows. We are, at best, about 10-15 years into the lifetime of current desktop technologies, which is why, adjusting for the significantly increased complexity, we’re not much further in terms of stability and capabilities than where we were in 2007 or so.
I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.
20-25 years ago, cool kids dreamed of writing a better window manager, or file manager, or browser, because that’s what was hot. 10-15 years ago, cool kids were writing phone apps; if they used Linux, they wrote web apps. So there weren’t as many fresh ideas (and fresh heads) going into desktop development. That made desktop development both slower (platform complexity grew in every aspect, from font rendering to hardware-accelerated drawing, much more quickly than spare development time) and more divisive.
You’ve put this so well, thank you
It was. But that’s why I simply didn’t use RPM-based systems. I never understood why people like to go through that pain.
Huh? What software?
As mentioned, I never had a problem with that, ever. I mean it. I met my girlfriend online during that time. Skype calls, even very long ones, never had a problem, and she was the first person I used Skype with. Since that’s how I got to know my girlfriend, I remember that time vividly. Meanwhile, I constantly run into oddities with Discord, Slack, and Teams, if they even work at all. Again, not even using Bluetooth; just an audio jack, so the setup should be simple.
I’m not intending to burn existing technology; I have no reason to. I’m just stating that things that used to work don’t work anymore, which can have lots of (potentially good) reasons. I also think the idea that software is magically better today is wrong, and a lot of the “you just remember wrong” claims are very, very wrong. Same thing with games, by the way; I put that to the test. Claims like “you are just older, and as a child things are more exciting” were simply wrong in my case, just like going back to old utilities. I have no interest in putting new software down in any way. I hate that old-vs-new software farce. It’s simply a question of whether stuff works or doesn’t.
I’d argue that there is currently not much going on on the Linux desktop side. There are good reasons: people don’t use desktops as much anymore, and desktops aren’t their main focus. People have phones, apps, smart TVs, etc. Lots of people who would have run Linux back in the day now run macOS. It is well known that fewer people work on desktop environments, and when stuff gets more complex, one needs to support a lot more things. On top of that, all the development moved into the browser over the last two decades. People don’t really create desktop applications, and by extension desktop environments, anymore.
So of course, if developer focus shifts, other stuff will be better in the open source world. Non-tech people can run LLMs on their home PCs if they invest 15 minutes. People share their video collections. There is open source social media that is actually used. Graphics software is a ton better. With Godot there is a really great game engine. People create programming languages. LLVM is amazing. There are finally hobby OSs (SerenityOS, etc.) again. All of these are really great.
Just that my personal desktop experience was better back then and I think a really big reason for that is that the focus of a desktop was more narrow.
You can always not use Snap or Flatpak today. That doesn’t mean no one does, just like it didn’t mean no one used RPM back in the day. I don’t use either, and I’m definitely happier with my packaging experience than back in 2004-2006-ish (which I’m guessing is the period you mainly have in mind, based on Skype?). (Edit:) I didn’t use RPM-based distros back then either, so it’s not because of yum :-).
The ones I remember most vividly are Maya, Matlab, and… pretty much any GIS software of the time. Same as above – maybe you didn’t use them, but that doesn’t mean no one needed them. Any desktop will work flawlessly if you only pick applications that happen to work flawlessly :-).
That makes sense. You are right, but you brought up RPM, that’s why my response was about how I never understood it. :-)
Probably I got lucky with software then; I wasn’t using any of it. I got into GIS a bit later, so it looks like I either just avoided that or package managers abstracted it away for me.
I think x64k was referring to the desktop environment developers as “burning … existing technology” by rewriting their components rather than polishing the old, working ones that you liked.
The interesting thing is that macOS has kept using the same core tech for the desktop environment during that time period (Quartz, Cocoa, etc.), though recently things like SwiftUI were introduced. I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks, or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their résumés.
Though I think an important difference between KDE and GNOME is that KDE development is also driven more by Qt. At some point the owner of Qt (I think it was still Trolltech at the time) said Qt Widgets were now legacy and wouldn’t see new development, and KDE had to rebase on QML/Qt Quick, resulting in Plasma.
It’s a combination of factors, really. Nautilus, for example, was literally written by ex-Apple people, and had many similarities both to file managers of that era and to Finder in particular. So there was certainly no shortage of people to get it right the first time.
IMHO a bigger factor, especially in the last 10-15 years, was that FOSS desktop development is a lot more open-loop. People come up with designs and just run with them, which is a lot easier to do when you don’t have to worry about supporting multiple install bases, paying customers ditching you, and so on.
That’s very useful in many ways but one unfortunate side-effect is that, for all their combative attitude towards closed platforms, major FOSS desktop projects relentlessly chase every fad in closed platforms, too, including the ones that just don’t make sense, or in interpretations that just don’t make sense, like app stores (and I’m not referring to Flathub here; ironically, I think that’s the only platform of this sort that actually makes some sense, at least there’s a special packaging and distribution technology behind it; I’m thinking more of things like the Ubuntu Software Center). App store-like platforms could fulfill a very useful social role, serving e.g. as platforms for donations, bug bounties, feedback etc. – but instead they act just like their closed-source counterparts, with nothing but votes and ratings, to the point where they’re just Synaptic with votes and weird package management bugs.
~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines, but I still had to write my touchpad config manually. For audio, ALSA was the thing, but aRts was also a thing, and ESD was a thing, and OSS was still a thing, and audio worked only if a single program or sound server had exclusive control of your sound device, because anything else was a slow descent into madness.
My goggles are absolutely not rose-tinted, and I’ll recommend a current-day bog-standard Ubuntu install any day of the week because of the sheer size of the user base and the amount of info/software targeted at that ecosystem. And if you are tired of Canonical going their own way every other year, Debian is a fine replacement that mostly works the same way.
Oh, yeah, that was a major inflection point in the community as well: after that point there was no reason to run xf86config or xorgconfig, so it suddenly became impossible to shame the noobs who used those instead of editing XF86Config by hand like Real Men.
ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling, bug for newbies.
If aRts couldn’t claim exclusive control over the sound card – because, say, XMMS had exclusive control over it through its ALSA output plugin – it didn’t play anything, but it did continue to buffer whatever you sent to it, and would begin to play it as soon as it could claim control over the sound card. I learned that when I began using Kopete (which, for a while, had the best Yahoo! Messenger support) on a fresh install. I hadn’t changed XMMS’s output plugin to aRts, so none of the “Ping!“s made it to the sound card…
…until I stopped XMMS, at which point ARTS faithfully played every single Kopete alert it had received in the last hour or so (or however long its ring buffer was).
This was actually worse than it sounds – in this case it was just a particular configuration quirk, but the real problem was that not all software supported all sound servers, or at least not very well. E.g., Gaim (which later became Pidgin) supported aRts, but it was a little crashy, and in any case aRts support became mainstream among non-KDE software relatively late in the 3.x cycle. Even as late as 3.2, I think, it was more or less a fact of life that you had one or two applications (especially games) where you just lived with them taking over your sound card and silencing everything else for a while. Truly a golden age of desktop software :-).
Some things the younger generation of Linux users may not remember: you could copy your install CDs to image files (with dd or cat from /dev/cdrom to a file) and mount all of them, but not everyone had that kind of space.
Actually, I loved that behavior and would use it deliberately to queue things. I resisted moving to ALSA for quite some time because I liked how things blocked. Oh well.
(I started with Linux in 2004, btw, not before. It was solidly OK; I think I dodged much of the pain people talk about.)
Well, sure, it was fun if it was the IM client that blocked. Not as fun when it was an actually important alert, or when the browser queued some ear-bleeding noise from a Flash intro, or when you couldn’t listen to music or watch a movie until you quit the offending app.
ALSA softmixing STILL doesn’t work. I tried running a server-less setup a few years ago. Applications just took exclusive control anyway.
It works just fine, I still use it today. You might need to configure it though; distro configs based on PulseAudio tend not to enable it since they assume PA is doing it anyway.
/etc/asound.conf will define things based on dmix (for playback) and dsnoop (for recording) if ALSA mixing is enabled, and pcm.default will refer to that pseudo-device instead of the hardware.
(for recording) if alsa mixing is enabled. and the pcm.default will refer to that pseudodevice instead of the hardware.I haven’t tried it in a long time (basically since
apulse
just didn’t cut it anymore :-) ) but back then, though late, it worked just fine.It’s been a while so I don’t remember the details, but I think what happened was that apulse wasn’t really working, so I used the built in ALSA support in Firefox (code is still there last I checked, just disabled in the default build config) which took exclusive control.
I don’t know how that’s handled post-PulseAudio or how well it works. But I am 100% sure it worked. The only reason I stopped using it was that PulseAudio became a dependency pretty much everywhere so yanking it out and dealing with the fallout was about as much trouble as using it.
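For the curious, here’s a minimal sketch of what such an /etc/asound.conf can look like – the card and device numbers are assumptions, so check yours with aplay -l first:

    # /etc/asound.conf -- minimal dmix/dsnoop setup (assumes card 0, device 0)
    pcm.dmixed {
        type dmix
        ipc_key 1024            # any integer unique among dmix users
        slave.pcm "hw:0,0"
    }
    pcm.dsnooped {
        type dsnoop
        ipc_key 2048
        slave.pcm "hw:0,0"
    }
    pcm.duplex {
        type asym
        playback.pcm "dmixed"
        capture.pcm "dsnooped"
    }
    # route the default device through plug so formats/rates get converted
    pcm.!default {
        type plug
        slave.pcm "duplex"
    }

With something like that in place, plain ALSA applications share the card through dmix instead of fighting over exclusive access to hw:0,0.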
I find this to be just not true. While Linux today is undoubtedly composed of more complex subsystems like PipeWire and systemd, it lets you effortlessly use Bluetooth headsets, play high-end games with comparable performance, and even (and this was unthinkable back then) do music production with incredibly full-featured software. Maybe the simplicity of yore was enjoyable, but Linux today is a lot more capable.
I think you have a pretty skewed picture and I’m happy it worked that well for you back then.
I certainly had linux on the desktop in the late 90s, but it just wasn’t great. Our shared computer pool at university (I started in 2003) worked perfectly fine, but it was curated hardware and some people put in some real effort for it.
I bought my first laptop in 2004; a friend had the predecessor model, which is how I knew it was relatively OK for Linux… and yet I moved to FreeBSD because of a couple of things (one of them wifi). It just wasn’t great if you wanted it to “just work”[tm].
Compare to today: people were kinda surprised when I said that I have no sound on my desktop, although games run at 120 FPS out of the box with the 3070. Turns out it’s a commonly known problem with this exact mainboard chipset, and plugging in the only USB sound card I ever owned… it just works. All I’m saying is that I have not had real problems (that weren’t easily solved) for about 10 years, but more than ~15 years ago everything took a lot of time to get running smoothly… that’s my experience.
That’s the whole point. Linux on the Desktop was great once it was configured.
Getting it configured was the hard part. We even had “install parties” because most people could just not do that configuration themselves.
FWIW, I agree much more with your original post than with the comment I replied to.
I guess my main point is that while I’m not averse to configuring stuff, I’ve always held the view that you should be able to do it in a reasonable time with a modest amount of knowledge. And very often the drivers simply weren’t there, so without switching hardware you were just out of luck, and it was not rare.
And what makes you think that? Sounds a bit like an idle claim. ;)
20 years ago was 2004, not the 90s. I used Linux as my main OS back then. Was shortly before I had a long run of NetBSD. Double checked some old emails, messages.
Mine was on a system from a local hardware store’s own brand; the NetBSD one was installed on a computer from a government organization sale.
I do. Today audio burns through CPU cycles, is wonky, has buffer underruns out of the box on some systems, randomly kills YouTube, and comes out of my speakers rather than my headphones (plugged into the audio jack, so not even Bluetooth) when I reboot. I never used to have any audio problems back then. Not with games, not with Skype.
Meanwhile my games (and I played a lot of games back then being young) need stuff like gamemoderun to be playable. Back then Enemy Territory (with so many mods), Majesty, Neverwinter Nights, etc. as well worked out of the box.
Of course it’s my experience, and my view was shared by about basically everyone I know who ran Linux on the desktop back then. I didn’t say it was terrible or your overall point is wrong. I just don’t believe that many people thought that everything was just fine and worked by default for the majority.
Maybe I’m focusing too much on getting stuff to run at all (which sucks if anyone changed anything in the kernel or in general upstream), and you’re focusing too much on problems today. It’s never perfect ;)
Now that KDE is stable again, I think we’re just about back to where we were 20 years ago. Only it’s Zoom instead of Skype. And nVidia drivers are still buggy. :)
The more things change….
It is?
On Debian, at any rate!
How many fixes have they pushed out to 6 so far?
https://en.wikipedia.org/wiki/KDE_Plasma_6#Releases
Plus 6.2.1 and now 6.2.2.
I think they are up to 14 or 15 in 10 months; that’s heading for 50% more than a monthly release cycle.
To misquote Douglas Adams: “this must be some strange new usage of the word ‘stable’ that I’m not familiar with.”
We just had it for package managers instead
Did we? And if so, did it stop?
I don’t think “instead” is the correct term, when we now have Nix vs Snap vs Flatpak vs Docker Image vs traditional package managers.
Don’t forget AppImage. :-)
And GNUstep .app bundles. And there’s 0install as well, but nobody uses that.
I agree a bit, but I feel like Linux back then was generally more work - if nothing else, it required a lot more understanding of how it worked.
Very few people back then were running Linux in a VM, and they certainly weren’t using WSL or a container; most people had to install it on real hardware, and there was usually a bit of a learning curve just getting the system to boot into Linux for the first time and getting drivers set up.
I’m currently running Debian on an old MacBook Pro, and it reminds me a lot of using Linux 20 years ago. Everything’s working: I can video chat with Microsoft Teams, I have accelerated 3D graphics (with NVidia until a few months ago), etc. But it was work to get it up and running. Proprietary drivers had to be tracked down, some special kernel modules had to be built from source, some magic incantations had to be added to the kernel command line, etc.
Nowadays when I have to do that, it’s a real inconvenience - 20 years ago it was just expected.
Audio works better today than it ever has on Linux.
There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.
Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?
That makes sense. I think the reason is the same as it was for RPM back then, though: there’s stuff that big companies introduce and the smaller players kind of have to adapt to, because being able to pay means you can outpace them. Some people aren’t happy about that; it might be why they switched away from Windows, for example. While I think a lot of people are fighting some ideological fight with systemd as the target, I’d argue that even the “losers” will give you a reason. Whether it’s a good reason or not is a different question, of course.
Audio and video are my primary ones. I am mainly on Arch and assumed I had made mistakes, only to find that when I get a work/client laptop with Ubuntu etc., it also has issues, even though I am not sure they are related. Different OS, different issues.
Most recent: Manjaro, on a ThinkPad T14s, out of the box. My windows move to a different screen simply because my monitor goes to sleep. Sometimes playing videos on YouTube freezes the screen they’re playing on for a couple of minutes. Switching my audio output works sometimes, sometimes not.
I have had the freezing on Ubuntu (which was the company standard) before. Instead of stuff moving to the other screen, I had instances where I got graphical artifacts. And instead of the audio output not switching when explicitly selected, I had issues with it not switching when I started the system with headphones already plugged in.
20 years ago I was able to get stuff done without such issues.
I also don’t have issues on other OSs, not on Windows, not on OpenBSD. I checked during debugging.
I am not the only one with those issues, but the fixes don’t work. No specific errors in journalctl/dmesg. People have been reporting these issues, of course. Some had other causes; some changed window manager, some switched hardware, some switched between Wayland/Xorg (both ways, actually), etc.
I have hopes that these will be fixed eventually, but the whole point of the above was that for my use cases 20 years ago the average Linux distribution of the time did a better job of what I expect. Of course the story might be different for other people, but I don’t think I need to mention that.
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
I’m using the word ‘we’ here because obviously I had this approach at the time too (admittedly a few years later, being a bit younger). I’m a bit ashamed of the approach I had back then, and today I deeply reject this way of behaving towards a public who often use IT tools for specific needs and who shouldn’t become dependent on a kind of IT support that isn’t necessarily available elsewhere.
Who are we to put so much pressure on people to change almost their entire digital environment? Even more so at a time when tools were not as widely available online as they are today.
In short, I’m quite fascinated by those who* are proud to have done this at the time, and still are today, even in the name of ‘liberating’ (often in spite of themselves) users who don’t really understand the ins and outs of such a migration.
[*] To be clear, given the tone of the blog post, I’m not convinced that its author is one of them!
Can we please stop throwing around the word “toxic” for things that are totally normal human interactions? Nobody is obliged to do free work for a product they neither bought themselves nor use nor like.
The “never talk to me about your computer again if you don’t run it the way I tell you to” part is, IMO, not normal or nice. I’m not sure I’d have called it toxic, but I’d have called it unpleasant and insensitive.
Of course nobody is obliged to do free work for a product they don’t purchase, use or like. That’s normal. But you can express sympathy to your friends and family who are struggling with a choice they made for reasons that seemed good or necessary to them, even if you don’t agree that it was a good choice. It’s normal to listen to them talk about their challenges, etc., without needing to solve them yourself. You can even gently remind them that if they did things a different way you could help, but that you don’t understand their chosen system well enough to help them meet their goals with it.
The problem is telling a friend or loved one not to talk to you about their struggles. Declining to work on a system you don’t purchase, use or like, is of course normal and not a problem.
I’ve used Linux on my own machines exclusively since 1999, and when I get asked to deal with computer problems (that aren’t related to hardware, networking or “basic computer literacy” skills) I can’t help with, I’ll usually say something along the lines of “you know, you actually probably know more about running a Windows machine than I do” - which doesn’t usually get interpreted as uncaring or insulting, and is also generally true.
If you buy a car that needs constant repairs, you get rid of it and buy something else that doesn’t. There is no need to sit with family/friends and discuss their emotional journey working with a Windows computer. It is a thing. If it is broken, have it repaired or buy something else.
You might. Or you might think that even though you’ve had to fix the door-closing sensor on that minivan’s automatic door 6 times now, no other style of vehicle meets your family’s current needs, and while those sensors are known to be problematic across the entire industry, there’s no better move for you right now. And the automatic door-closing function is useful to you the 75+% of the time that it works.
And you still might vent to your friend who’s a car guy about how annoying the low quality of the sensor is, or about the high cost of getting it replaced each time it fails.
Your friend telling you “don’t talk to me about this unless you suck it up and get a truck instead” would be insensitive, unpleasant and might even be considered by some to be toxic.
It’s not an emotional journey. You’re not asking your friend to fix it. You’re venting. A normal response from the friend would be “yeah, it’s no fun to deal with that.” Or “I’d know how to fix that on a truck, but I have no idea about a minivan.”
–
edit to add: For those who aren’t familiar with modern minivans, they have error prone sensors on the rear doors that are intended to prevent them from closing on small fingers. To close the doors when those fail, it’s a cumbersome process that involves disabling the automatic door function from the driver’s area with the car started, then getting out and closing the door manually. It’s a pain, and if your sensor fails and your family is such that you use the rear seats regularly, you’ll fix it if you value your sanity.
“Tired of being an unpaid Microsoft support technician…” - no, it’s not venting.
As a sometimes erstwhile unpaid support technician, I vehemently disagree.
I fully admit that sometimes I stepped into that unpaid support technician role when I could have totally, in a kind, socially acceptable way, said “Wow, it’s miserable that your computer broke. You should talk to {people you bought it from}. I can tell you a lot about computing in general, but they’ll know a lot more about Windows than I would.”
And it would’ve been OK, because the people telling me about their problems were mostly venting, not really looking for a solution from me.
But as a problem solver, I’m conditioned to think that someone telling me about an issue is looking for a solution from me. That’s not so; it’s my bias and orientation toward fixing this kind of thing that makes me think so.
Thank you, you’ve put into much better words what I wanted to say than the adjective ‘toxic’, which was the only one I had to hand when I wanted to describe all this.
How on earth could it be considered toxic to refuse to support something that is against your values, that requires a lot of work from you, and that is unpaid, while still offering to provide a solution to the initial problem?
All the people I’ve converted to Linux were really happy for at least several years (because, of course, I was not migrating someone without a lot of explanations and without studying their real needs).
The only people who had problems afterward were people who had another “unpaid Microsoft technician” doing stuff behind my back. I mean, I was once called by an old lady because her Linux was not working as she expected, only to find out that one of her grandchildren had deleted the Linux partition and done a whole new Windows XP install without any explanation.
I think there are three aspects to this:
First of all it is obviously your choice whether you want to give support for a system you don’t enjoy and may not have as much experience with. Especially when you could expect the vendors of that system to help, instead of you.
But the second part is how you express this: you are, after all, the expert being asked to provide support, and so your answer might lead them down a route where they choose Linux, even though it is a far worse experience for the requirements of the person asking for help.
The last point follows from the second: you have to accept that, for those people, installing Linux is not something they can support on their own. If they couldn’t fix their Windows problems, installing Linux will at best keep things at the same level. Realistically they now have n+1 problems. And now they are 100% reliant on you, the single Linux expert they actually know for their distribution. And if you’re not there, they are royally fucked with getting their damn printer running again. Or their nVidia GPU freezing the browser. Or Teams not working well with their camera. In another context you could say you secured your job. If only because updates on Windows at least always happen, which is just not true on Linux.
I have seen people with a similar attitude install rolling releases for other people while disabling updates for more than 6 months, because they didn’t have the time to deal with all the regular breakage. And yes, that includes the browser.
And the harsh truth is that for many people, that printer driver, MS Office, Teams + Zoom, and the camera are the reason they have the computer in the first place. So accepting their needs can include “Sorry, I am not able to help with that” while also accepting that even mentioning Linux to them is a bad idea.
If I had that attitude towards my wife I would end up very principled and very single.
(Context: she is blind and needs to use Windows for work. I also have to use Windows for work)
I also agree with your interpretation a lot more but I doubt the author would mean that quite so literally.
Why on earth should the author be doing free tech support for people on an OS that they didn’t enjoy using?
Because it’s a nice thing to do for your family and friends and they’ll likely reciprocate if you need help with something different. Half of the time when I get a “tech support” call from my aunt or grandparents, it’s really just to provide reassurance with something and have a nice excuse to catch up.
Maybe we had different experiences.
Mine was of wasting hours trying to deal with issues with a commercial OS because, despite paying for it, support was nonexistent.
One example: Dell or Microsoft (unsure of the guilty party) pushed a driver update that enabled power saving on WiFi idle by default. That combined with a known bug in my MIL’s WiFi chipset, where it wouldn’t come out of power-saving mode. The end result was the symptom “the Internet stops working after a while but comes back if I reboot.”
Guess how much support she got from the retailer who sold her the laptop? Zip, zero, zilch, nada.
You’re not doing free technical support for your relatives, really: you’re doing free technical support for Dell, and Microsoft, and $BIG_RETAILER.
When Windows 11 comes around (her laptop won’t support it) I’m going to upgrade the system to Mint like the rest of my family :) If I’m going to donate my time I’d rather it be to a good cause.
Yes, that was never my experience, and if it had been I would be inclined to agree with you. These days I hear more of “why did I run out of iCloud storage again” or “did this extortion spammer actually hack my email,” which I find less frustrating to answer :)
Yeah it doesn’t matter for generic tech support, in my experience, what OS they’re running.
It’s just the rabbit holes where it’s soul destroying.
Another example was my wife’s laptop. She was a Dell XPS fan for years, and ran Windows. Once again a bad driver got pushed, and her machine took to blue-screening every few minutes. We narrowed it down to the specific Dell driver update. Fixed it by installing Mint :)
Edit: … and she’s now a happy Ryzen Framework 13 user. First non-XPS she’s owned since 2007.
Ugh. It’s not “toxic” to inform people of your real-world limitations.
My brother-in-law is a very experienced mechanic. But there are certain car brands he won’t touch because he doesn’t have the knowledge, equipment, or parts suppliers needed to do any kind of non-trivial work on them. If you were looking at buying a 10-year-old BMW in good shape that just needs a bit of work to be road-worthy, he would say, “Sorry, I can’t help you with that, I just don’t work on those. But if you end up with a Lexus or Acura, maybe we could talk.” He knows from prior experience that ANY time spent working on a car he has no training on would likely either result in wasted time or painting himself into an expensive corner, and everyone involved getting frustrated.
Similarly, my kids would prefer to have Windows laptops, so that they could play all the video games their peers are playing. However, I simply don’t know how to work on Windows. I don’t have the skills or tools. I haven’t touched Windows in 20 years and have forgotten most of what I knew back then. I don’t know how to install software (does it have an app store or other repository these days?), how to do backups, how to keep their data safe, or how to fix a broken file system or shared library.
But I can do all of these things on Linux, so they have Linux laptops and get along just fine with them.
Edit: To color this, when I was in my 20’s, I tried very hard to be “the computer guy” to everyone I knew, figuring that it would open doors for me somehow. What happened instead was that I found myself spending large amounts of my own free time trying to fix virus-laden underpowered Celerons, and either getting nowhere, or breaking their systems further because they were already on the edge. Inevitably, the end result was strained (or broken) relationships. Now, when I do someone a favor, I make sure it is something that I know I can actually handle.
But he didn’t force anyone; he clearly says that if those people didn’t want his help, he could just leave things the way they were. To me that’s reasonable: you want my help, sure, but don’t make me do something I’m personally against. It’s like being asked, while working in a restaurant, to prepare meat dishes when you’re a vegetarian, except that my example is about work and his story is about helping someone, so there’s even less reason to do it against his own beliefs.
From my experience being an unpaid support technician for friends and family, that’s the only reasonable approach. I had multiple situations where people called me to fix the result of someone else’s “work” and expected me to do it for free. It doesn’t work that way. Either I do it for free on my own terms, or you pay me the market rate.
Some examples I remember offhand. In one instance, I tried to teach a person with a malware-infested Windows some basic security practices, created an unprivileged account, and told them how to run things as administrator if they needed to install programs and so on. A few weeks later I was called to find the computer malware-infested again, because they asked someone else to help and he told them that creating a separate administrator account was “nonsense” and gave the user account administrator rights. Well, either you trust me and live more or less malware-free or you trust that guy and live with malware.
In another instance, I installed Linux for someone and put quite some effort into setting things up the way the person wanted. Some time later, they wanted some game but called someone else instead of me to help install it (I almost certainly would be able to make it run in Wine). That someone wiped out all my work and installed Windows to install that game.
People expecting you to be their personal IT team for free just because you “know computers” is just as disrespectful. I don’t think it’s unfair to tell people, “No, if you want help with your Windows system, you need to pay someone who actually deals with such things.”
This is looking at things through the current context. Windows nowadays is much more secure, and you can basically leave a Windows installation to a normal user and not expect it to explode or anything.
However, at the time Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware. Most users ran with admin accounts, and it was really easy to get malware installed by installing a random program, because things like binary signatures didn’t exist yet. There was also no anti-malware installed by default in Windows, so unless you had some third-party anti-malware installed, your computer could quickly become infested. And you couldn’t just refresh your installation by clicking one button; you had to actually format and reinstall everything (which was annoying because drivers were much less likely to be included in the installation media, so you needed another computer with an internet connection, since a freshly installed Windows wouldn’t have any way to connect).
At that time, it made much more sense to try to convince users to switch to Linux. I did this with my mom, for example, switching her computer to Linux since most of what she did was access the internet. Migrating her to Linux reduced the amount of support I had to do from once a week to once a month (and instead of having to fix something, it was in most cases just updating the system).
It should be added that if you helped someone once with their Windows computer, you were considered responsible for every single problem happening on that computer afterward.
In some cases it was an even bigger problem (I remember a computer infected by malware that dialed a very expensive phone line all the time; that family had a completely crazy phone bill and no idea why. Let me assure you that they were really happy with Linux for the next 3 or 4 years).
Very much that. It was never the user’s fault: even if you left the computer in pristine condition, if they had an issue in the same week it was your fault and you needed to fix it.
At the same time, however, it was also much more likely that you needed to deal with an application that would only run on windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on windows (remember winmodems? scanners sucked, too, and many printers were windows GDI only), etc.
So convincing someone to use Linux was more likely to cause them a different kind of pain.
Today, most hardware works reasonably with Linux. Printers need to work with iPhones and iPads, and that moved them off the GDI specific things that made them hard to support under Linux. Modems are no longer a thing for most people’s PCs. Proton makes a great many current games work with Linux. Linux browsers are first class. And Linux software handles most common file formats, even in a round trip, very well. So while there’s less need to switch someone to Linux, they’re also less likely to suffer if you do.
That said, I got married in 2002. Right after I got married, I got sent on a contract 2500 miles away from home on a temporary basis. My wife uses computers for office software, calendar, email, web browsing and not much else. She’s a competent user, but not able to troubleshoot very deeply on her own. Since she was working a job she considered temporary (and not career-track) at home, she decided to travel for that contract with me, and we lived in corporate housing. Her home computer at the time was an iMac. It wasn’t practical to bring that and we didn’t want to ship it.
The only spare laptop I had to bring with us, so she’d have something to use for web browsing and job hunting on the road, didn’t have a Windows license current enough to be trustworthy, so I installed Red Hat 7.3 (not Enterprise!) on there for her. She didn’t have any trouble. She’d rather have had a Mac, but we couldn’t reasonably have afforded one at the time. It went fine, but I’d never have dared to try that with someone who didn’t live with me.
Yes, but it really depends on the kind of user. I wouldn’t recommend Linux unless I knew that every need the user had would be covered on Linux. For example, for my mom: we had broadband Ethernet at the time, our printer worked better on Linux than on Windows (thanks, CUPS!), and the rest of her tasks were basically done via web browser.
It also helped that she lived with me, for sure ;).
Try to avoid running two processes in the same pod. There are lightweight images that take only a few MBs to run. You won’t need a full-fledged init system.
NodeJS proposes tini for running NodeJS. Tini is an init system for containers. You can use tini to run multiple processes inside a pod without having to handle signal propagation yourself.
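For anyone who hasn’t used it, here’s a minimal sketch of the pattern – the Alpine base and the start.sh launcher script are my assumptions, not anything from the article:

    # Dockerfile sketch: tini runs as PID 1, forwards signals, reaps zombies
    FROM node:20-alpine
    RUN apk add --no-cache tini
    COPY start.sh /start.sh
    RUN chmod +x /start.sh
    # tini becomes PID 1 and execs our script as its child
    ENTRYPOINT ["/sbin/tini", "--"]
    CMD ["/start.sh"]

Whatever start.sh spawns then gets proper signal delivery and zombie reaping without you writing any of that plumbing yourself.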
Sometimes, you can’t avoid it.
Some applications are split in multiple processes and still need to be in the same PID namespace or even on the same filesystem to work.
For example, I have an NGINX Docker image that runs several cooperating processes, one of which periodically runs nginx -s reload. This is 3 processes that are tightly coupled. While it is possible to put them in separate Docker images, it would make orchestrating them overly complex.
The Docker image is a single “deployment unit”. Some “units” are made of more than one process.
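To make that concrete, here’s a hypothetical shape for such a unit – not the actual image described above; the hourly reload is an invented stand-in for whatever triggers the reload in practice:

    #!/bin/sh
    # entrypoint.sh -- one container, three tightly coupled processes:
    # nginx, a periodic reloader loop, and this shell acting as supervisor
    nginx -g 'daemon off;' &
    NGINX_PID=$!

    # hypothetical helper: ask nginx to re-read config/certs every hour
    while sleep 3600; do
        nginx -s reload
    done &

    # exit when nginx exits so the container dies with its main process
    wait "$NGINX_PID"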
The reason I care about running multiple processes in a single container is that there are numerous hosting providers these days that charge on a per-container basis.
Google Cloud Run and https://fly.io/ are two examples that I use a lot myself already.
They’re not exactly the same as regular Docker containers - they implement patterns like scale-to-zero, and under the hood they may be using some custom platform such as Firecracker - but the interface they provide to you asks you to build a container to run on them using a Dockerfile.
Very annoying that hosting providers have decided that “container” means the same thing as “VM”, when it’s much more about running a process in a restricted environment, leading to people having to optimize for this.
Granted, Docker also makes “just use one container per process” a whole thing. I should be able to define a whole process tree in a single Dockerfile (I believe it’s at least possible to get multiple Docker images out of one Dockerfile now). Docker Compose being its own separate thing, instead of a core part of Docker, makes the whole setup oriented towards making life more legible for infra providers, rather than being a nice layer over cgroups etc.
I presume you mean multiple processes in the same container, not a pod? A pod running multiple containers, and thus processes, seems to be perfectly normal.
nit: I believe the link should be https://github.com/krallin/tini
Yes, that’s correct, thanks.
I was wondering why the author was putting multiple processes in one container as well. I respect “reasons” as a valid reason, and it was a cool article, but the precise “why” would also be nice.
A recent example: I’ve been making a Dockerfile for MTProxy, Telegram’s official proxy implementation. The proxy needs to be restarted often with fresh configuration downloaded from Telegram’s servers, and this is not done automatically.
This could probably be done by patching the C implementation to do it in-process, but the appeal of the Dockerfile is that it doesn’t change anything in the official implementation; it just compiles it and uses it. That makes it much easier to verify that I haven’t tampered with the proxy: a git diff from upstream shows that the only changes are the Dockerfile and the script that manages the restarts and configuration updates.
There are many more examples, so, “reasons” is a good summary :)
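For flavor, a rough sketch of the shape such a wrapper could take – the config endpoint is the one I believe the upstream README documents, the daily restart interval is arbitrary, and the proxy flags are deliberately left out:

    #!/bin/sh
    # run-proxy.sh -- hypothetical restart-and-refresh loop around the
    # unmodified upstream binary
    while true; do
        # fetch fresh proxy configuration from Telegram
        curl -sf https://core.telegram.org/getProxyConfig -o proxy-multi.conf

        # PROXY_FLAGS holds the usual mtproto-proxy arguments (elided here;
        # see the upstream README for the real flags and secrets)
        # run the proxy in the foreground, but cap it at one day so the
        # next loop iteration picks up the fresh config
        timeout 86400 ./mtproto-proxy $PROXY_FLAGS proxy-multi.conf
    done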
https-portal is a good example: it’s an HTTPS proxy that uses s6-overlay (mentioned in the article as an alternative solution) to run nginx alongside cron for automatic certificate renewal.
s6-overlay itself explains the motivation for this some more.
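For anyone who hasn’t seen it: in s6-overlay’s classic layout, each long-running process is an executable run script under /etc/services.d/<name>/, and the overlay supervises them all. A minimal sketch of the nginx half:

    #!/bin/sh
    # /etc/services.d/nginx/run -- s6-overlay starts and supervises each
    # service defined by a run script like this one
    exec nginx -g 'daemon off;'

The cron side would be a sibling /etc/services.d/cron/run script, and s6 restarts either one if it dies.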
I used and loved KDE in the 3.x days for its sheer power and flexibility. At version 4, they decided to abandon 3.x and spend several years rewriting KDE more or less from scratch. This forced me over to MATE (the GNOME 2 fork) and other GTK-based DEs for a good long while, but I would still give KDE an honest try once every year or two. And once every year or two, I would be disappointed by missing functionality or show-stopping bugs.
In the last few releases of the 5.x series, I am happy to report that things have improved dramatically. I’ve been running whatever shipped with Debian 12 since before it was officially released, and it’s been nothing but solid and a joy to use. Multi-monitor setups, Bluetooth device management, weird networking configs: it’s happy to do it all.
I particularly love that KDE has learned its lesson about trying to reinvent the basic desktop computing paradigm. Instead, they doubled down on it and have (finally) embraced iterative improvements. EVERY other Linux DE that has gone down the path of reimagining how people are going to use their computer has either failed, or ended up with a DE so simple only a child can use it.
If any KDE devs are reading this, know that your core user base is with KDE precisely because you have not given in to the flat, button-less, whitespace-everywhere, mobile-all-the-things mentality that is so very trendy these days.
Every time I try KDE, I hit showstopper bugs, usually random crashes or weird UI hangs that require restarting the desktop session. It doesn’t really matter how polished and featureful the thing is if it isn’t usable.
This sounds a bit to me like graphics driver problems. I had all of that and more with an Nvidia card (different but equally critical issues on both X and Wayland), but with an AMD one for the last ~6 months I’ve not had one glitch.
edit: plasma6 on NixOS in both cases; Wayland working fine for me with AMD so haven’t had to try X.
I lurk on /r/debian and every time someone asks for help with a weird video issue, there’s a better than even chance that they have an nVidia card.
I haven’t hit any major crashes, but every time I’ve used it I could rack up a laundry list of papercut bugs in fifteen minutes of attempting to customize the desktop. In mid 2022 on openSUSE Tumbleweed I was able to get the taskbar to resize a few pixels just by opening a customization menu, get widgets stuck under other widgets, spawn secondary bars that couldn’t be moved, etc.
Oh yeah. The exact same thing happened to me, not long ago.
All these features and customization are great in theory, but in practice it’s just needless complexity.
I find it really depends on the combination of distribution, X11 vs Wayland, and KDE version. I’ve had good luck with Debian – an older version of KDE (5.27), but quite stable. I tried Plasma 6 on both Fedora and Alpine and found it still a bit buggy.
I’m looking forward to trying out cosmic desktop once it is stable.
One more thought on this: a lot of people say that less config is more user-friendly, and… I think it is more support-friendly. I know a few people who barely do computers at all, and one of them said he just wanted YouTube on this thing, so I put Kubuntu on it; YouTube works. I came back a couple months later and saw he had changed all kinds of stuff and thought it was cool. Wallpapers, sizes, colors, widgets.
He enjoyed fooling around with the config options. Took me a sec to get oriented though when I had to look at it again!
I have a family member of 80+ years whom I migrated to Kubuntu before Windows 7 support ended. The downside is that she likes to configure her desktop into oblivion. It isn’t the first time I’ve simply deleted everything and reset it after a year so that I get back a working home menu.
But overall she is happy, and I don’t have to worry about random .exe and .xlsx files. In hindsight I could have bought her a MacBook instead, but there was no M1 back then and the UI is far too different from Windows; I don’t think it would have survived very long. Especially since incremental (and locked-down) backups have saved my sanity at least once per year.
That reminds me… I have children and one of the benefits of living in my household is that children get free computers! But the catch is that they have to run Linux because I don’t know how to effectively manage Windows and Macs are too expensive to risk it. My daughter is in high school and is comfortable with technology but is not what you’d call “a computer person.” Even so, she’s been running KDE on Debian for 1.5 years with practically zero assistance from me.
I have one sister who got used to Linux because that’s what I gave her. She now has macOS, Windows, and Linux available, but apparently she just feels more comfortable with Linux.
a few paragraphs later
This is what taskbars are for!
In any case, I’m not a KDE user anymore (I make all my own stuff nowadays with the old Blackbox window manager as a base), but I feel much the same way as this author about the alternatives. When you have your own way of doing things, the others keep wanting you to do it their way… and the worst part is that their way changes almost randomly on you. Some update comes down, things change, and you’re just kinda stuck with it. Freedom from that is very appealing and the main reason why I use Linux at all.
I feel exactly the same way. It’s one thing for a developer to say, “I designed this software to work the best I know how, contributions or forks are welcome,” and quite another to say, “I’ve designed the objectively best UI and everyone else is wrong.” The latter bit of snobbery unfortunately seems to be the trend these days.
KDE stands out among desktop environments for being willing to meet users where they are instead of demanding that users adapt themselves to a specific arbitrary workflow.
Technium by Kevin Kelly (founder of Wired) is worth a browse: https://kk.org/thetechnium/scenius-or-comm/
dimden (founder of nekoweb.org) has an inspiring site: https://dimden.dev
Browsing the nekoweb sites tagged programming might yield some interesting stuff: https://nekoweb.org/explore?page=1&sort=lastupd&by=tag&q=programming

Not a site, but The Night Watch is a must-read: https://www.usenix.org/system/files/1311_05-08_mickens.pdf
https://biodigitaljazz.net is my attempt at a pure hacker site. I.e. I have absolutely no goals for this site other than to create interesting things and blog about things that are interesting to my hacker brain.
Back when I was goofing around with programming Arduino Uno in assembly I stumbled on a wonderful blog dedicated to AVR assembly programming. Unfortunately it seems gone now! But Wayback Machine to the rescue: https://web.archive.org/web/20230918214931/http://www.avr-asm-tutorial.net/index.html
This triggered my own personal AUM rule… Always Upvote Mickens