It was about 20 years ago when I started my father on Debian. He needed a web browser, a spreadsheet, and a word processor. And a solitaire game.
He’s been through four computers, each time getting more powerful and cheaper. He still uses Debian exclusively. He uses a web browser, a spreadsheet and a word processor.
I used to upgrade his software when I came over to visit; these days there’s a WireGuard tunnel to let me SSH in.
I started my dad on Linux too, around 20 years ago. His work PC had an unstable Ethernet card; it was a driver issue, and the IT people didn’t help. He was really annoyed: you would be using the PC and the network would freeze randomly. The manufacturer replaced the card, but the issue persisted and everyone shrugged.
We decided to try Linux to see if it helped and it did. He was really delighted. Everything ran so smoothly. He felt empowered. He still uses Linux these days, after retirement. He also learned LyX, which is super useful as a LaTeX frontend.
Twenty years ago, I had an epiphany: Linux was ready for the desktop.
(audience laughs)
(…)
With GNOME 2.0, I felt that Linux was ready for the desktop. It was just really hard to install. And that could be fixed.
Ironically, eleven years ago, the original author of GNOME left us with this little bit of a confession:
Without noticing, I stopped turning on the screen for my Linux machine during 2012. By the time I moved to a new apartment in October of 2012, I did not even bother plugging the machine back and to this date, I have yet to turn it on.
Even during all of my dogfooding and Linux advocacy days, whenever I had to recommend a computer to a single new user, I recommended a Mac. And whenever I gave away computer gifts to friends and family, it was always a Mac. Linux just never managed to cross the desktop chasm.

https://tirania.org/blog/archive/2013/Mar-05.html

(I don’t necessarily agree, it’s just stuck in my head)

I don’t know what happened to Miguel. It’s a strange tale.
He worked with Microsoft technology for a long time, and even brought a lot of it to Linux. The Linux community rejected it eventually. He must have felt like he was fighting an uphill battle. What do you find strange about it? It would have been interesting to have a lively C# ecosystem of Linux desktop software, instead of GNOME adopting JavaScript, as I believe they have.
I think 20 years ago the Linux desktop worked better than today. KDE 3 was really nice. Audio still worked. WiFi wasn’t as much of a must-have yet. There were some companies porting games to Linux. Distros weren’t constantly tracking and trying to monetize you. There was no in-fighting regarding init systems. You didn’t have that mess of package manager + snap + flatpak. Third-party stuff usually compiled by default with ./configure && make && make install. Even Skype just worked. The community was made up of far fewer self-promoters; instead you got quick, competent help, just as a Windows user did at the time.
People’s main complaint was that there wasn’t any good video editing or graphics software. Wine worked okay-ish (for some things one used commercial offerings).
The only thing that was a bit messy, depending on the distribution, was NVIDIA drivers. Maybe Flash videos, but I think they actually worked.
It was even so good that, without much technical background or even English skills, one could not just use Linux, but take an old computer, install NetBSD, and happily use it on the desktop.
I think the average today (let’s say Ubuntu and Manjaro, for example) is that you’ll spend months gluing something together that you can maybe live with. I think parts of the Linux desktop are built on design principles that are technically nice but give users a hard time. An example: creating something like a shortcut used to be easy in desktop environments. Today there is a standard for applications, which is nice, but it bugs the user who just wants to create a shortcut. I am not sure what happened to GUIs for creating .desktop files.
I don’t know if it’s fair to say that it was better. Lots of modern problems have their back-in-the-day equivalents. You didn’t have to fight with package manager + snap + flatpak but pre-yum RPM was a pain. Lots of third-party stuff was compiled with imake and haha good luck running it on anything except Red Hat Linux 9.0 or whatever. Skype just worked insofar as the microphone worked, which wasn’t always a given.
What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows. We are, at best, about 10-15 years into the lifetime of current desktop technologies which is why, adjusting for the significantly increased complexity, in terms of stability and capabilities, we’re not much further than where we were in 2007 or so.
I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.
20-25 years ago cool kids dreamed of writing a better window manager, or file manager, or browser, because that’s what was hot. 10-15 years ago cool kids were writing phone apps; if they used Linux, they wrote web apps. So there weren’t as many fresh ideas (and fresh heads) going into desktop development. That made desktop development both slower (platform complexity grew in every aspect, from font rendering to hardware-accelerated drawing, and much faster than spare development time could keep up with) and more divisive.

You’ve put this so well, thank you
You didn’t have to fight with package manager + snap + flatpak but pre-yum RPM was a pain.
It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.
Lots of third-party stuff was compiled with imake and haha good luck running it on anything except Red Hat Linux 9.0 or whatever.
Huh? What software?
Skype just worked insofar as the microphone worked, which wasn’t always a given.
As mentioned, I never had a problem with that, ever. I mean it. I met my girlfriend online during that time. Skype calls, even very long ones, never had a problem, and she was the first person I used Skype with. Since that’s how I got to know my girlfriend, I remember that time vividly. Meanwhile I constantly run into oddities with Discord, Slack and Teams, if they even work at all. Again, not even using Bluetooth. Just an audio jack, so the setup should be simple.
I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.
Not intending to burn existing technology. I have no reason to. I’m just stating that things that used to work don’t work anymore. That can have lots of (potentially good) reasons. I just also think that the idea that software is magically better today is wrong, and a lot of “you just remember wrong” claims are very, very wrong. Same thing with games, by the way; I put that to the test. Claims like “you are just older, and as a child things are more exciting” are simply wrong in my case. Same with going back to old utilities. I have no interest in putting new software down in any way. I hate that old-vs-new software farce. It’s simply whether stuff works or doesn’t.
I’d argue that there is currently not much going on on the Linux desktop side. There are good reasons. People don’t use desktops as much anymore, and desktops aren’t their main focus. People have phones, apps, smart TVs, etc. Lots of people who would have run Linux back in the day now run macOS. It is well known that fewer people work on desktop environments and, as stuff gets more complex, one needs to support a lot more things. On top of that, all the development moved into the browser over the last two decades. People don’t really create desktop applications, and by extension desktop environments, anymore.
So of course, if developer focus shifts, other stuff will get better in the open source world. Non-tech people can run LLMs on their home PCs if they invest 15 minutes. People share their video collections. There is open source social media that is actually used. Graphics software is a ton better. With Godot there is a really great game engine. People create programming languages. LLVM is amazing. There are finally hobby OSs (SerenityOS, etc.) again. All of these are really great.
It’s just that my personal desktop experience was better back then, and I think a really big reason for that is that the focus of a desktop was narrower.
It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.
You can always not use Snap or Flatpak today. That doesn’t mean no one does, just as it didn’t mean no one used RPM back in the day. I don’t use either, and I’m definitely happier with my packaging experience than back in 2004-2006-ish (which I’m guessing is the period you mainly have in mind, based on Skype?). (Edit:) I didn’t use RPM-based distros back then, either, so it’s not because of yum :-).
Huh? What software?
The ones I remember most vividly are Maya, Matlab, and… pretty much any GIS software at the time. Same as above – maybe you didn’t use them, doesn’t mean no one needed them. Any desktop will work flawlessly if you only pick applications that happen to work flawlessly :-).
That makes sense. You are right, but you brought up RPM, that’s why my response was about how I never understood it. :-)
Probably I got lucky with software then. I wasn’t using any of those. I got into GIS a bit later, so it looks like I just avoided that, or package managers abstracted it away for me.
I think this ritual burning of all existing technology …
Not intending to burn existing technology.
I think x64k was referring to the desktop environment developers as “burning … existing technology” by rewriting their components rather than polishing the old, working ones that you liked.
For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release,
The interesting thing is that macOS has still been using the same core tech for the desktop environment during that time period (Quartz, Cocoa, etc.), though recently things like SwiftUI were introduced. I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their resumés.
Though I think that an important difference between KDE and GNOME is that KDE development is also more driven by Qt. At some point the owner of Qt (I think it was still Trolltech at the time) said: Qt Widgets are now legacy and won’t see new development, and KDE had to rebase on QML/Qt Quick, resulting in Plasma.
I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their resumés.
It’s a combination of factors, really. Nautilus, for example, was literally written by ex-Apple people, and had many similarities both to file managers of that era and to Finder in particular. So there was certainly no shortage of people to get it right the first time.
IMHO a bigger factor, especially in the last 10-15 years, was that FOSS desktop development is a lot more open-loop. People come up with designs and they just run with them, which is a lot easier to do when you don’t have to worry about supporting multiple install bases, paying customers ditching you, and so on.
That’s very useful in many ways but one unfortunate side-effect is that, for all their combative attitude towards closed platforms, major FOSS desktop projects relentlessly chase every fad in closed platforms, too, including the ones that just don’t make sense, or in interpretations that just don’t make sense, like app stores (and I’m not referring to Flathub here; ironically, I think that’s the only platform of this sort that actually makes some sense, at least there’s a special packaging and distribution technology behind it; I’m thinking more of things like the Ubuntu Software Center). App store-like platforms could fulfill a very useful social role, serving e.g. as platforms for donations, bug bounties, feedback etc. – but instead they act just like their closed-source counterparts, with nothing but votes and ratings, to the point where they’re just Synaptic with votes and weird package management bugs.
~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines, but I still had to manually write my touchpad config. For audio, ALSA was the thing, but also ARTS was a thing, and also ESD was a thing, and OSS was still a thing, and audio worked if you only had one program or sound server having exclusive control of your sound device because anything else was a slow descent into madness.
My goggles are absolutely not rose-tinted, and I’ll recommend a current-day bog-standard Ubuntu install any day of the week because of the sheer size of the user base and the amount of info/software targeted towards that ecosystem. And if you are tired of Canonical going their own way every other year, Debian is a fine replacement that mostly works the same way.
~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines
Oh, yeah, that was a major inflection point in the community as well: after that there was no point in running xf86config or xorgconfig, so it suddenly became impossible to shame the noobs who used those instead of editing XF86Config by hand like Real Men.
For audio, ALSA was the thing, but also ARTS was a thing, and also ESD was a thing, and OSS was still a thing, and audio worked if you only had one program or sound server having exclusive control of your sound device because anything else was a slow descent into madness.
ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling bug for newbies.
If ARTS couldn’t claim exclusive control over the soundcard – because, say, XMMS had exclusive control over it through its ALSA output plugin – it didn’t play anything, but it did continue to buffer whatever you sent to it, and would begin to play it as soon as it could claim control over the soundcard. I learned that when I began using Kopete (which, for a while, had the best Yahoo! Messenger support) on a fresh install. I hadn’t changed XMMS’ output plugin to ARTS, so none of the “Ping!“s made it to the sound card…
…until I stopped XMMS, at which point ARTS faithfully played every single Kopete alert it had received in the last hour or so (or however long its ring buffer was).
This was actually worse than it sounds – in this case it was just a particular configuration quirk, but the real problem was that not all software supported all sound servers, or at least not very well. E.g. gAIM (which later became Pidgin) supported ARTS but it was a little crashy, and in any case, ARTS support became mainstream among non-KDE software relatively late in the 3.x cycle. Even as late as 3.2, I think, it was just more or less a fact of life that you had one or two applications (especially games) where you kind of lived with the fact that they’d take over your soundcard and silence everything else for a while. Truly a golden age of desktop software :-).
Some things the younger generation of Linux users may not remember:
Wine ran a bunch of games surprisingly well, but installing games that came on several CDs (I remember Jedi Academy) involved an interesting gimmick. I don’t recall if this was in the CD-ROM (ever seen one of those :-)?) drivers or at the VFS layer, but in any case, pre-2.6 kernels took real care of data integrity, so, uh, you couldn’t eject the CD-ROM tray while the CD was mounted. And you couldn’t unmount it because the installer process was using it. At some point there was a userspace tool that took care of that (I forgot the name, this was 20 years ago after all). Before that, yep, you had to compile your own kernel with some cool patches. If you had the hard-drive space it was easier to rip the CDs (that was kind of magic; you could just dd or cat from /dev/cdrom to a file – see the sketch after this list) and mount all of them, but not everyone had that kind of space.
If you had a winmodem, you were usually doomed. However, “real” modems were really expensive and, due to the enormous success of winmodems, they got kind of difficult to find by the early ’00s.
Hardware support lag was a lot more substantial than today. People really underestimate how important community growth turned out to be. When I finally got a “real” computer I ran it with the hard drive in compatibility mode, because it took forever for Linux to get both proper SATA support and support for… I think it was ICH5?
Actually, I loved that behavior and would use it deliberately to queue things. I resisted moving to ALSA for quite some time because I liked how things blocked. Oh well.
(I started with Linux in 2004 btw, not before. It was solidly OK; I think I dodged much of the pain people talk about.)
Well, sure, it was fun if it was the IM client that blocked. Not as fun when it was an actually important alert, or when the browser queued some earbleed noise from a Flash intro, or when you couldn’t listen to music or watch a movie until you quit the offending app.
ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling bug for newbies.
ALSA softmixing STILL doesn’t work. I tried running a server-less setup a few years ago. Applications just took exclusive control anyway.
It works just fine, I still use it today. You might need to configure it though; distro configs based on PulseAudio tend not to enable it since they assume PA is doing it anyway.
If ALSA mixing is enabled, /etc/asound.conf will define things based on dmix (for playback) and dsnoop (for recording), and pcm.default will refer to that pseudo-device instead of the hardware.
It’s been a while so I don’t remember the details, but I think what happened was that apulse wasn’t really working, so I used the built-in ALSA support in Firefox (the code is still there last I checked, just disabled in the default build config), which took exclusive control.
I don’t know how that’s handled post-PulseAudio or how well it works. But I am 100% sure it worked. The only reason I stopped using it was that PulseAudio became a dependency pretty much everywhere so yanking it out and dealing with the fallout was about as much trouble as using it.
I find this to be just not true. While Linux today is undoubtedly composed of more complex subsystems like PipeWire and systemd, it lets you effortlessly use Bluetooth headsets, play high-end games with comparable performance, and even (and this was unthinkable back then) do music production with incredibly full-featured software. Maybe the simplicity of yore was enjoyable, but Linux today is a lot more capable.
I think you have a pretty skewed picture and I’m happy it worked that well for you back then.
I certainly had Linux on the desktop in the late 90s, but it just wasn’t great. Our shared computer pool at university (I started in 2003) worked perfectly fine, but it was curated hardware and some people put in some real effort for it.
I bought my first laptop in 2004; a friend had the predecessor model, which is why I knew it was relatively OK for Linux… and yet I moved to FreeBSD because of a couple of things (one of them WiFi). It just wasn’t great if you wanted it to “just work”[tm].
Compared to today: people were kinda surprised when I said that I have no sound on my desktop, although games run at 120 FPS out of the box with the 3070. Turns out it’s a commonly known problem with this exact mainboard chipset, and plugging in the only USB sound card I ever owned… it just works. All I’m saying is that I have not had proper problems (which weren’t solved easily) for about 10 years - but earlier than like 15 years ago, everything took a lot of time to get running smoothly… that’s my experience.
FWIW, I agree much more with your original post than with the comment I replied to.
I guess my main point is that while I’m not averse to configuring stuff, I’ve always held the view that you should be able to do it in a reasonable time with a modest amount of knowledge. And very often the drivers simply weren’t there, so without switching hardware you were just out of luck, and it was not rare.
And what makes you think that? Sounds a bit like an idle claim. ;)
I certainly had Linux on the desktop in the late 90s
20 years ago was 2004, not the 90s. I used Linux as my main OS back then. That was shortly before I had a long run of NetBSD. I double-checked some old emails and messages.
but it was curated hardware
Mine was a system from a local hardware store’s own brand. The NetBSD machine was a computer from a government organization sale.
Compared to today: people were kinda surprised when I said that I have no sound on my desktop
I do. Today audio burns through CPU cycles, is wonky, has buffer underruns out of the box on some systems, randomly kills YouTube, and comes out of my speakers rather than my headphones (connected via the audio jack, so not even Bluetooth) when I reboot. I never used to have any audio problems back then. Not with games, not with Skype.
games run at 120 FPS
Meanwhile my games (and I played a lot of games back then, being young) need stuff like gamemoderun to be playable. Back then Enemy Territory (with so many mods), Majesty, Neverwinter Nights, etc. worked out of the box.
Of course it’s my experience, and my view was shared by basically everyone I knew who ran Linux on the desktop back then. I didn’t say it was terrible or that your overall point is wrong. I just don’t believe that many people thought that everything was just fine and worked by default for the majority.
Maybe I’m focusing too much on getting stuff to run at all (which sucks if anyone changed anything in the kernel or in general upstream), and you’re focusing too much on problems today. It’s never perfect ;)
Now that KDE is stable again, I think we’re just about back to where we were 20 years ago. Only it’s Zoom instead of Skype. And nVidia drivers are still buggy. :)
I agree a bit, but I feel like Linux back then was generally more work - if nothing else, it required a lot more understanding of how it worked.
Very few people back then were running Linux in a VM, and they certainly weren’t using WSL or a container - most people had to install it on real hardware, and there was usually a bit of learning curve just getting the system to boot into Linux for the first time and getting drivers setup.
I’m currently running Debian on an old MacBook Pro, and it reminds me a lot of using Linux 20 years ago. Everything’s working - I can video chat with Microsoft Teams, I have accelerated 3D graphics (with NVidia until a few months ago), etc. - but it was work to get it up and running. Proprietary drivers had to be tracked down, some special kernel modules had to be built from source, some magic incantations had to be added to the kernel command line, etc.
Nowadays when I have to do that, it’s a real inconvenience - 20 years ago it was just expected.
Audio works better today than it ever has on Linux.
There was no in-fighting regarding init systems.
There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.
Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?
There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.
That makes sense. I think the reason is the same as it was for RPM back then, though. There is that stuff that big companies introduce, and the smaller players kind of have to adapt, because being able to pay means you can outpace them. Some people aren’t happy about that. It might be why they switched away from Windows, for example. While I think there are a lot of people fighting some ideological fight with systemd as the target, I’d argue that even the “losers” will give you a reason. Whether it’s a good reason or not is a different question, of course.
Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?
Audio and video are my primary ones. I am mainly on Arch and assumed I had made mistakes, only to find out that when I get a work/client laptop with Ubuntu, etc., it also has issues, even though I am not sure they are related. Different OS, different issues.
Most recent: Manjaro. ThinkPad T14s. Out of the box. My stuff moves to a different screen simply because my monitor goes to sleep. Sometimes playing videos on YouTube freezes the screen it’s playing on for a couple of minutes. Switching my audio output works sometimes, sometimes not.
I have had the freezing on Ubuntu (which was the company standard) before. Instead of stuff moving to the other screen, I had instances where I got graphical artifacts. And instead of audio output not switching when explicitly selected, I had issues with it not switching when I started the system with headphones already plugged in.
20 years ago I was able to get stuff done without such issues.
I also don’t have issues on other OSs, not on Windows, not on OpenBSD. I checked during debugging.
I am not the only one with those issues; however, the fixes don’t work. No specific errors in journalctl/dmesg. People have been reporting these issues, of course. Some had other causes. Some changed window managers, some switched hardware, some switched between Wayland/Xorg (both ways, actually), etc.
I have hopes that these will be fixed eventually, but the whole point of the above was that for my use cases 20 years ago the average Linux distribution of the time did a better job of what I expect. Of course the story might be different for other people, but I don’t think I need to mention that.
I installed Xubuntu on my partner’s laptop a while ago and they’ve been very happy with the switch, save for when the time came to update.
They’ve been using this laptop from 2013 with 4GB of RAM and a Windows 10 installation on an HDD, which was atrociously slow. It took several minutes to reach the user select screen, and after logging in it was 5 more minutes for Windows to stop spinning the disk before the system was usable.
After trying to alleviate the pain a couple times I figured it was time to move the system to an SSD and, after figuring out my partner’s needs and with their consent, I also installed Xubuntu on the laptop. Haven’t heard any complaints other than the occasional snag with sleep/hibernation, but since the laptop rebooted in a couple seconds they weren’t a big issue.
The big pain point in my experience is still the updates: the latest dist-upgrade from 22.04 to 24.04 failed partway through for some reason I haven’t yet determined, and left them with an unbootable system. I had to come in with an Ubuntu USB and chroot into the installation to make it finish the upgrade and fix some packages that were uninstalled for some reason. Now the laptop works fine but they’re experiencing random freezes (probably due to the nouveau driver or some faulty hardware). I could probably fix it but we’re kind of using it as an excuse to get a newer and less bulky laptop, so I guess that worked out in our favour :)
The next laptop will also have Linux on it, but I’m gonna install an immutable distro this time.
I also recently had an issue with my NixOS laptop after an upgrade, where it would freeze under high CPU load and wouldn’t come back from sleep (my fault for setting the kernel version to latest, lol), and I urgently needed it to work for an event I was attending later that evening. I really appreciated that I could boot into the previous generation that still worked and resolve the issue later.
Maybe I’m just high on the Nix pill, but I think immutable distros are a huge improvement in usability for everyone using Linux. Things still fail from time to time, and being able to temporarily switch back to a system that still works until the issue is fixed is one of the missing pieces to bringing Linux to the masses, at least in my opinion.
“I started to always have two CD-ROMs with me: the latest Knoppix and Debian Woody.”

To this day, I always have a bootable Ubuntu key on my keyring; you never know.
Me too! I was once in my university library with a broken Gentoo install and managed to save it before a deadline with a live USB that I only had on my keyring by coincidence. Now I always keep one on me (and no longer use Gentoo).
I’m on NixOS and it’s ruined my Linux experience (for the better). I no longer have to search “how to X on Y”, like “how to get PulseAudio working on Ubuntu”. In NixOS, you just set the right variables and carry on. I had no audio on my ThinkPad and just copied the audio lines from my NixOS config, and it just worked.
Yes. The nice thing about NixOS is that you can make dramatic changes to your configuration without fear of breaking things or worrying about leaving clutter behind. That’s hard to give up once you have enjoyed it!
PulseAudio is not that bad anymore. But, regardless, nixos-unstable already enabled PipeWire by default. Works well, latency is pretty low. CPU usage could be a bit lower, though.
Tired of being an unpaid Microsoft support technician, I offered people a choice: I would install Linux on their computer, with my full support, or they would never talk to me about their computer any more.
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
I’m using the word ‘we’ here because obviously I also had this approach at the time (admittedly a few years later, being a bit younger), but I’m a bit ashamed of it now. Today I deeply reject this way of behaving towards a public who often use IT tools for specific needs and who shouldn’t become dependent on a type of IT support that isn’t necessarily available elsewhere.
Who are we to put so much pressure on people to change almost their entire digital environment? Even more so at a time when tools were not as widely available online as they are today.
In short, I’m quite fascinated by those who* are proud to have done this at the time, and still are today, even in the name of ‘liberating’ users (often in spite of themselves) who don’t really understand the ins and outs of such a migration.
[*] To be clear, given the tone of the blog post, I’m not convinced that its author is!
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
Can we please stop throwing around the word “toxic” for things that are totally normal human interactions. Nobody is obliged to do free work for a product they neither bought themselves nor use nor like.
The “or never talk to me about your computer anymore if you don’t run it the way I tell you to” part is, IMO, not normal or nice. I’m not sure I’d have called it toxic, but I’d have called it unpleasant and insensitive.
Of course nobody is obliged to do free work for a product they don’t purchase, use or like. That’s normal. But you can express sympathy to your friends and family who are struggling with a choice they made for reasons that seemed good or necessary to them, even if you don’t agree that it was a good choice. It’s normal to listen to them talk about their challenges, etc., without needing to solve them yourself. You can even gently remind them that if they did things a different way, you could help, but that you don’t understand their system choice well enough to help them meet their goals with it.
The problem is telling a friend or loved one not to talk to you about their struggles. Declining to work on a system you don’t purchase, use or like, is of course normal and not a problem.
I’ve used Linux on my own machines exclusively since 1999, and when I get asked to deal with computer problems (that aren’t related to hardware, networking or “basic computer literacy” skills) I can’t help with, I’ll usually say something along the lines of “you know, you actually probably know more about running a Windows machine than I do” - which doesn’t usually get interpreted as uncaring or insulting, and is also generally true.
If you buy a car that needs constant repairs you get rid of it and buy something else that does not require it. There is no need to sit with family/friends and discuss their emotional journey working with a Windows computer. It is a thing. If it is broken, have it repaired or buy something else.
If you buy a car that needs constant repairs you get rid of it and buy something else that does not require it.
You might. Or you might think that even though you’ve had to fix the door-closing sensor on that minivan’s automatic door 6 times now, no other style of vehicle meets your family’s current needs, and while those sensors are known to be problematic across the entire industry, there’s not a better move for you right now. And the automatic door-closing function is useful to you the 75+% of the time that it works.
And you still might vent to your friend who’s a car guy about how annoying the low quality of the sensor is, or about the high cost of getting it replaced each time it fails.
Your friend telling you “don’t talk to me about this unless you suck it up and get a truck instead” would be insensitive, unpleasant and might even be considered by some to be toxic.
It’s not an emotional journey. You’re not asking your friend to fix it. You’re venting. A normal response from the friend would be “yeah, it’s no fun to deal with that.” Or “I’d know how to fix that on a truck, but I have no idea about a minivan.”
–
edit to add: For those who aren’t familiar with modern minivans, they have error prone sensors on the rear doors that are intended to prevent them from closing on small fingers. To close the doors when those fail, it’s a cumbersome process that involves disabling the automatic door function from the driver’s area with the car started, then getting out and closing the door manually. It’s a pain, and if your sensor fails and your family is such that you use the rear seats regularly, you’ll fix it if you value your sanity.
As a sometime unpaid support technician myself, I vehemently disagree.
I fully admit that sometimes I stepped into that unpaid support technician role when I could have totally, in a kind, socially acceptable way, said “Wow, it’s miserable that your computer broke. You should talk to {people you bought it from}. I can tell you a lot about computing in general, but they’ll know a lot more about Windows than I would.”
And it would’ve been OK, because the people telling me about their problems were mostly venting, not really looking for a solution from me.
But as a problem solver, I’m conditioned to think that someone telling me about an issue is looking for a solution from me. That’s not so; it’s my bias and orientation toward fixing this kind of thing that makes me think so.
Thank you, you’ve put into much better words what I wanted to say than the adjective ‘toxic’, which was the only one I had to hand when I wanted to describe all this.
How on earth could it be considered toxic to refuse to support something that is against your values and requires a lot of unpaid work from you, while still offering to provide a solution to the initial problem?
All the people I’ve converted to Linux were really happy for at least several years (because, of course, I was not migrating someone without a lot of explanations and without studying their real needs).
The only people who had problems afterward were people who had another “unpaid Microsoft technician” doing stuff behind my back. I mean, I was once called by an old lady because her Linux was not working as she expected, only to find out that one of her grandchildren had deleted the Linux partition and done a whole new Windows XP install without any explanation.
First of all it is obviously your choice whether you want to give support for a system you don’t enjoy and may not have as much experience with. Especially when you could expect the vendors of that system to help, instead of you.
But the second part is how you express this: you are, after all, the expert being asked to provide support. And so your answer might lead them down a route where they choose Linux, even though it is a far worse experience for the requirements of the person asking for help.
The last point follows from the second: you have to accept that, for those people, installing Linux is not something they can support on their own. If they couldn’t fix their Windows problems, installing Linux will at best keep things at the same level. Realistically they now have n+1 problems. And now they are 100% reliant upon you, the single Linux expert they actually know, for their distribution. And if you’re not there, they are royally fucked when it comes to getting their damn printer running again. Or their nVidia GPU freezing the browser. Or Teams not working as well with their camera. In another context you could say you’ve secured your job. If only because updates on Windows at least actually happen, which is just not true on Linux.
I have seen people with a similar attitude installing rolling releases for other people while disabling updates for more than 6 months, because they didn’t have the time to care about all the regular breakage. And yes, that includes the browser.
And the harsh truth is that for many people, that printer driver, MS Office, Teams + Zoom, and the camera are the reason they have this computer in the first place. So accepting their needs can include “sorry, I am not able to help with that” while also accepting that even mentioning Linux to them is a bad idea.
Because it’s a nice thing to do for your family and friends and they’ll likely reciprocate if you need help with something different. Half of the time when I get a “tech support” call from my aunt or grandparents, it’s really just to provide reassurance with something and have a nice excuse to catch up.
Mine was one of wasting hours trying to deal with issues on a commercial OS because, despite paying for it, support was nonexistent.
One example: Dell or Microsoft (unsure of the guilty party) pushed a driver update that enabled power saving on WiFi idle by default. That combined with a known bug in my MIL’s WiFi chipset, where it wouldn’t come out of power-saving mode. The end result was the symptom “the Internet stops working after a while but comes back if I reboot it”.
Guess how much support she got from the retailer who sold her the laptop? Zip, zero, zilch, nada.
You’re not doing free technical support for your relatives, really: you’re doing free technical support for Dell, and Microsoft, and $BIG_RETAILER.
When Windows 11 comes around (her laptop won’t support it) I’m going to upgrade the system to Mint like the rest of my family :) If I’m going to donate my time I’d rather it be to a good cause.
Yes, that was never my experience, and if it had been I would be inclined to agree with you. These days I hear more of “why did I run out of iCloud storage again” or “did this extortion spammer actually hack my email,” which I find less frustrating to answer :)
Yeah it doesn’t matter for generic tech support, in my experience, what OS they’re running.
It’s just the rabbit holes where it’s soul destroying.
Another example was my wife’s laptop. She was a Dell XPS fan for years, and ran Windows. Once again a bad driver got pushed, and her machine took to blue-screening every few minutes. We narrowed it down to the specific Dell driver update. Fixed it by installing Mint :)
Edit: … and she’s now a happy Ryzen Framework 13 user. First non-XPS she’s owned since 2007.
Ugh. It’s not “toxic” to inform people of your real-world limitations.
My brother-in-law is a very experienced mechanic. But there are certain car brands he won’t touch because he doesn’t have the knowledge, equipment, or parts suppliers needed to do any kind of non-trivial work on them. If you were looking at buying a 10-year-old BMW in good shape that just needs a bit of work to be road-worthy, he would say, “Sorry, I can’t help you with that, I just don’t work on those. But if you end up with a Lexus or Acura, maybe we could talk.” He knows from prior experience that ANY time spent working on a car he has no training on would likely either result in wasted time or painting himself into an expensive corner, and everyone involved getting frustrated.
Similarly, my kids would prefer to have Windows laptops, so that they could play all the video games their peers are playing. However, I just simply don’t know how to work on Windows. I don’t have the skills or tools. I haven’t touched Windows in 20 years and forgot most of what I knew back then. I don’t know how to install software (does it have an app store or other repository these days?), I don’t know how to do back ups, I don’t know how to keep their data safe, I don’t know how to fix a broken file system or shared library.
But I can do all of these things on Linux, so they have Linux laptops and get along just fine with them.
Edit: To color this, when I was in my 20’s, I tried very hard to be “the computer guy” to everyone I knew, figuring that it would open doors for me somehow. What happened instead was that I found myself spending large amounts of my own free time trying to fix virus-laden underpowered Celerons, and either getting nowhere, or breaking their systems further because they were already on the edge. Inevitably, the end result was strained (or broken) relationships. Now, when I do someone a favor, I make sure it is something that I know I can actually handle.
But he didn’t force anyone; he clearly says that if those people didn’t want his help, he could just leave things the way they were. To me that’s reasonable: you want my help, sure, but don’t make me do something I’m personally against. It’s like, while working in a restaurant, being asked to prepare meat dishes when you’re a vegetarian, except that my example is about work and his story is about helping someone, so there’s even less reason to do it against his own beliefs.
From my experience being an unpaid support technician for friends and family, that’s the only reasonable approach. I’ve had multiple situations where people called me to fix the result of someone else’s “work” and expected me to do it for free. It doesn’t work that way. Either I do it for free on my own terms, or you pay me the market rate.
Some examples I remember offhand. In one instance, I tried to teach a person with a malware-infested Windows some basic security practices, created an unprivileged account, and told them how to run things as administrator if they needed to install programs and so on. A few weeks later I was called to find the computer malware-infested again, because they asked someone else to help and he told them that creating a separate administrator account was “nonsense” and gave the user account administrator rights. Well, either you trust me and live more or less malware-free or you trust that guy and live with malware.
In another instance, I installed Linux for someone and put quite some effort into setting things up the way the person wanted. Some time later, they wanted some game but called someone else instead of me to help install it (I almost certainly would have been able to make it run in Wine). That someone wiped out all my work and installed Windows to install that game.
People expecting you to be their personal IT team for free just because you “know computers” is just as disrespectful. I don’t think it’s unfair to tell people “no, if you want help with your Windows system you need to pay someone who actually deals with such things”.
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
This is looking at things through the lens of the current context. Windows nowadays is much more secure, and you can basically leave a Windows installation to a normal user and not expect it to explode or something.
However, at the time Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware. Most users ran with admin accounts, and it was really easy to get malware installed by installing a random program, because things like binary signatures didn’t exist yet. There was also no anti-malware installed by default in Windows, so unless you had some third-party anti-malware installed, your computer could quickly become infested. And you couldn’t just refresh your installation by clicking one button; you would need to actually format and reinstall everything (which would be annoying, because drivers were much less likely to be included in the installation media, so you would need another computer with an internet connection, since a freshly installed Windows wouldn’t have any way to connect to the internet).
At that time, it made much more sense to try to convince users to switch to Linux. I did this with my mom, for example, switching her computer to Linux since most of what she did was access the internet. Migrating her to Linux reduced the amount of support I had to do from once a week to once a month (and instead of having to fix something, it was in most cases just updating the system).
It should be added that if you helped someone once with his Windows computer, you were considered responsible for every single problem happening on that computer afterward.
In some cases, it was an even bigger problem (I remember a computer which was infected by malware that dialed a very expensive line all the time. That family had a completely crazy phone bill and no idea why. Let me assure you that they were really happy with Linux for the next 3 or 4 years).
It should be added that if you helped someone once with his Windows computer, you were considered responsible for every single problem happening on that computer afterward.
Very much that. It was never the user’s fault: even if you left the computer in pristine condition, if they had an issue in the same week, it was your fault and you needed to fix it.
However, at the time Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware.
At the same time, however, it was also much more likely that you needed to deal with an application that would only run on Windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on Windows (remember winmodems? scanners sucked, too, and many printers were Windows GDI only), etc.
So convincing someone to use Linux was more likely to cause them a different kind of pain.
Today, most hardware works reasonably with Linux. Printers need to work with iPhones and iPads, and that moved them off the GDI specific things that made them hard to support under Linux. Modems are no longer a thing for most people’s PCs. Proton makes a great many current games work with Linux. Linux browsers are first class. And Linux software handles most common file formats, even in a round trip, very well. So while there’s less need to switch someone to Linux, they’re also less likely to suffer if you do.
That said, I got married in 2002. Right after I got married, I got sent on a contract 2500 miles away from home on a temporary basis. My wife uses computers for office software, calendar, email, web browsing and not much else. She’s a competent user, but not able to troubleshoot very deeply on her own. Since she was working a job she considered temporary (and not career-track) at home, she decided to travel for that contract with me, and we lived in corporate housing. Her home computer at the time was an iMac. It wasn’t practical to bring that and we didn’t want to ship it.
The only spare laptop I had to bring with us so she had something to use for web browsing and job hunting on the road didn’t have a Windows license current enough to be trustworthy, so I installed Red Hat 7.3 (not enterprise!) on there for her. She didn’t have any trouble. She’d rather have had a Mac, but we couldn’t have reasonably afforded one at the time. It went fine, but I’d never have dared to try that with someone who didn’t live with me.
At the same time, however, it was also much more likely that you needed to deal with an application that would only run on Windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on Windows (remember winmodems? scanners sucked, too, and many printers were Windows GDI only), etc.
Yes, but it really depends on the kind of user. I wouldn’t just recommend Linux unless I knew that every one of the user’s needs would be met by Linux. For example, for my mom: we had broadband Ethernet at the time, our printer worked better on Linux than on Windows (thanks, CUPS!), and the rest of her tasks were basically done via the web browser.
It went fine, but I’d never have dared to try that with someone who didn’t live with me.
It also helped that she lived with me, for sure ;).
2004… I looked it up, and it seems I got out of desktop Linux at the end of 2005, never to look back. It was fun and I learned a lot, but I’m not particularly eager to go back.
It was about 20 years ago when I started my father on Debian. He needed a web browser, a spreadsheet, and a word processor. And a solitaire game.
He’s been through four computers, each time getting more powerful and cheaper. He still uses Debian exclusively. He uses a web browser, a spreadsheet and a word processor.
I used to upgrade his software when I came over to visit; these days there’s a wireguard tunnel to let me SSH in.
I started my dad too on Linux, around 20 years ago. His work PC had an unstable Ethernet card. It was a driver issue. IT people didn’t help. He was really annoyed. You would be using the PC and the network would freeze randomly. The manufacturer replaced the card, but the issue persisted and everyone shrugged.
We decided to try Linux to see if it helped and it did. He was really delighted. Everything ran so smoothly. He felt empowered. He still uses Linux these days, after retirement. He also learned LyX, which is super useful as a LaTeX frontend.
Ironically, eleven years ago, the original author of GNOME left us with this little bit of a confession:
https://tirania.org/blog/archive/2013/Mar-05.html
(I don’t necessarily agree, it’s just stuck in my head)
I don’t know what happened to Miguel. It’s a strange tale.
He’s been working with Microsoft technology for a long time, and even brought a lot of it to Linux. The Linux community rejected it eventually. He must have been feeling like it was an uphill battle? What do you find strange about it? It would have been interesting to have a lively C# ecosystem of Linux desktop software. Instead of GNOME adopting JavaScript, as I believe they are.
I think 20 years ago the Linux Desktop worked better than today. KDE 3 was really nice. Audio still worked. Wifi wasn’t as much of a must have yet. There were some companies porting games to Linux. Distros weren’t constantly tracking and trying to monetize you. There was no in-fighting regarding init systems. You didn’t have that mess of package manger + snap + flatpak. Third party stuff usually compiled per default with
./configure && make && make install
. Even Skype just works. The community was a lot less made up of self-promoters. Instead you got quick competent help, just like as a Windows user at the time.People’s main complaint was that there wasn’t any good video editing and graphics software. Wine worked okay-ish. (for some stuff one used commercial offerings)
The only thing that was a bit messy depending on the distribution were NVIDIA drivers. Maybe Flash Videos, but I think they actually worked.
It even was so good that without much technical background or even English skills one could not just use Linux, but get an old computer, install NetBSD and happily use it on the desktop.
I think the average today (let’s say Ubuntu and Manjaro for example) is that you’ll spend months to glue something up that you can maybe live with. I think parts of the Linux desktop is using design principles that are technically nice, but give users a hard time. An example is that creating something like a short cut used to be easy in desktop environments. Today there is a standard for applications, which is nice, but bugs the user that just wants to create a shortcut. I am not sure what happened to GUIs for creating .desktop files?
I don’t know if it’s fair to say that it was better. Lots of modern problems have their back-in-the-day equivalents. You didn’t have to fight with package manager + snap + flatpak but pre-yum RPM was a pain. Lots of third-party stuff was compiled with
imake
and haha good luck running it on anything except Red Hat Linux 9.0 or whatever. Skype just worked insofar as the microphone worked, which wasn’t always a given.What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows. We are, at best, about 10-15 years into the lifetime of current desktop technologies which is why, adjusting for the significantly increased complexity, in terms of stability and capabilities, we’re not much further than where we were in 2007 or so.
I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.
20-25 years ago cool kids dreamed of writing a better window manager, or file manager, or browser, because that’s what was hot. 10-15 years ago cool kids were writing phone apps; if they used Linux, they wrote web apps. So there weren’t as many fresh ideas (and fresh heads) going into desktop development. That made desktop development both slower, as platform complexity grew in every aspect, from font rendering to hardware-accelerated drawing, and much quicker than spare development time, and more divisive.
You’ve put this so well, thank you
It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.
Huh? What software?
As mentioned, never had a problem with that ever. I mean it. I met my girlfriend online during that time. Skype calls even very long ones never had a problem and she was the first person I’d use Skype with. Since that’s how I got to know my girlfriend I know that time vividly. Meanwhile I constantly run into oddities with Discord, Slack and Teams, if they even work at all. Again, not even using Bluetooth. Just an audio jack, so the setup should be simple.
Not intending to burn existing technology. I have no reason to. Just stating that things that used to work don’t work anymore. Can have lots of (potentially good) reasons. I just also think that the idea that software magically is better today somehow is wrong. And a lot of claims “you just remember wrong” are very very wrong. Same thing with games by the way. I put that to the test. Claims like “You are just older and as a child things are more exciting” are simply wrong in my instances. Just like going back to old utilities. I have no interest of putting new software down in any way. I hate that old vs new software farce. It’s simply whether stuff works or doesn’t.
I’d argue that there is currently not much going on on the Linux desktop side. There are good reasons. People don’t use desktops as much anymore, and desktops aren’t their main focus. People have phones, apps, smart TVs, etc. Lots of people who would have run Linux back in the day now run macOS. It is well known that fewer people work on desktop environments, and when stuff gets more complex, one needs to support a lot more things. On top of that, all the development moved into the browser over the last two decades. People don’t really create desktop applications, and by extension desktop environments, anymore.
So of course if developer focus shifts, other stuff will be better in the open source world. Non-tech people can run LLMs on their home PCs if they invest 15 minutes. People share their video collections. There is open source social media that is actually used. Graphics software is a ton better. With Godot there is a really great game engine. People create programming languages. LLVM is amazing. There are finally hobby OSs (SerenityOS, etc.) again. All of these are really great.
Just that my personal desktop experience was better back then, and I think a really big reason for that is that the focus of a desktop was narrower.
You can always not use Snap or Flatpak today. That doesn’t mean no one does, just like it didn’t mean no one used RPM back in the day. I don’t use either and I’m definitely happier with my packaging experience than back in 2004-2006-ish (which I’m guessing is the period you mainly have in mind, based on Skype?). (Edit:) I didn’t use RPM-based distros back then, either, so it’s not because of yum :-).
The ones I remember most vividly are Maya, Matlab, and… pretty much any GIS software at the time. Same as above – maybe you didn’t use them, doesn’t mean no one needed them. Any desktop will work flawlessly if you only pick applications that happen to work flawlessly :-).
That makes sense. You are right, but you brought up RPM, that’s why my response was about how I never understood it. :-)
Probably I got lucky with software then. I wasn’t using any of this. I got into GIS a bit later, so it looks like I just avoided that, or package managers abstracted it away for me.
I think x64k was referring to the desktop environment developers as “burning … existing technology” by rewriting their components rather than polishing the old, working ones that you liked.
The interesting thing is that macOS has still been using the same core tech for the desktop environment during that time period (Quartz, Cocoa, etc.), though recently things like SwiftUI were introduced. I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks, or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their résumés.
Though I think that an important difference between KDE and GNOME is that KDE development is also more driven by Qt. At some point the owner (I think it was still Troll Tech at the time) of Qt said: Qt Widgets are now legacy and won’t see new development, and KDE had to rebase on QML/Quick, resulting in Plasma.
It’s a combination of factors, really. Nautilus, for example, was literally written by ex-Apple people, and had many similarities both to file managers of that era and to Finder in particular. So there was certainly no shortage of people to get it right the first time.
IMHO a bigger factor, especially in the last 10-15 years, is that FOSS desktop development is a lot more open-loop. People come up with designs and just run with them, which is a lot easier to do when you don’t have to worry about supporting multiple install bases, paying customers ditching you, and so on.
That’s very useful in many ways but one unfortunate side-effect is that, for all their combative attitude towards closed platforms, major FOSS desktop projects relentlessly chase every fad in closed platforms, too, including the ones that just don’t make sense, or in interpretations that just don’t make sense, like app stores (and I’m not referring to Flathub here; ironically, I think that’s the only platform of this sort that actually makes some sense, at least there’s a special packaging and distribution technology behind it; I’m thinking more of things like the Ubuntu Software Center). App store-like platforms could fulfill a very useful social role, serving e.g. as platforms for donations, bug bounties, feedback etc. – but instead they act just like their closed-source counterparts, with nothing but votes and ratings, to the point where they’re just Synaptic with votes and weird package management bugs.
~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines, but I still had to write my touchpad config manually. For audio, ALSA was the thing, but ARTS was also a thing, and ESD was a thing, and OSS was still a thing, and audio worked as long as only one program or sound server had exclusive control of your sound device, because anything else was a slow descent into madness.
My goggles are absolutely not rose-tinted, and I’ll recommend a current-day bog-standard Ubuntu install any day of the week because of the sheer size of the user base and the amount of info/software targeted towards that ecosystem. And if you are tired of Canonical going their own way every other year, Debian is a fine replacement that mostly works the same way.
Oh, yeah, that was a major inflection point in the community as well, as after that point there was no point in running xf86config or xorgconfig, so it suddenly became impossible to shame the noobs who used those instead of editing XF86Config by hand like Real Men.
ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling, bug for newbies.
If ARTS couldn’t claim exclusive control over the soundcard – because, say, XMMS had exclusive control over it through its ALSA output plugin – it didn’t play anything, but it did continue to buffer whatever you sent to it, and would begin to play it as soon as it could claim control over the soundcard. I learned that when I began using Kopete (which, for a while, had the best Yahoo! Messenger support) on a fresh install. I hadn’t changed XMMS’ output plugin to ARTS, so none of the “Ping!”s made it to the sound card…
…until I stopped XMMS, at which point ARTS faithfully played every single Kopete alert it had received in the last hour or so (or however long its ring buffer was).
This was actually worse than it sounds – in this case it was just a particular configuration quirk, but the real problem was that not all software supported all sound servers, or at least not very well. E.g. gAIM (which later became Pidgin) supported ARTS but it was a little crashy, and in any case, ARTS support became mainstream among non-KDE software relatively late in the 3.x cycle. Even as late as 3.2, I think, it was just more or less a fact of life that you had one or two applications (especially games) where you kind of lived with the fact that they’d take over your soundcard and silence everything else for a while. Truly a golden age of desktop software :-).
Some things the younger generation of Linux users may not remember: (…) you could copy your CDs to image files (dd or cat from /dev/cdrom to a file) and mount all of them, but not everyone had that kind of space.
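For the record, that dance looked something like this (device names and mount point are illustrative):

    # Copy a CD to an image file...
    dd if=/dev/cdrom of=disc1.iso
    # ...then loop-mount it next to the others.
    sudo mount -o loop disc1.iso /mnt/disc1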
Actually, I loved that behavior and would use it deliberately to queue things. I resisted moving to ALSA for quite some time because I liked how things blocked. Oh well.
(I started with Linux in 2004 btw, not before. It was solidly OK; I think I dodged much of the pain people talk about.)
Well, sure, it was fun if it was the IM client that blocked. Not as fun when it was an actually important alert, or when the browser queued some earbleed noise from a Flash intro, or when you couldn’t listen to music or watch a movie until you quit the offending app.
ALSA softmixing STILL doesn’t work. I tried running a server-less setup a few years ago. Applications just took exclusive control anyway.
It works just fine, I still use it today. You might need to configure it though; distro configs based on PulseAudio tend not to enable it since they assume PA is doing it anyway.
/etc/asound.conf will define things based on dmix (for playback) and dsnoop (for recording) if ALSA mixing is enabled, and the pcm.default will refer to that pseudodevice instead of the hardware.
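A minimal sketch of that sort of config (the card numbers and ipc_key are illustrative, not a canonical setup):

    # /etc/asound.conf: route the default PCM through dmix so
    # several applications can play at the same time.
    pcm.!default {
        type plug
        slave.pcm "dmixed"
    }

    pcm.dmixed {
        type dmix
        ipc_key 1024          # arbitrary unique key shared by clients
        slave {
            pcm "hw:0,0"      # first device on the first card
        }
    }

    # dsnoop works analogously for shared recording.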
I haven’t tried it in a long time (basically since apulse just didn’t cut it anymore :-) ) but back then, though late, it worked just fine.
It’s been a while so I don’t remember the details, but I think what happened was that apulse wasn’t really working, so I used the built-in ALSA support in Firefox (the code is still there last I checked, just disabled in the default build config), which took exclusive control.
I don’t know how that’s handled post-PulseAudio or how well it works. But I am 100% sure it worked. The only reason I stopped using it was that PulseAudio became a dependency pretty much everywhere so yanking it out and dealing with the fallout was about as much trouble as using it.
I find this to be just not true. While Linux today is undoubtedly composed of more complex subsystems like PipeWire and systemd, it allows you to effortlessly use Bluetooth headsets, play high-end games with comparable performance, and even (and this was unthinkable back then) do music production with incredibly full-featured software. Maybe the simplicity of yore was enjoyable, but Linux today is a lot more capable.
I think you have a pretty skewed picture and I’m happy it worked that well for you back then.
I certainly had Linux on the desktop in the late 90s, but it just wasn’t great. Our shared computer pool at university (I started in 2003) worked perfectly fine, but it was curated hardware, and some people put in some real effort for it.
I bought my first laptop in 2004; a friend had the predecessor model, which is why I knew it was relatively OK for Linux… and yet I moved to FreeBSD because of a couple of things (one of them wifi). It just wasn’t great if you wanted it to “just work”[tm].
Compare that to today: people were kinda surprised when I said that I have no sound on my desktop, although games run at 120 FPS out of the box with the 3070. Turns out it’s a commonly known problem with this exact mainboard chipset, and plugging in the only USB sound card I ever owned… it just works. All I’m saying is that I have not had proper problems (which weren’t solved easily) for about 10 years – but earlier than like 15 years ago, everything took a lot of time to get running smoothly… that’s my experience.
That’s the whole point. Linux on the Desktop was great once it was configured.
Getting it configured was the hard part. We even had “install parties” because most people could just not do that configuration themselves.
FWIW, I agree much more with your original post than with the comment I replied to.
I guess my main point is that while I’m not averse to configuring stuff, I’ve always held the view that you should be able to do it in a reasonable time with a modest amount of knowledge. And very often the drivers simply weren’t there, so without switching hardware you were just out of luck, and it was not rare.
And what makes you think that? Sounds a bit like an idle claim. ;)
20 years ago was 2004, not the 90s. I used Linux as my main OS back then. Was shortly before I had a long run of NetBSD. Double checked some old emails, messages.
Mine was on a system from a local hardware store’s own brand. The NetBSD one was installed on a computer from a government organization sale.
I do. Today audio burns through CPU cycles, is wonky, has buffer underruns out of the box on some systems, randomly kills YouTube, and comes out of my speakers rather than my headphones (connected via audio jack, so not even Bluetooth) when I reboot. I never used to have any audio problems back then. Not with games, not with Skype.
Meanwhile my games (and I played a lot of games back then, being young) need stuff like gamemoderun to be playable. Back then Enemy Territory (with so many mods), Majesty, Neverwinter Nights, etc. worked out of the box.
Of course it’s my experience, and my view was shared by basically everyone I knew who ran Linux on the desktop back then. I didn’t say it was terrible or that your overall point is wrong. I just don’t believe that many people thought that everything was just fine and worked by default for the majority.
Maybe I’m focusing too much on getting stuff to run at all (which sucks if anyone changed anything in the kernel or in general upstream), and you’re focusing too much on problems today. It’s never perfect ;)
Now that KDE is stable again, I think we’re just about back to where we were 20 years ago. Only it’s Zoom instead of Skype. And nVidia drivers are still buggy. :)
The more things change….
It is?
On Debian, at any rate!
How many fixes have they pushed out to 6 so far?
https://en.wikipedia.org/wiki/KDE_Plasma_6#Releases
Plus 6.2.1 and now 6.2.2.
I think they are up to 14 or 15 in 10 months. That’s heading for about 50% more releases than a monthly cycle would produce.
To misquote Douglas Adams: “this must be some strange new usage of the word ‘stable’ that I’m not familiar with.”
We just had it for package managers instead
Did we? And if so, did it stop?
I don’t think “instead” is the correct term, when we now have Nix vs Snap vs Flatpak vs Docker Image vs traditional package managers.
Don’t forget AppImage. :-)
And GNUstep .app bundles. And there’s 0install as well, but nobody uses that.
[Comment removed by author]
I agree a bit, but I feel like Linux back then was generally more work - if nothing else, it required a lot more understanding of how it worked.
Very few people back then were running Linux in a VM, and they certainly weren’t using WSL or a container – most people had to install it on real hardware, and there was usually a bit of a learning curve just getting the system to boot into Linux for the first time and getting drivers set up.
I’m currently running Debian on an old MacBook Pro, and it reminds me a lot of using Linux 20 years ago. Everything’s working – I can video chat with Microsoft Teams, I have accelerated 3D graphics (with NVidia until a few months ago), etc. – but it was work to get it up and running. Proprietary drivers had to be tracked down, some special kernel modules had to be built from source, some magic incantations had to be added to the kernel command line, etc.
Nowadays when I have to do that, it’s a real inconvenience - 20 years ago it was just expected.
Audio works better today than it ever has on Linux.
There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.
Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?
That makes sense. I think the reason is the same as it was for RPM back then, though. There is the stuff that big companies introduce, and the smaller players kind of have to adapt, because being able to pay means you can outpace them. Some people aren’t happy about that. It might be why they switch away from Windows, for example. While I think there are a lot of people fighting some ideological fight with systemd as the target, I’d argue that even the “losers” will give you a reason. Whether it’s a good reason or not is a different question, of course.
Audio and video are my primary ones. I am mainly on Arch, and I assumed I had made mistakes, only to find out that when I get a work/client laptop with Ubuntu etc., it also has issues, even if I am not sure they are related. Different OS, different issues.
Most recent: Manjaro. Thinkpad T14s. Out of the box. My stuff moves to a different screen simply because my monitor goes to sleep. Sometimes playing videos on YouTube freezes the screen it’s playing on for a couple of minutes. Switching my audio output works sometimes, sometimes not.
I have had the freezing on Ubuntu (which was the company standard) before. Instead of stuff moving to the other screen, I had instances where I got graphical artifacts. And instead of the audio output not switching when explicitly selected, I had issues with it not switching when I started the system with headphones already plugged in.
20 years ago I was able to get stuff done without such issues.
I also don’t have issues on other OSs, not on Windows, not on OpenBSD. I checked during debugging.
I am not the only one with those issues; however, the fixes don’t work. No specific errors in journalctl/dmesg. People have been reporting these issues, of course. Some had other causes. Some changed window manager, some switched hardware, some switched between Wayland/Xorg (both ways, actually), etc.
I have hopes that these will be fixed eventually, but the whole point of the above was that for my use cases 20 years ago the average Linux distribution of the time did a better job of what I expect. Of course the story might be different for other people, but I don’t think I need to mention that.
I installed Xubuntu on my partner’s laptop a while ago and they’ve been very happy with the switch, save for when the time came to update.
They’ve been using this laptop from 2013 with 4GB of RAM and a Windows 10 installation on an HDD, which was atrociously slow. It took several minutes to reach the user select screen, and after logging in it was 5 more minutes for Windows to stop spinning the disk before the system was usable.
After trying to alleviate the pain a couple times I figured it was time to move the system to an SSD and, after figuring out my partner’s needs and with their consent, I also installed Xubuntu on the laptop. Haven’t heard any complaints other than the occasional snag with sleep/hibernation, but since the laptop rebooted in a couple seconds they weren’t a big issue.
The big pain point in my experience is still the updates: the latest dist-upgrade from 22.04 to 24.04 failed partway through for some reason I haven’t yet determined, and left them with an unbootable system. I had to come in with an Ubuntu USB and chroot into the installation to make it finish the upgrade and fix some packages that were uninstalled for some reason. Now the laptop works fine but they’re experiencing random freezes (probably due to the nouveau driver or some faulty hardware). I could probably fix it but we’re kind of using it as an excuse to get a newer and less bulky laptop, so I guess that worked out in our favour :)
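For anyone hitting the same wall, the live-USB rescue was roughly the following (the partition name is a stand-in for wherever the install actually lives):

    # From the live session: mount the installed root and chroot into it.
    sudo mount /dev/sda2 /mnt
    for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
    sudo chroot /mnt
    # Inside the chroot: let dpkg/apt finish what the upgrade started.
    dpkg --configure -a
    apt --fix-broken install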
The next laptop will also have Linux on it, but I’m gonna install an immutable distro this time.
I also recently had an issue with my NixOS laptop after an upgrade where it would freeze on high CPU loads and didn’t come back from sleep (my fault for setting the kernel version to latest, lol), and I urgently needed it to work for an event I was attending later that evening. I really appreciated that I could boot into the previous generation that still worked and resolve the issue later.
Maybe I’m just high on the Nix pill, but I think immutable distros are a huge improvement in usability for everyone using Linux. Things still fail from time to time, and being able to temporarily switch back to a system that still works until the issue is fixed is one of the missing pieces to bringing Linux to the masses, at least in my opinion.
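In practice that’s either picking the previous generation from the boot menu, or something like the following standard NixOS commands (no exotic setup assumed):

    # Switch back to the previous system generation.
    sudo nixos-rebuild switch --rollback
    # Or list generations to pick a specific one from the boot menu.
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system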
That kind of rollback a bad upgrade thing can alternatively be handled at the filesystem level with, for example, zfs snapshots.
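A sketch of that approach, with an illustrative pool/dataset name:

    # Before the upgrade:
    zfs snapshot rpool/ROOT/default@pre-upgrade
    # If the upgrade goes sideways, roll the root dataset back:
    zfs rollback rpool/ROOT/default@pre-upgrade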
True, I’d just want it to be automated by default on every update.
“I started to always have two CD-ROMs with me: the latest Knoppix and Debian Woody.” To this day, I always have a bootable Ubuntu key on my keyring; you never know.
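Writing one is a one-liner these days (the ISO name and device are placeholders, and do double-check the device name before running this):

    sudo dd if=ubuntu.iso of=/dev/sdX bs=4M status=progress oflag=sync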
I always used to have Knoppix with me too. And SystemRescueCd.
And I remember clicking through software on Knoppix and coming across BB. Having a terminal running with this playing was just great.
Me too! I was once in my university library with a broken Gentoo install and managed to save it before a deadline with a live USB that I only had on my keyring by coincidence. Now I always keep one on me (and no longer use Gentoo).
I’m on NixOS and it’s ruined my Linux experience (for the better). I no longer have to search “How to X on Y”, like “How to get PulseAudio working on Ubuntu”. In NixOS, you just set the right variables and carry on. I had no audio on my ThinkPad and just copied the audio lines from my NixOS config and it just worked.
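The “right variables” are roughly these (a sketch using current NixOS module options; your audio lines may differ):

    # configuration.nix excerpt: PipeWire with ALSA and PulseAudio compatibility.
    services.pipewire = {
      enable = true;
      alsa.enable = true;
      pulse.enable = true;
    };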
Yes. The nice thing about NixOS is that you can make dramatic changes to your configuration without fear of breaking things or worry about leaving clutter behind. That’s hard to give up once you have enjoyed it!
PulseAudio is not that bad anymore. But, regardless, nixos-unstable already enabled PipeWire by default. Works well, latency is pretty low. CPU usage could be a bit lower, though.
Back then, you didn’t just have Epiphany, you had Epiphany on Gecko!!
This guy retro-Linuxes!
The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.
I’m using the word ‘we’ here because obviously I also had this approach at the time (admittedly a few years later, being a bit younger), but I’m a bit ashamed of the approach I had back then, and today I deeply reject this way of behaving towards a public who often use IT tools for specific needs and who shouldn’t become dependent on a certain type of IT support that isn’t necessarily available elsewhere.
Who are we to put so much pressure on people to change almost their entire digital environment? Even more so at a time when tools were not as widely available online as they are today.
In short, I’m quite fascinated by those who* are proud to have done this at the time, and still are today, even in the name of ‘liberating’ (often in spite of themselves) users who don’t really understand the ins and outs of such a migration.
[*] To be clear, given the tone of the blog post, I’m not convinced that its author does!
Can we please stop throwing around the word “toxic” for things that are totally normal human interactions? Nobody is obliged to do free work for a product they neither bought themselves nor use nor like.
The “or never talk to me about your computer anymore” if you don’t run it the way I tell you to part, is, IMO, not normal or nice. I’m not sure I’d have called it toxic, but I’d have called it unpleasant and insensitive.
Of course nobody is obliged to do free work for a product they don’t purchase, use or like. That’s normal. But you can express sympathy to your friends and family who are struggling with a choice they made for reasons that seemed good or necessary to them, even if you don’t agree that it was a good choice. It’s normal to listen to them talk about their challenges, etc., without needing to solve them yourself. You can even gently remind them that if they did things a different way, you could help, but that you don’t understand their system choice well enough to help them meet their goals with it.
The problem is telling a friend or loved one not to talk to you about their struggles. Declining to work on a system you don’t purchase, use or like, is of course normal and not a problem.
I’ve used Linux on my own machines exclusively since 1999, and when I get asked to deal with computer problems (that aren’t related to hardware, networking or “basic computer literacy” skills) I can’t help with, I’ll usually say something along the lines of “you know, you actually probably know more about running a Windows machine than I do” - which doesn’t usually get interpreted as uncaring or insulting, and is also generally true.
If you buy a car that needs constant repairs, you get rid of it and buy something else that does not require them. There is no need to sit with family/friends and discuss their emotional journey working with a Windows computer. It is a thing. If it is broken, have it repaired or buy something else.
You might. Or you might think that even though you’ve had to fix the door-closing sensor on that minivan’s automatic door 6 times now, no other style of vehicle meets your family’s current needs, and while those sensors are known to be problematic across the entire industry, there’s not a better move for you right now. And the automatic door-closing function is useful to you the 75+% of the time that it works.
And you still might vent to your friend who’s a car guy about how annoying the low quality of the sensor is, or about the high cost of getting it replaced each time it fails.
Your friend telling you “don’t talk to me about this unless you suck it up and get a truck instead” would be insensitive, unpleasant and might even be considered by some to be toxic.
It’s not an emotional journey. You’re not asking your friend to fix it. You’re venting. A normal response from the friend would be “yeah, it’s no fun to deal with that.” Or “I’d know how to fix that on a truck, but I have no idea about a minivan.”
–
edit to add: For those who aren’t familiar with modern minivans, they have error prone sensors on the rear doors that are intended to prevent them from closing on small fingers. To close the doors when those fail, it’s a cumbersome process that involves disabling the automatic door function from the driver’s area with the car started, then getting out and closing the door manually. It’s a pain, and if your sensor fails and your family is such that you use the rear seats regularly, you’ll fix it if you value your sanity.
“Tired of being an unpaid Microsoft support technician…” - no, it’s not venting.
As a sometimes erstwhile unpaid support technician, I vehemently disagree.
I fully admit that sometimes I stepped into that unpaid support technician role when I could have totally, in a kind, socially acceptable way, said “Wow, it’s miserable that your computer broke. You should talk to {people you bought it from}. I can tell you a lot about computing in general, but they’ll know a lot more about Windows than I would.”
And it would’ve been OK, because the people telling me about their problems were mostly venting, not really looking for a solution from me.
But as a problem solver, I’m conditioned to think that someone telling me about an issue is looking for a solution from me. That’s not so; it’s my bias and orientation toward fixing this kind of thing that makes me think so.
Thank you, you’ve put into much better words what I wanted to say than the adjective ‘toxic’, which was the only one I had to hand when I wanted to describe all this.
How on earth could it be considered toxic to refuse to support something which is against your values, which requires a lot of work from you, and which is unpaid, while still offering to provide a solution to the initial problem?
All the people I’ve converted to Linux were really happy for at least several years (because, of course, I was not migrating someone without a lot of explanations and without studying their real needs).
The only people who had problems afterward were people who had another “unpaid Microsoft technician” doing stuff behind my back. I mean, I was once called by an old lady because her Linux was not working as she expected, only to find out that one of her grandchildren had deleted the Linux partition and done a whole new Windows XP install without any explanation.
I think there are three aspects to this:
First of all it is obviously your choice whether you want to give support for a system you don’t enjoy and may not have as much experience with. Especially when you could expect the vendors of that system to help, instead of you.
But the second part is how you express this: you are, after all, the expert getting asked to provide support. And so your answer might lead them down a route where they choose Linux, even though it is a far worse experience for the requirements of the person asking for help.
The last point follows from the second: you have to accept that for those people, installing Linux is not something they can support on their own. If they couldn’t fix their Windows problems, installing Linux will at best keep things at the same level. Realistically, they now have n+1 problems. And now they are 100% reliant upon you, the single Linux expert they actually know for their distribution. And if you’re not there, they are royally fucked with getting their damn printer running again. Or their nVidia GPU freezing the browser. Or Teams not working as well with their camera. In another context you could say you secured your job. And if only because updates on Windows are at least 100% happening, which is just not true on Linux.
I have seen people with a similar attitude install rolling releases for other people while disabling updates for more than 6 months, because they didn’t have the time to deal with all the regular breakage. And yes, that includes the browser.
And the harsh truth is that for many people, that printer driver, MS Office, Teams + Zoom, and the camera are the reason they have this computer in the first place. So accepting their needs can include “sorry, I am not able to help with that” while also accepting that even mentioning Linux to them is a bad idea.
If I had that attitude towards my wife I would end up very principled and very single.
(Context: she is blind and needs to use Windows for work. I also have to use Windows for work)
[Comment removed by author]
I also agree with your interpretation a lot more but I doubt the author would mean that quite so literally.
Why on earth should the author be doing free tech support for people on an OS that they didn’t enjoy using?
Because it’s a nice thing to do for your family and friends and they’ll likely reciprocate if you need help with something different. Half of the time when I get a “tech support” call from my aunt or grandparents, it’s really just to provide reassurance with something and have a nice excuse to catch up.
Maybe we had different experiences.
Mine was of wasting hours trying to deal with issues with a commercial OS because, despite paying for it, support was nonexistent.
One example: Dell or Microsoft (unsure of the guilty party) pushed a driver update that enabled power saving on WiFi idle by default. That combined with a known bug on my MIL’s WiFi chipset, where it wouldn’t come out of power saving mode. End result was the symptom “the Internet stops working after a while but comes back if I reboot it”.
Guess how much support she got from the retailer who sold her the laptop? Zip, zero, zilch, nada.
You’re not doing free technical support for your relatives, really: you’re doing free technical support for Dell, and Microsoft, and $BIG_RETAILER.
When Windows 11 comes around (her laptop won’t support it) I’m going to upgrade the system to Mint like the rest of my family :) If I’m going to donate my time I’d rather it be to a good cause.
Yes, that was never my experience, and if it had been I would be inclined to agree with you. These days I hear more of “why did I run out of iCloud storage again” or “did this extortion spammer actually hack my email,” which I find less frustrating to answer :)
Yeah it doesn’t matter for generic tech support, in my experience, what OS they’re running.
It’s just the rabbit holes where it’s soul destroying.
Another example was my wife’s laptop. She was a Dell XPS fan for years, and ran Windows. Once again a bad driver got pushed, and her machine took to blue-screening every few minutes. We narrowed it down to the specific Dell driver update. Fixed it by installing Mint :)
Edit: … and she’s now a happy Ryzen Framework 13 user. First non-XPS she’s owned since 2007.
Ugh. It’s not “toxic” to inform people of your real-world limitations.
My brother-in-law is a very experienced mechanic. But there are certain car brands he won’t touch because he doesn’t have the knowledge, equipment, or parts suppliers needed to do any kind of non-trivial work on them. If you were looking at buying a 10-year-old BMW in good shape that just needs a bit of work to be road-worthy, he would say, “Sorry, I can’t help you with that, I just don’t work on those. But if you end up with a Lexus or Acura, maybe we could talk.” He knows from prior experience that ANY time spent working on a car he has no training on would likely either result in wasted time or painting himself into an expensive corner, and everyone involved getting frustrated.
Similarly, my kids would prefer to have Windows laptops, so that they could play all the video games their peers are playing. However, I just simply don’t know how to work on Windows. I don’t have the skills or tools. I haven’t touched Windows in 20 years and forgot most of what I knew back then. I don’t know how to install software (does it have an app store or other repository these days?), I don’t know how to do back ups, I don’t know how to keep their data safe, I don’t know how to fix a broken file system or shared library.
But I can do all of these things on Linux, so they have Linux laptops and get along just fine with them.
Edit: To color this, when I was in my 20’s, I tried very hard to be “the computer guy” to everyone I knew, figuring that it would open doors for me somehow. What happened instead was that I found myself spending large amounts of my own free time trying to fix virus-laden underpowered Celerons, and either getting nowhere, or breaking their systems further because they were already on the edge. Inevitably, the end result was strained (or broken) relationships. Now, when I do someone a favor, I make sure it is something that I know I can actually handle.
But he didn’t force anyone, he clearly says that if those people didn’t want his help, he could just leave it the way it was. To me that’s reasonable - you want my help, sure, but don’t make me do something I’m personally against. It’s like, while working in a restaurant, being asked to prepare meat dishes when being a vegetarian, except that my example is about work and his story is about helping someone, so there’s even less reason to do it against his own beliefs.
From my experience being an unpaid support technician for friends and family, that’s the only reasonable approach. I had multiple situations when people called me to fix the result of someone else’s “work” and expected me to do it for free. It doesn’t work that way. Either I do it for free on my own terms, or you pay me the market rate.
Some examples I remember offhand. In one instance, I tried to teach a person with a malware-infested Windows some basic security practices, created an unprivileged account, and told them how to run things as administrator if they needed to install programs and so on. A few weeks later I was called to find the computer malware-infested again, because they asked someone else to help and he told them that creating a separate administrator account was “nonsense” and gave the user account administrator rights. Well, either you trust me and live more or less malware-free or you trust that guy and live with malware.
In another instance, I installed Linux for someone and put quite some effort into setting things up the way the person wanted. Some time later, they wanted some game but called someone else instead of me to help install it (I almost certainly would be able to make it run in Wine). That someone wiped out all my work and installed Windows to install that game.
People expecting you to be their personal IT team for free just because you “know computers” is just as disrespectful. I don’t think it’s unfair to tell people, “No, if you want help with your Windows system, you need to pay someone who actually deals with such things.”
This is looking at things with the current context. Windows nowadays is much more secure, and you can basically leave a Windows installation to a normal user and not expect it to explode or something.
However, at the time Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware. Most users ran with admin accounts, and it was really easy to get malware installed by installing a random program, because things like binary signatures didn’t exist yet. There was also no anti-malware installed by default in Windows, so unless you had some third-party anti-malware installed, your computer could quickly become infested. And you couldn’t just refresh your installation by clicking one button; you would need to actually format and reinstall everything (which was annoying because drivers were much less likely to be included in the installation media, so you would need another computer with an internet connection, since the freshly installed Windows wouldn’t have any way to connect to the internet).
At that time, it made much more sense to try to convince users to switch to Linux. I did this with my mom, for example, switching her computer to Linux since most of what she did was access the internet. Migrating her to Linux reduced the amount of support I had to do from once a week to once a month (and instead of having to fix something, it was in most cases just updating the system).
It should be added that if you helped someone once with their Windows computer, you were considered responsible for every single problem happening on that computer afterward.
In some cases, it was even a very serious problem (I remember a computer which was infected by malware that dialed a very expensive line all the time. That family had a completely crazy phone bill and they had no idea why. Let me assure you that they were really happy with Linux for the next 3 or 4 years).
Very much that. It was never the user’s fault: even if you left the computer in pristine condition, if they had an issue in the same week it was your fault and you would need to fix it.
At the same time, however, it was also much more likely that you needed to deal with an application that would only run on Windows, a file format that could only be round-tripped by such an application, a piece of hardware that only worked on Windows (remember winmodems? scanners sucked, too, and many printers were Windows GDI only), etc.
So convincing someone to use Linux was more likely to cause them a different kind of pain.
Today, most hardware works reasonably with Linux. Printers need to work with iPhones and iPads, and that moved them off the GDI specific things that made them hard to support under Linux. Modems are no longer a thing for most people’s PCs. Proton makes a great many current games work with Linux. Linux browsers are first class. And Linux software handles most common file formats, even in a round trip, very well. So while there’s less need to switch someone to Linux, they’re also less likely to suffer if you do.
That said, I got married in 2002. Right after I got married, I got sent on a contract 2500 miles away from home on a temporary basis. My wife uses computers for office software, calendar, email, web browsing and not much else. She’s a competent user, but not able to troubleshoot very deeply on her own. Since she was working a job she considered temporary (and not career-track) at home, she decided to travel for that contract with me, and we lived in corporate housing. Her home computer at the time was an iMac. It wasn’t practical to bring that and we didn’t want to ship it.
The only spare laptop I had to bring with us, so she had something to use for web browsing and job hunting on the road, didn’t have a Windows license current enough to be trustworthy, so I installed Red Hat 7.3 (not Enterprise!) on there for her. She didn’t have any trouble. She’d rather have had a Mac, but we couldn’t reasonably have afforded one at the time. It went fine, but I’d never have dared to try that with someone who didn’t live with me.
Yes, but it really depends on the kind of user. I wouldn’t just recommend Linux unless I knew that every need the user had would be met by Linux. For example, for my mom: we had broadband Ethernet at the time, our printer worked better on Linux than Windows (thanks, CUPS!), and the rest of her tasks were basically done via web browser.
It also helped that she lived with me, for sure ;).
2004… I looked it up and it seems that I got out of desktop Linux at the end of 2005, never to look back. It was fun and I learned a lot, but I’m not particularly eager to go back.
So what do you use instead?
I bought a 12” iBook and never touched the desktop machine again until, at some point 6-12 months later, I sold it or something. I don’t remember.
Huh. OK, makes sense. Thanks.