Threads for ploum

    1. 9

      I know it sounds a lot like self-promotion, but for those wanting to live more in the terminal (like myself), I really recommend my own web browser: offpunk

      https://offpunk.net/

      I use it as a web/gemini/gopher browser, as an RSS reader, as a “to read” manager (it replaced Pocket) and as a bookmarks manager.

      The only non-terminal tools I use nowadays are:

      • Firefox for JS-heavy websites (such as lobste.rs, banking, etc.)
      • Signal
      • Rhythmbox to play mp3s
      1. 2

        I really like the look of this, gonna check it out! Thanks for sharing - it’s a refreshing change from most other TUI web browsers.

    2. 9

      While I don’t fully live in the terminal, I haven’t used a GUI file manager in over 12 years. I’ve noticed that I automatically structure my files and directories much more cleanly, and I tend to find even old stuff relatively quickly, since you are not tempted to just dump things somewhere. From my experience, this approach is highly recommended.

      1. 4

        Same here! There’s dozens of us!
        No terminal browser either, but I have a fast travel setup.

        I have custom keyboard bindings in my shell so that alt+c opens my cd history, and alt+shift+c is a recursive dir picker under the $CWD (which pushes to the cd history, making the dir available via alt+c next time).
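
        Roughly, a minimal bash sketch of the idea (assumes fzf is installed; the function names and history file here are illustrative, not my exact setup):

          CD_HISTORY="$HOME/.cd_history"

          # record every directory you cd into, deduplicated
          cd() {
            builtin cd "$@" || return
            grep -qxF "$PWD" "$CD_HISTORY" 2>/dev/null || printf '%s\n' "$PWD" >> "$CD_HISTORY"
          }

          # alt+c: fuzzy-pick a directory from the cd history
          __cd_hist() { local d; d=$(tac "$CD_HISTORY" | fzf) && cd "$d"; }
          bind -x '"\ec": __cd_hist'

          # alt+shift+c: recursive dir picker under $CWD (lands in the history via cd)
          __cd_deep() { local d; d=$(find . -type d | fzf) && cd "$d"; }
          bind -x '"\eC": __cd_deep'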

        For ls I only ever use my l alias*. Even that is rare and mostly relegated to checking file permissions and other metadata. Most of the time either I know what’s in the directory, tab completion is enough, or I’m in my editor using the file picker.

        Finding things is not an issue either, most things are well organized. I don’t have a Desktop. I changed my browser to save files in /tmp by default to avoid the Downloads mess.

        *: ls --classify=auto --color=auto --human-readable -l. I haven’t updated it in a long time, so some of these might be defaults by now.

      2. 2

        Do you use a terminal file manager? Just curious.

        1. 4

          Good point! No, I only use the normal POSIX utilities.

          1. 2

            Same here: using only ls/cd/cp/mv/rm. The only exotic tools: fdfind and ripgrep.

            1. 1

              On occasion I find oil.nvim very handy for mass renames.

    3. 8

      I think 20 years ago the Linux Desktop worked better than today. KDE 3 was really nice. Audio still worked. Wifi wasn’t as much of a must-have yet. There were some companies porting games to Linux. Distros weren’t constantly tracking you and trying to monetize you. There was no in-fighting regarding init systems. You didn’t have that mess of package manager + snap + flatpak. Third-party stuff usually compiled by default with ./configure && make && make install. Even Skype just worked. The community was far less made up of self-promoters. Instead you got quick, competent help, just as you did as a Windows user at the time.

      People’s main complaint was that there wasn’t any good video editing and graphics software. Wine worked okay-ish. (For some things one used commercial offerings.)

      The only thing that was a bit messy, depending on the distribution, was NVIDIA drivers. Maybe Flash videos, but I think they actually worked.

      It was even so good that, without much technical background or even English skills, one could not just use Linux, but get an old computer, install NetBSD and happily use it on the desktop.

      I think the average today (let’s say Ubuntu and Manjaro, for example) is that you’ll spend months gluing something together that you can maybe live with. I think parts of the Linux desktop use design principles that are technically nice but give users a hard time. An example: creating something like a shortcut used to be easy in desktop environments. Today there is a standard for applications, which is nice, but it bugs the user who just wants to create a shortcut. I am not sure what happened to GUIs for creating .desktop files.

      1. 19

        I don’t know if it’s fair to say that it was better. Lots of modern problems have their back-in-the-day equivalents. You didn’t have to fight with package manager + snap + flatpak but pre-yum RPM was a pain. Lots of third-party stuff was compiled with imake and haha good luck running it on anything except Red Hat Linux 9.0 or whatever. Skype just worked insofar as the microphone worked, which wasn’t always a given.

        What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows. We are, at best, about 10-15 years into the lifetime of current desktop technologies, which is why, adjusting for the significantly increased complexity, we’re not much further in terms of stability and capabilities than where we were in 2007 or so.

        I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.

        20-25 years ago cool kids dreamed of writing a better window manager, or file manager, or browser, because that’s what was hot. 10-15 years ago cool kids were writing phone apps; if they used Linux, they wrote web apps. So there weren’t as many fresh ideas (and fresh heads) going into desktop development. That made desktop development slower, as platform complexity grew in every aspect, from font rendering to hardware-accelerated drawing, and grew much quicker than spare development time; it also made it more divisive.

        1. 6

          What is disappointing and, I think, fair to say, is that we don’t have twenty years’ worth of bugfixes and improvements in today’s desktop stack. For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release, and it shows.

          You’ve put this so well, thank you

        2. 1

          but pre-yum RPM was a pain

          It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.

          Lots of third-party stuff was compiled with imake and haha good luck running it on anything except Red Hat Linux 9.0 or whatever.

          Huh? What software?

          Skype just worked insofar as the microphone worked, which wasn’t always a given.

          As mentioned, I never had a problem with that, ever. I mean it. I met my girlfriend online during that time. Skype calls, even very long ones, never had a problem, and she was the first person I used Skype with. Since that’s how I got to know my girlfriend, I remember that time vividly. Meanwhile I constantly run into oddities with Discord, Slack and Teams, if they even work at all. Again, not even using Bluetooth. Just an audio jack, so the setup should be simple.

          I think this ritual burning of all existing technology (wasn’t the first time it happened, either; Gnome 2 and KDE 3 both significantly departed from their predecessors) came at a particularly bad time, because it roughly coincided with the period in which lots of people lost interest in desktop development.

          Not intending to burn existing technology. I have no reason to. I’m just stating that things that used to work don’t work anymore. That can have lots of (potentially good) reasons. I just also think that the idea that software is magically better today is wrong, and a lot of the “you just remember wrong” claims are very, very wrong. Same thing with games, by the way. I put that to the test. Claims like “you are just older and as a child things are more exciting” are simply wrong in my case. Just like going back to old utilities. I have no interest in putting new software down in any way. I hate that old-vs-new software farce. It’s simply whether stuff works or doesn’t.

          I’d argue that there is currently not much going on on the Linux desktop side. There are good reasons. People don’t use desktops as much anymore, and desktops aren’t their main focus. People have phones, apps, smart TVs, etc. Lots of people who would have run Linux back in the day now run macOS. It is a known fact that fewer people work on desktop environments, and when stuff gets more complex, one needs to support a lot more things. On top of that, all the development moved into the browser over the last two decades. People don’t really create desktop applications, and by extension desktop environments, anymore.

          So of course if developer focus shifts, other stuff will be better in the open source world. Non-tech people can run LLMs on their home PCs if they invest 15 minutes. People share their video collections. There is open source social media that is actually used. Graphics software is a ton better. With Godot there is a really great game engine. People create programming languages. LLVM is amazing. There are finally hobby OSs (SerenityOS, etc.) again. All of these are really great.

          Just that my personal desktop experience was better back then and I think a really big reason for that is that the focus of a desktop was more narrow.

          1. 4

            It was. But that’s why I simply didn’t use RPM based systems. Never understood why people like to go through that pain.

            You can always not use Snap or Flatpak today. Doesn’t mean no one does, just like it didn’t mean no one used RPM back in the day, too. I don’t use either and I’m definitely happier with my packaging experience than back in 2004-2006-ish (which I’m guessing is the period you mainly have in mind, based on Skype?). (Edit:) I didn’t use RPM-based distros back then, either, so it’s not because of yum :-).

            Huh? What software?

            The ones I remember most vividly are Maya, Matlab, and… pretty much any GIS software at the time. Same as above – maybe you didn’t use them, doesn’t mean no one needed them. Any desktop will work flawlessly if you only pick applications that happen to work flawlessly :-).

            1. 1

              That makes sense. You are right, but you brought up RPM, that’s why my response was about how I never understood it. :-)

              Probably I got lucky with software then. I wasn’t using any of it. I got into GIS a bit later, so it looks like I just avoided that, or package managers abstracted it away for me.

          2. 1

            I think this ritual burning of all existing technology …

            Not intending to burn existing technology.

            I think x64k was referring to the desktop environment developers as “burning … existing technology” by rewriting their components rather than polishing the old, working ones that you liked.

        3. 1

          For example, there are very few components in Plasma 6 today that haven’t been rewritten since KDE 3, to the point where Plasma 6 is a bit of a misnomer – much of it is actually on its second major release,

          The interesting thing is that macOS has still been using the same core tech for the desktop environment during that time period (Quartz, Cocoa, etc.), though recently things like SwiftUI were introduced. I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their resumés.

          Though I think that an important difference between KDE and GNOME is that KDE development is also more driven by Qt. At some point the owner of Qt (I think it was still Trolltech at the time) said: Qt Widgets are now legacy and won’t see new development, and KDE had to rebase on QML/Quick, resulting in Plasma.

          1. 1

            I wonder if Apple just got it almost right the first time because they had more experience than the KDE/GNOME folks or whether open source desktops are more affected by every generation wanting to leave their mark/brush up their resumés.

            It’s a combination of factors, really. Nautilus, for example, was literally written by ex-Apple people, and had many similarities both to file managers of that era and to Finder in particular. So there was certainly no shortage of people to get it right the first time.

            IMHO a bigger factor, especially in the last 10-15 years, is that FOSS desktop development is a lot more open-loop. People come up with designs and they just run with them, which is a lot easier to do when you don’t have to worry about supporting multiple install bases, paying customers ditching you and so on.

            That’s very useful in many ways but one unfortunate side-effect is that, for all their combative attitude towards closed platforms, major FOSS desktop projects relentlessly chase every fad in closed platforms, too, including the ones that just don’t make sense, or in interpretations that just don’t make sense, like app stores (and I’m not referring to Flathub here; ironically, I think that’s the only platform of this sort that actually makes some sense, at least there’s a special packaging and distribution technology behind it; I’m thinking more of things like the Ubuntu Software Center). App store-like platforms could fulfill a very useful social role, serving e.g. as platforms for donations, bug bounties, feedback etc. – but instead they act just like their closed-source counterparts, with nothing but votes and ratings, to the point where they’re just Synaptic with votes and weird package management bugs.

      2. 10

        ~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines, but I still had to manually write my touchpad config. For audio, ALSA was the thing, but also ARTS was a thing, and also ESD was a thing, and OSS was still a thing, and audio worked if you only had one program or sound server having exclusive control of your sound device because anything else was a slow descent into madness.

        My goggles are absolutely not rose-tinted, and I’ll recommend a current-day bog-standard Ubuntu install any day of the week because of the sheer size of the user base and the amount of info/software targeted towards that ecosystem. And if you are tired of Canonical going their own way every other year, Debian is a fine replacement that mostly works the same way.

        1. 5

          ~20 years ago (my memories might be slightly off), desktop Linux had recently gained the ability to automatically write your X11 modelines

          Oh, yeah, that was a major inflection point in the community as well: after that point there was no reason to run xf86config/xorgconfig, so it suddenly became impossible to shame the noobs who used those instead of editing XF86Config by hand like Real Men.

          For audio, ALSA was the thing, but also ARTS was a thing, and also ESD was a thing, and OSS was still a thing, and audio worked if you only had one program or sound server having exclusive control of your sound device because anything else was a slow descent into madness.

          ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling bug for newbies.

          If ARTS couldn’t claim exclusive control over the soundcard – because, say, XMMS had exclusive control over it through its ALSA output plugin – it didn’t play anything, but it did continue to buffer whatever you sent to it, and would begin to play it as soon as it could claim control over the soundcard. I learned that when I began using kopete (which, for a while, had the best Yahoo! Messenger support) on a fresh install. I hadn’t changed XMMS’ output plugin to ARTS, so none of the “Ping!“s made it to the sound card…

          …until I stopped XMMS, at which point ARTS faithfully played every single Kopete alert it had received in the last hour or so (or however long its ring buffer was).

          This was actually worse than it sounds – in this case it was just a particular configuration quirk, but the real problem was that not all software supported all sound servers, or at least not very well. E.g. gAIM (which later became Pidgin) supported ARTS but it was a little crashy, and in any case, ARTS support became mainstream among non-KDE software relatively late in the 3.x cycle. Even as late as 3.2, I think, it was just more or less a fact of life that you had one or two applications (especially games) where you kind of lived with the fact that they’d take over your soundcard and silence everything else for a while. Truly a golden age of desktop software :-).

          Some things the younger generation of Linux users may not remember:

          • Wine ran a bunch of games surprisingly well, but installing games that came on several CDs (I remember Jedi Academy) involved an interesting gimmick. I don’t recall if this was in the CD-ROM (ever seen one of those :-)?) drivers or at the VFS layer but in any case, pre-2.6 kernels took real care of data integrity so, uh, you couldn’t eject the CD-ROM tray if the CD was mounted. And you couldn’t unmount it because the installer process was using it. At some point there was a userspace tool that took care of that (I forgot the name, this was 20 years ago after all). Before that, yep, you had to compile your own kernel with some cool patches. If you had the hard-drive space it was easier to rip the CDs (that was kind of magic; you could just dd or cat from /dev/cdrom to a file) and mount all of them (see the sketch after this list), but not everyone had that kind of space.
          • If you had a winmodem, you were usually doomed. However, “real” modems were really expensive and, due to the enormous success of winmodems, they got kind of difficult to find by the early ’00s.
          • Hardware support lag was a lot more substantial than today. People really underestimate how important community growth turned out to be. When I finally got a “real” computer I ran it with the hard drive in compatibility mode because it took forever for Linux to get both proper S-ATA support and support for… I think ICH5 it was?
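
          A rough sketch of the ripping trick from the first bullet (device and file names are assumptions; bs=2048 matches the CD sector size):

            # rip each install CD to an image
            dd if=/dev/cdrom of=cd1.iso bs=2048
            # loop-mount the images so the installer sees every "disc" at once
            mount -o loop cd1.iso /mnt/cd1
            mount -o loop cd2.iso /mnt/cd2
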
          1. 2

            This was actually worse than it sounds –

            Actually, I loved that behavior and would use it deliberately to queue things. I resisted moving to ALSA for quite some time because I liked how things blocked. Oh well.

            (I started with Linux in 2004 btw, not before. It was solidly OK; I think I dodged much of the pain people talk about.)

            1. 3

              Well, sure, it was fun if it was the IM client that blocked. Not as fun when it was an actually important alert, or when the browser queued some earbleed noise from a Flash intro, or when you couldn’t listen to music or watch a movie until you quit the offending app.

          2. 1

            ALSA gained softmixing (and wide hardware support) kind of late, which is what made sound servers useful for a while – and also what led to the following truly hilarious, and quite puzzling bug for newbies.

            ALSA softmixing STILL doesn’t work. I tried running a server-less setup a few years ago. Applications just took exclusive control anyway.

            1. 1

              It works just fine, I still use it today. You might need to configure it though; distro configs based on PulseAudio tend not to enable it since they assume PA is doing it anyway.

              /etc/asound.conf will define things based on dmix (for playback) and dsnoop (for recording) if ALSA mixing is enabled, and pcm.!default will refer to that pseudo-device instead of the hardware.
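
              A minimal sketch of what that looks like (the hw:0,0 card address is an assumption):

                # /etc/asound.conf: software mixing without a sound server
                pcm.dmixed {
                    type dmix
                    ipc_key 1024
                    slave.pcm "hw:0,0"
                }
                pcm.dsnooped {
                    type dsnoop
                    ipc_key 1025
                    slave.pcm "hw:0,0"
                }
                # full-duplex device combining the two
                pcm.duplex {
                    type asym
                    playback.pcm "dmixed"
                    capture.pcm "dsnooped"
                }
                # make it the default, with automatic rate/format conversion
                pcm.!default {
                    type plug
                    slave.pcm "duplex"
                }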

            2. 1

              I haven’t tried it in a long time (basically since apulse just didn’t cut it anymore :-) ) but back then, though late, it worked just fine.

              1. 1

                It’s been a while so I don’t remember the details, but I think what happened was that apulse wasn’t really working, so I used the built in ALSA support in Firefox (code is still there last I checked, just disabled in the default build config) which took exclusive control.

                1. 1

                  I don’t know how that’s handled post-PulseAudio or how well it works. But I am 100% sure it worked. The only reason I stopped using it was that PulseAudio became a dependency pretty much everywhere so yanking it out and dealing with the fallout was about as much trouble as using it.

      3. 7

        I find this to be just not true. While Linux today is undoubtedly composed of more complex subsystems like PipeWire and systemd, it allows you to effortlessly use Bluetooth headsets, play high-end games with comparable performance, and even (and this was unthinkable back then) do music production with incredibly full-featured software. Maybe the simplicity of yore was enjoyable, but Linux today is a lot more capable.

      4. 7

        I think you have a pretty skewed picture and I’m happy it worked that well for you back then.

        I certainly had Linux on the desktop in the late 90s, but it just wasn’t great. Our shared computer pool at university (I started in 2003) worked perfectly fine, but it was curated hardware and some people put in real effort for it.

        I bought my first laptop in 2004; a friend had the predecessor model, which is how I knew it was relatively OK for Linux… and yet I moved to FreeBSD because of a couple of things (one of them wifi). It just wasn’t great if you wanted to have it “just work”[tm].

        Compare to today, people were kinda surprised when I said that I have no sound on my desktop, although games run at 120 FPS out of the box with the 3070. Turns out it’s a commonly known problem with this exact mainboard chipset, and plugging in the only USB sound card I ever owned… it just works. All I’m saying is that I have not had real problems (that weren’t easily solved) for about 10 years - but before, say, 15 years ago, everything took a lot of time to get running smoothly… that’s my experience.

        1. 3

          That’s the whole point. Linux on the Desktop was great once it was configured.

          Getting it configured was the hard part. We even had “install parties” because most people could just not do that configuration themselves.

          1. 1

            FWIW, I agree much more with your original post than with the comment I replied to.

            I guess my main point is that while I’m not averse to configuring stuff, I’ve always held the view that you should be able to do it in a reasonable time with a modest amount of knowledge. And very often the drivers simply weren’t there, so without switching hardware you were just out of luck, and it was not rare.

        2. 2

          I think you have a pretty skewed picture

          And what makes you think that? Sounds a bit like an idle claim. ;)

          I certainly had Linux on the desktop in the late 90s

          20 years ago was 2004, not the 90s. I used Linux as my main OS back then. It was shortly before I had a long run of NetBSD. I double-checked some old emails and messages.

          but it was curated hardware

          Mine was a system from a local hardware store’s own brand. The NetBSD one was installed on a computer from a government organization sale.

          Compare to today, people were kinda surprised when I said that I have no sound on my desktop

          I do. Today audio burns through CPU cycles, is wonky, has buffer underruns out of the box on some systems, randomly kills YouTube, and comes out of my speakers rather than my headphones (connected via audio jack, so not even Bluetooth) when I reboot. I never used to have any audio problems back then. Not with games, not with Skype.

          games run at 120 FPS

          Meanwhile my games (and I played a lot of games back then, being young) need stuff like gamemoderun to be playable. Back then Enemy Territory (with so many mods), Majesty, Neverwinter Nights, etc. worked out of the box.

          1. 3

            Of course it’s my experience, and my view was shared by basically everyone I knew who ran Linux on the desktop back then. I didn’t say it was terrible or that your overall point is wrong. I just don’t believe that many people thought everything was just fine and worked by default for the majority.

            Maybe I’m focusing too much on getting stuff to run at all (which sucks if anyone changed anything in the kernel or in general upstream), and you’re focusing too much on problems today. It’s never perfect ;)

      5. 2

        Now that KDE is stable again, I think we’re just about back to where we were 20 years ago. Only it’s Zoom instead of Skype. And nVidia drivers are still buggy. :)

        The more things change….

        1. 2

          Now that KDE is stable again,

          It is?

          1. 2

            On Debian, at any rate!

            1. 1

              How many fixes have they pushed out to 6 so far?

              https://en.wikipedia.org/wiki/KDE_Plasma_6#Releases

              Plus 6.2.1 and now 6.2.2.

              I think they are up to 14 or 15 in 10 months. That’s heading for 50% more than a monthly release cycle.

              To misquote Douglas Adams: “this must be some strange new usage of the word ‘stable’ that I’m not familiar with.”

      6. 2

        There was no in-fighting regarding init systems

        We just had it for package managers instead

        1. 2

          Did we? And if so, did it stop?

          I don’t think “instead” is the correct term, when we now have Nix vs Snap vs Flatpak vs Docker Image vs traditional package managers.

          1. 0

            Don’t forget AppImage. :-)

            And GNUstep .app bundles.

            And there’s 0install as well, but nobody uses that.

      7. 2

        I agree a bit, but I feel like Linux back then was generally more work - if nothing else, it required a lot more understanding of how it worked.

        Very few people back then were running Linux in a VM, and they certainly weren’t using WSL or a container - most people had to install it on real hardware, and there was usually a bit of a learning curve just getting the system to boot into Linux for the first time and getting drivers set up.

        I’m currently running Debian on an old MacBook Pro, and it reminds me a lot of using Linux 20 years ago. Everything’s working - I can video chat with Microsoft Teams, I have accelerated 3D graphics (with NVidia until a few months ago), etc. - but it was work to get it up and running. Proprietary drivers had to be tracked down, some special kernel modules had to be built from source, some magic incantations had to be added to the kernel command line, etc.

        Nowadays when I have to do that, it’s a real inconvenience - 20 years ago it was just expected.

      8. 2

        Audio still worked.

        Audio works better today than it ever has on Linux.

        There was no in-fighting regarding init systems.

        There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.

        Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?

        1. 1

          There is no real actual in-fighting regarding init systems today. There are three groups: those who just use systemd, those who have reasons to not use systemd and aren’t weird about it, and losers who no one cares about.

          That makes sense. I think the reason is the same as it was for RPM back then, though. There is that stuff that big companies introduce, and the smaller players kind of have to adapt to it, because being able to pay means you can outpace them. Some people aren’t happy about that. It might be why they switched away from Windows, for example. While I think there are a lot of people fighting some ideological fight with systemd as the target, I’d argue that even the “losers” will give you a reason. Whether it’s a good reason or not is a different question, of course.

          Actually, since you mention both tracking and snap, how many of the problems you have are just Ubuntu-specific, not modern desktop Linux specific?

          Audio and video are my primary ones. I am mainly on Arch and assumed I had made mistakes, only to find out that when I got a work/client laptop with Ubuntu, etc., it also had issues, even though I am not sure they are related. Different OS, different issues.

          Most recent: Manjaro. ThinkPad T14s. Out of the box. My stuff moves to a different screen simply because my monitor goes to sleep. Sometimes playing videos on YouTube freezes the screen it’s playing on for a couple of minutes. Switching my audio output works sometimes, sometimes not.

          I have had the freezing on Ubuntu (which was the standard at the company) before. Instead of stuff moving to the other screen I had instances where I got graphical artifacts. And instead of the audio output not switching when explicitly selected, I had issues with it not switching when I started the system with headphones already plugged in.

          20 years ago I was able to get stuff done without such issues.

          I also don’t have issues on other OSs, not on Windows, not on OpenBSD. I checked during debugging.

          I am not the only one with those issues; however, the fixes don’t work. No specific errors in journalctl/dmesg. People have been reporting these issues, of course. Some had other causes. Some changed window manager, some switched hardware, some switched between Wayland/Xorg (both ways, actually), etc.

          I have hopes that these will be fixed eventually, but the whole point of the above was that for my use cases 20 years ago the average Linux distribution of the time did a better job of what I expect. Of course the story might be different for other people, but I don’t think I need to mention that.

    4. 9

      Tired of being an unpaid Microsoft support technician, I offered people to install Linux on their computer, with my full support, or to never talk with me about their computer any more.

      The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.

      I’m using the word ‘we’ here because obviously I also had this approach at the time (admittedly a few years later, being a bit younger). But I’m a bit ashamed of the approach I had back then, and today I deeply reject this way of behaving towards a public who often use IT tools for specific needs and who shouldn’t become dependent on a kind of IT support that isn’t necessarily available elsewhere.

      Who are we to put so much pressure on people to change almost their entire digital environment? Even more so at a time when tools were not as widely available online as they are today.

      In short, I’m quite fascinated by those who* are proud to have done this at the time, and still are today, even in the name of ‘liberating’ (often in spite of themselves) users who don’t really understand the ins and outs of such a migration.

      [*] To be clear, given the tone of the blog post, I’m not convinced that its author does!

      1. 51

        The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.

        Can we please stop throwing around the word “toxic” for things that are totally normal human interactions? Nobody is obliged to do free work for a product they neither bought themselves nor use nor like.

        1. 15

          The “or never talk to me about your computer anymore if you don’t run it the way I tell you to” part is, IMO, not normal or nice. I’m not sure I’d have called it toxic, but I’d have called it unpleasant and insensitive.

          Of course nobody is obliged to do free work for a product they don’t purchase, use or like. That’s normal. But you can express sympathy to your friends and family who are struggling with a choice they made for reasons that seemed good or necessary to them, even if you don’t agree that it was a good choice. It’s normal to listen to them talk about their challenges, etc., without needing to solve them yourself. You can even gently remind them that if they did things a different way, you could help, but that you don’t understand their system choice well enough to help them meet their goals with it.

          The problem is telling a friend or loved one not to talk to you about their struggles. Declining to work on a system you don’t purchase, use or like, is of course normal and not a problem.

          1. 14

            I’ve used Linux on my own machines exclusively since 1999, and when I get asked to deal with computer problems (that aren’t related to hardware, networking or “basic computer literacy” skills) I can’t help with, I’ll usually say something along the lines of “you know, you actually probably know more about running a Windows machine than I do” - which doesn’t usually get interpreted as uncaring or insulting, and is also generally true.

          2. 8

            If you buy a car that needs constant repairs you get rid of it and buy something else that does not require it. There is no need to sit with family/friends and discuss their emotional journey working with a Windows computer. It is a thing. If it is broken, have it repaired or buy something else.

            1. 2

              If you buy a car that needs constant repairs you get rid of it and buy something else that does not require it.

              You might. Or you might think that even though you’ve had to fix the door-closing sensor on that minivan’s automatic door 6 times now, no other style of vehicle meets your family’s current needs, and while those sensors are known to be problematic across the entire industry, there’s not a better move for you right now. And the automatic door-closing function is useful to you the 75+% of the time that it works.

              And you still might vent to your friend who’s a car guy about how annoying the low quality of the sensor is, or about the high cost of getting it replaced each time it fails.

              Your friend telling you “don’t talk to me about this unless you suck it up and get a truck instead” would be insensitive, unpleasant and might even be considered by some to be toxic.

              It’s not an emotional journey. You’re not asking your friend to fix it. You’re venting. A normal response from the friend would be “yeah, it’s no fun to deal with that.” Or “I’d know how to fix that on a truck, but I have no idea about a minivan.”

              edit to add: For those who aren’t familiar with modern minivans, they have error prone sensors on the rear doors that are intended to prevent them from closing on small fingers. To close the doors when those fail, it’s a cumbersome process that involves disabling the automatic door function from the driver’s area with the car started, then getting out and closing the door manually. It’s a pain, and if your sensor fails and your family is such that you use the rear seats regularly, you’ll fix it if you value your sanity.

              1. 5

                “Tired of being an unpaid Microsoft support technician…” - no, it’s not venting.

                1. 4

                  As a sometimes erstwhile unpaid support technician, I vehemently disagree.

                  I fully admit that sometimes I stepped into that unpaid support technician role when I could have totally, in a kind, socially acceptable way, said “Wow, it’s miserable that your computer broke. You should talk to {people you bought it from}. I can tell you a lot about computing in general, but they’ll know a lot more about Windows than I would.”

                  And it would’ve been OK, because the people telling me about their problems were mostly venting, not really looking for a solution from me.

                  But as a problem solver, I’m conditioned to think that someone telling me about an issue is looking for a solution from me. That’s not so; it’s my bias and orientation toward fixing this kind of thing that makes me think so.

          3. 5

            Thank you, you’ve put into much better words what I wanted to say than the adjective ‘toxic’, which was the only one I had to hand when I wanted to describe all this.

            1. 20

              How on earth could it be considered toxic to refuse to support something which is against your values, which requires a lot of work from you, and which is unpaid, while still offering to provide a solution to the initial problem?

              All the people I’ve converted to Linux were really happy for at least several years (because, of course, I was not migrating someone without a lot of explanations and without studying their real needs).

              The only people who had problems afterward were people who had another “unpaid Microsoft technician” doing stuff behind my back. I mean, I once got called by an old lady because her Linux was not working as she expected, only to find out that one of her grandchildren had deleted the Linux partition and done a whole new Windows XP install without any explanation.

              1. 5

                I think there are three aspects to this:

                First of all it is obviously your choice whether you want to give support for a system you don’t enjoy and may not have as much experience with. Especially when you could expect the vendors of that system to help, instead of you.

                But the second part is how you express this: you are, after all, the expert getting asked to provide support. And so your answer might lead them down a route where they choose Linux, even though it is a far worse experience for the requirements of the person asking for help.

                The last point follows from the second: you have to accept that installing Linux is, for those people, not something they can support on their own. If they couldn’t fix their Windows problems, installing Linux will at best keep things at the same level. Realistically they now have n+1 problems. And now they are 100% reliant upon you - the single Linux expert they actually know for their distribution. And if you’re not there, they are royally fucked with getting their damn printer running again. Or their nVidia GPU freezing the browser. Or Teams not working as well with their camera. In another context you could say you secured your job. If only because updates on Windows are at least 100% happening, which is just not true on Linux.

                I have seen people with a similar attitude install rolling releases for other people while disabling updates for more than 6 months, because they didn’t have the time to care about all the regular breakage. And yes, that includes the browser.

                And the harsh truth is that for many people that printer driver, MS Office, Teams + Zoom and the camera are the reason they have this computer in the first place. So accepting their needs can include “Sorry, I am not able to help with that” while also accepting that even mentioning Linux to them is a bad idea.

              2. 3

                If I had that attitude towards my wife I would end up very principled and very single.

                (Context: she is blind and needs to use Windows for work. I also have to use Windows for work)

          4. 1

            I also agree with your interpretation a lot more but I doubt the author would mean that quite so literally.

      2. 30

        The more time went by, the more I realised that this state of mind was particularly toxic

        Why on earth should the author be doing free tech support for people on an OS that they didn’t enjoy using?

        1. 5

          Because it’s a nice thing to do for your family and friends and they’ll likely reciprocate if you need help with something different. Half of the time when I get a “tech support” call from my aunt or grandparents, it’s really just to provide reassurance with something and have a nice excuse to catch up.

          1. 8

            Maybe we had different experiences.

            Mine was of wasting hours trying to deal with issues with a commercial OS because, despite paying for it, support was nonexistent.

            One example: Dell or Microsoft (unsure of the guilty party) pushed a driver update that enabled power saving on WiFi idle by default. That combined with a known bug in my MIL’s WiFi chipset, where it wouldn’t come out of power-saving mode. The end result was the symptom “the Internet stops working after a while but comes back if I reboot it”.

            Guess how much support she got from the retailer who sold her the laptop? Zip, zero, zilch, nada.

            You’re not doing free technical support for your relatives, really: you’re doing free technical support for Dell, and Microsoft, and $BIG_RETAILER.

            When Windows 11 comes around (her laptop won’t support it) I’m going to upgrade the system to Mint like the rest of my family :) If I’m going to donate my time I’d rather it be to a good cause.

            1. 3

              Yes, that was never my experience, and if it had been I would be inclined to agree with you. These days I hear more of “why did I run out of iCloud storage again” or “did this extortion spammer actually hack my email,” which I find less frustrating to answer :)

              1. 3

                Yeah it doesn’t matter for generic tech support, in my experience, what OS they’re running.

                It’s just the rabbit holes where it’s soul destroying.

                Another example was my wife’s laptop. She was a Dell XPS fan for years, and ran Windows. Once again a bad driver got pushed, and her machine took to blue-screening every few minutes. We narrowed it down to the specific Dell driver update. Fixed it by installing Mint :)

                Edit: … and she’s now a happy Ryzen Framework 13 user. First non-XPS she’s owned since 2007.

      3. 15

        Ugh. It’s not “toxic” to inform people of your real-world limitations.

        My brother-in-law is a very experienced mechanic. But there are certain car brands he won’t touch because he doesn’t have the knowledge, equipment, or parts suppliers needed to do any kind of non-trivial work on them. If you were looking at buying a 10-year-old BMW in good shape that just needs a bit of work to be road-worthy, he would say, “Sorry, I can’t help you with that, I just don’t work on those. But if you end up with a Lexus or Acura, maybe we could talk.” He knows from prior experience that ANY time spent working on a car he has no training on would likely either result in wasted time or painting himself into an expensive corner, and everyone involved getting frustrated.

        Similarly, my kids would prefer to have Windows laptops, so that they could play all the video games their peers are playing. However, I just simply don’t know how to work on Windows. I don’t have the skills or tools. I haven’t touched Windows in 20 years and forgot most of what I knew back then. I don’t know how to install software (does it have an app store or other repository these days?), I don’t know how to do back ups, I don’t know how to keep their data safe, I don’t know how to fix a broken file system or shared library.

        But I can do all of these things on Linux, so they have Linux laptops and get along just fine with them.

        Edit: To color this, when I was in my 20’s, I tried very hard to be “the computer guy” to everyone I knew, figuring that it would open doors for me somehow. What happened instead was that I found myself spending large amounts of my own free time trying to fix virus-laden underpowered Celerons, and either getting nowhere, or breaking their systems further because they were already on the edge. Inevitably, the end result was strained (or broken) relationships. Now, when I do someone a favor, I make sure it is something that I know I can actually handle.

      4. 11

        But he didn’t force anyone, he clearly says that if those people didn’t want his help, he could just leave it the way it was. To me that’s reasonable - you want my help, sure, but don’t make me do something I’m personally against. It’s like, while working in a restaurant, being asked to prepare meat dishes when being a vegetarian, except that my example is about work and his story is about helping someone, so there’s even less reason to do it against his own beliefs.

        1. 6

          From my experience being an unpaid support technician for friends and family, that’s the only reasonable approach. I had multiple situations when people called me to fix the result of someone else’s “work” and expected me to do it for free. It doesn’t work that way. Either I do it for free on my own terms, or you pay me the market rate.

          Some examples I remember offhand. In one instance, I tried to teach a person with a malware-infested Windows some basic security practices, created an unprivileged account, and told them how to run things as administrator if they needed to install programs and so on. A few weeks later I was called to find the computer malware-infested again, because they asked someone else to help and he told them that creating a separate administrator account was “nonsense” and gave the user account administrator rights. Well, either you trust me and live more or less malware-free or you trust that guy and live with malware.

          In another instance, I installed Linux for someone and put quite some effort into setting things up the way the person wanted. Some time later, they wanted some game but called someone else instead of me to help install it (I almost certainly would be able to make it run in Wine). That someone wiped out all my work and installed Windows to install that game.

      5. 5

        People expecting you to be their personal IT team for free just because you “know computers” is just as disrespectful. I don’t think it’s unfair to tell people, “No, if you want help with your Windows system, you need to pay someone who actually deals with such things.”

      6. 3

        The more time went by, the more I realised that this state of mind was particularly toxic and ultimately disrespectful of the real needs of the people around us.

        This is looking at things with the current context. Windows nowadays is much more secure, and you can basically leave a Windows installation to a normal user and not expect it to explode or something.

        However, at the time Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware. Most users ran with admin accounts, and it was really easy to get malware installed by installing a random program, because things like binary signatures didn’t exist yet. There was also no anti-malware installed by default in Windows, so unless you had some third-party anti-malware installed, your computer could quickly become infested. And you also couldn’t just refresh your installation by clicking one button; you would need to actually format and reinstall everything (which was annoying, because drivers were much less likely to be included in the installation media, so you would need another computer with an internet connection, since the freshly installed Windows wouldn’t have any way to connect to the internet).

        At that time, it made much more sense to try to convince users to switch to Linux. I did this with my mom, for example, switching her computer to Linux since most things she did were done via the internet. Migrating her to Linux reduced the amount of support I had to do from once a week to once a month (and instead of having to fix something, it was in most cases just updating the system).

        1. 8

          It should be added that if you helped someone once with their Windows computer, you were considered responsible for every single problem happening on that computer afterward.

          In some cases it was even a very serious problem. (I remember a computer which was infected by malware that dialed a very expensive line all the time. That family had a completely crazy phone bill and they had no idea why. Let me assure you that they were really happy with Linux for the next 3 or 4 years.)

          1. 4

            It should be added that if you helped someone once with their Windows computer, you were considered responsible for every single problem happening on that computer afterward.

            Very much that. It was never the user’s fault: even if you left the computer in pristine condition, if they had an issue in the same week it was your fault and you would need to fix it.

        2. 3

          However, at the time Windows was still the kind of operating system that, if you put it on the internet without the proper updates, would instantly be infected by malware.

          At the same time, however, it was also much more likely that you needed to deal with an application that would only run on windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on windows (remember winmodems? scanners sucked, too, and many printers were windows GDI only), etc.

          So convincing someone to use Linux was more likely to cause them a different kind of pain.

          Today, most hardware works reasonably with Linux. Printers need to work with iPhones and iPads, and that moved them off the GDI specific things that made them hard to support under Linux. Modems are no longer a thing for most people’s PCs. Proton makes a great many current games work with Linux. Linux browsers are first class. And Linux software handles most common file formats, even in a round trip, very well. So while there’s less need to switch someone to Linux, they’re also less likely to suffer if you do.

          That said, I got married in 2002. Right after I got married, I got sent on a contract 2500 miles away from home on a temporary basis. My wife uses computers for office software, calendar, email, web browsing and not much else. She’s a competent user, but not able to troubleshoot very deeply on her own. Since she was working a job she considered temporary (and not career-track) at home, she decided to travel for that contract with me, and we lived in corporate housing. Her home computer at the time was an iMac. It wasn’t practical to bring that and we didn’t want to ship it.

          The only spare laptop I had to bring with us, so she had something to use for web browsing and job hunting on the road, didn’t have a Windows license current enough to be trustworthy, so I installed Red Hat 7.3 (not Enterprise!) on there for her. She didn’t have any trouble. She’d rather have had a Mac, but we couldn’t reasonably have afforded one at the time. It went fine, but I’d never have dared to try that with someone who didn’t live with me.

          1. 2

            At the same time, however, it was also much more likely that you needed to deal with an application that would only run on windows, a file format that could only be roundtripped by such an application, a piece of hardware that only worked on windows (remember winmodems? scanners sucked, too, and many printers were windows GDI only), etc.

            Yes, but it really depends on the kind of user. I wouldn’t just recommend Linux unless I knew that every one of the user’s needs would fit in Linux. For example, for my mom: we had broadband Ethernet at the time, our printer worked better on Linux than Windows (thanks, CUPS!), and the rest of her tasks were basically done via the web browser.

            It went fine, but I’d never have dared to try that with someone who didn’t live with me.

            It also helped that she lived with me, for sure ;).

    5. 2

      Just posting mine. It is not really technical, as it covers more the philosophical and sociological aspects of our technologies. I also sometimes talk about Unix history, the command line and decentralized systems.

      https://ploum.net/index_en.html

      (I write mostly in French but I give you the link for the English-only version)

    6. 8

      Well, I check all the boxes. See: https://ploum.net/index_all.html or https://ploum.net/index_en.html if you want English only.

      You can also browse it on gemini://ploum.net (if you don’t know the gemini protocol, my take is that you will like it; it basically forces pages to be exactly like you described).

      My blog is made by having every post or page written as a gemini file; a python script then generates the index and converts everything to HTML.
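
      Gemtext is line-based, so the conversion is almost trivial. A toy sketch of the idea (not my actual script):

        # gmi2html.py: minimal gemtext-to-HTML conversion, one line at a time
        import html, sys

        def gmi2html(text):
            out, pre = [], False
            for line in text.splitlines():
                if line.startswith("```"):           # toggle preformatted blocks
                    pre = not pre
                    out.append("<pre>" if pre else "</pre>")
                elif pre:
                    out.append(html.escape(line))
                elif line.startswith("=>"):          # link line: "=> url label"
                    url, _, label = line[2:].strip().partition(" ")
                    out.append('<p><a href="%s">%s</a></p>' % (url, html.escape(label or url)))
                elif line.startswith("#"):           # heading line
                    n = min(len(line) - len(line.lstrip("#")), 6)
                    out.append("<h%d>%s</h%d>" % (n, html.escape(line.lstrip("# ")), n))
                else:
                    out.append("<p>%s</p>" % html.escape(line))
            return "\n".join(out)

        if __name__ == "__main__":
            sys.stdout.write(gmi2html(sys.stdin.read()))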

      Everything (texts, pictures, generating scripts) is contained in a git repository, which means people could potentially read my blog through git: https://sr.ht/~lioploum/ploum.net/
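
      (If you want to try, assuming the standard sourcehut layout for the clone URL, something like:)

        git clone https://git.sr.ht/~lioploum/ploum.net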

      Which means that a single 100 MB git repository contains 20 years of blogging, readable everywhere by everybody, even without a web browser!

      If you are a command-line nerd, you may appreciate the offpunk browser, which turns every website into what you described: https://offpunk.net/

    7. 76

      Which is more likely?

      1. All of the conspiracy theories are real! The industry managed to keep the evidence from us for decades, but finally a marketing agency of a local newspaper chain has blown the lid off the whole thing, in a bunch of blog posts and PDFs and on a podcast.
      2. Everyone believed that their phone was listening to them even when it wasn’t. The marketing agency of a local newspaper chain were the first group to be caught taking advantage of that widespread paranoia and use it to try and dupe people into spending money with them, despite the tech not actually working like that.

      My money continues to be on number 2.

      Here’s the PDF pitch deck. My “this is a scam” sense is vibrating like crazy reading it: https://www.documentcloud.org/documents/25051283-cmg-pitch-deck-on-voice-data-advertising-active-listening

      1. 35

        It’s a false dichotomy. This conspiracy can be real without all the others. It’s not all or nothing.

        local newspaper chain

        Cox? They’re a little more than that. These are the folks trying to sell me a Contour Voice Remote [1]. It’s not hard to imagine what they’re doing with the data.

        [1] https://www.cox.com/residential/tv/learn/remote.html

        1. 17

          I think you’re hitting the nail on the head here. I think the slides are getting crafty with language.

          The power of voice (and our devices’ microphones)

          They nudge the reader into an in-group mentality with “our”, to make them think they are talking about the devices we all have in our pockets. I think the reason their active listening occurs only once they’ve targeted a very specific geographic area is that it is their devices they are listening to in that area. I suspect they partner with stores, malls, car services, etc. to host always-recording microphones attached to reliable power sources, with decent acoustics (i.e. not in someone’s pocket). Then they use BTLE signals, etc. that stores already use for tracking consumers around stores to know which consumers are near the microphones and thus might be the ones talking about products they are about to buy. In other words, I think this is targeting people who are in Target (for example) wondering whether Amazon has a better price for the lotion they are standing in front of.

          While less paranoia-inducing, this would also be a sleazy thing to do.

          1. 6

            I suspect they partner with stores, malls, car services, etc. to host always-recording microphones attached to reliable power sources

            I think one team at Cox were lying to gullible ad customers, and they actually do nothing of the sort.

            1. 7

              Alexa devices scan local networks to gather intelligence for targeted advertising. A printer exposing ink levels over SNMP results in relevant ads.
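
              (The ink-level part, at least, is standard: networked printers answer Printer-MIB queries. A hypothetical query with the net-snmp tools; the host name and community string are assumptions:)

                # prtMarkerSuppliesLevel from the standard Printer-MIB (RFC 3805)
                snmpget -v2c -c public printer.lan 1.3.6.1.2.1.43.11.1.1.9.1.1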

              I’m honestly curious why you’d have such generous confidence in repeated convicts, multiple times fined by court for mistreating user data.
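
              For reference, exposing supply levels over SNMP is standard printer behaviour (Printer-MIB). A minimal sketch with pysnmp, against a hypothetical printer address:

              ```python
              # Hypothetical sketch: read a printer's marker supply level (ink/toner)
              # via the standard Printer-MIB OID. Address and community string are
              # placeholders; this only shows the data is trivially readable on a
              # LAN, not that any smart speaker actually does this.
              from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                                        ContextData, ObjectType, ObjectIdentity, getCmd)

              err, status, _, var_binds = next(getCmd(
                  SnmpEngine(),
                  CommunityData('public', mpModel=1),        # SNMPv2c, default community
                  UdpTransportTarget(('192.0.2.10', 161)),   # placeholder printer address
                  ContextData(),
                  ObjectType(ObjectIdentity('1.3.6.1.2.1.43.11.1.1.9.1.1')),  # prtMarkerSuppliesLevel
              ))
              if not err and not status:
                  for oid, value in var_binds:
                      print(oid, '=', value)
              ```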

              1. 16

                Because scanning local networks to gather intelligence for targeted advertising and exposing ink levels over SNMP are not the same thing as turning on a microphone and listening to what people are saying. Completely different levels of scandal.

                1. 7

                  “I can sense stuff you don’t expect and assume that implies consent to do so” is one level of scandal.

              2. 6

                Can you source that SNMP bit? Sounds interesting. I can’t find a credible source. It feels important to provide sources for such an allegation.

                1. 3

                  Seriously, I’ve now actually looked around and I’m finding nothing. I’m going to go ahead and say “that’s not true” on that one. Perhaps a flag for “needs source” would be nice lol

                2. 1

                  While I’d love to be able to provide my own data for the above statement, I own no Amazon devices apart from a Kindle Paperwhite 1. I have never confirmed this on my own (which I should have pointed out for clarity!), but here’s an old-ish reddit post about this: https://old.reddit.com/r/amazonecho/comments/ip5i1c/alexa_now_monitoring_working_with_my_ancient/

                  1. 1

                    So a reddit post? Where someone connected a device to their network and it explicitly integrated with other devices on the network, because it is a home management device for managing devices on a network?

                    I’m dismissing this entire thing. I mean, I could easily justify this behavior, but I’m not going to even think about it when the evidence is a confused, non technical redditor.

                    1. 1

                      I may have misremembered the source but I definitely read about it on some website, not reddit directly (even old.reddit.com doesn’t work for me anymore).

                      I have never owned an Alexa device, but this “integrating with other devices on the network” thing you mentioned would make me ditch the device the moment I noticed that it gathers intelligence to push me towards making more purchases via Amazon.

      2. 17

        But what’s the alleged “conspiracy”? I don’t think it takes a conspiracy to make this true.

        If people are saying that Facebook has a secret API in iOS and Android so they can listen to you when their app isn’t running and permissions are off, then I’d say “no that conspiracy requires too many parties to coordinate without it leaking”

        If people are saying that apps that have permission overcollect data, including audio data, and that this information eventually makes it back to Facebook for ad targeting, then I’d say “that probably happens”

        Why wouldn’t it happen?


        The whole reason sites like Reddit have purposely let their website rot and push the app so hard is that a web browser is sandboxed in a way that a native app is not

        I’m pretty sure the device ID is a huge thing for advertisers – if they didn’t have the device ID, they couldn’t join global profiles together, even when you are logged out

        https://www.appsflyer.com/glossary/device-id/

        https://stackoverflow.com/questions/2785485/is-there-a-unique-android-device-id

        The way I think it works is that there are a bazillion data streams from different apps, but you are not logged in on all the apps.

        You might look like 20 or 30 different users to the advertisers

        Now the trick is to find a fuzzy join key that can join most of your profiles together, because it’s shown to improve ad conversion rates

        I think basically what happened over the last 10 years is that the joining got better. That’s why people noticed more things “following them around”
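
        A toy sketch of that fuzzy joining (no real ad-tech system is this simple): profile fragments merge whenever they share a key such as a device ID, a hashed email, or an IP + user-agent fingerprint.

        ```python
        # Illustrative only: cluster profile fragments that share any join key.
        profiles = {
            "app_a_user_17": {"device:abc123", "email_hash:9f2e"},
            "app_b_user_42": {"device:abc123"},
            "web_visitor_7": {"email_hash:9f2e", "ip_ua:203.0.113.5/Safari"},
        }

        clusters = []  # list of (profile_ids, keys) pairs
        for pid, keys in profiles.items():
            hits = [c for c in clusters if c[1] & keys]   # clusters sharing a key
            for c in hits:
                clusters.remove(c)
            ids = {pid}.union(*(c[0] for c in hits))
            merged_keys = set(keys).union(*(c[1] for c in hits))
            clusters.append((ids, merged_keys))

        print([sorted(ids) for ids, _ in clusters])
        # -> [['app_a_user_17', 'app_b_user_42', 'web_visitor_7']]  (one "person")
        ```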

        Hell, I just got printed spam via snail mail from some company called ZocDoc (with whom I have no relationship, and to whom I never gave my mailing address or e-mail), recommending a dentist within 3 blocks of me, because I’ve been using the web to search for dentists. I guess all my profile data / location is leaking

        There is so much damn data out there that this is sort of inevitable

        Every damn company is pushing you to install their app, because when you do, it unlocks all the data from the other apps you use. They are pooling their data together, to “improve business for all”


        I guess Apple noticed all this and locked it down a LITTLE, but not 100%. I’d be interested in details/corrections from people who know more

        https://www.vox.com/recode/23045136/apple-app-tracking-transparency-privacy-ads

        https://developer.apple.com/documentation/apptrackingtransparency

        1. 7

          You may have seen in the news that there is implicit rental price collusion among landlords, via data sharing:

          https://www.ftc.gov/business-guidance/blog/2024/03/price-fixing-algorithm-still-price-fixing

          So I think that is analogous to what has happened in the ad industry. There is not necessarily an EXPLICIT secret agreement, but there is so much data sharing that it can give the EFFECT of coordinated action

          i.e. an implicit “conspiracy” rather than an explicit one

          And no doubt SOME of that data is audio data, collected from phones (where the app has permission).


          Also, the book “Chaos Monkeys” has some good detail on how Facebook came to join data in a way that Google doesn’t (or didn’t at the time)

          As far as I remember it involved pre-Internet “offline” databases of consumer information

          1. 14

            Right: that’s the thing. The rental price collusion among landlords is true. The way advertising companies merge data together from all sorts of different sources is also true. We need to know that those things are true so we can respond to them, because they are real threats to our privacy.

            “Facebook apps listen to you through your phone’s microphone and target ads at you” is not true. Believing it’s true is a distraction from the things we should be reacting to.

            1. 15

              What if I remove one word:

              Apps listen to you through your phone’s microphone and target ads at you

              Is that true from the average person’s perspective? I’d argue it is

              In a different comment, you said

              I care. This is such a damaging conspiracy theory. It’s causing some people to stop trusting their most important piece of personal technology: their phone.

              I’d say their phone is in fact untrustworthy. Why would they trust it? (honest question)

              And I personally behave in a way that’s consistent with that – I have never installed a social media app on my phone ever. If I have to use social media, I use the web

              I worked at Google for 11 years (not anywhere near ads), and I have only a rough idea how it all works, but I know that there is data sharing and tracking without any real consent

              As the salesperson explained, your “consent” is part of the multi-page EULA

              It’s also due to purposely grinding down the web experience and nagging you to death (and again, I personally withhold consent by not using many apps on my phone)


              The average person doesn’t know what any of these things are

              • operating system (what does that do?)
              • permissions (I just click the button because I want to do the thing that they said I can do)
              • ad network, ad exchange
              • bidding
              • first party / third party

              So again, if someone says, Apps listen to you through your phone’s microphone and target ads at you, then I’m not really going to disagree with them

              Just like they can say landlords collude to fix prices – that also appears to be true, and is a new type of crime enabled by technology

              1. 10

                On the iPhone there’s an orange dot that displays whenever an app is actually using the microphone.

                Yeah, I’ve seen variants of this argument before: “phones do creepy things to target ads, and it’s not exactly ‘listening through your microphone’, but there’s no harm in people believing that if it helps them understand that there’s creepy stuff going on generally.”

                I don’t buy that. Privacy is important. People who are sufficiently engaged need to be able to understand exactly what’s going on, so they can e.g. campaign for legislators to rein in the most egregious abuses.

                I think it’s harmful letting people continue to believe things about privacy that are not true, when we should instead be helping them understand the things that are true.

                This discussion thread is full of technically minded, engaged people who still believe an inaccurate version of what their devices are doing. Those are the people that need to have an accurate understanding, because those are the people that can help explain it to others and can hopefully drive meaningful change.

                1. 2

                  (Rewrote 2 comments because I realized they conflated 2 things, and are too long)

                  On the question of whether the salesman is lying, we don’t need to invoke any conspiracy or technical inaccuracy. I think the most likely scenario is:

                  • Cox Media Group does have an app, and they convinced some people to install it, because it does something useful
                  • It records some audio with permission. Whether the orange indicator is on is irrelevant - I have no doubt they are able to get some data.
                  • It uses some hacked together voice recognition to turn audio into text. There is no advanced AI.
                  • This is fed into some industry-wide service, in exchange for other joined data.

                  So the salesman is not lying when he says:

                    “Active Listening” software uses artificial intelligence to “capture real-time intent data by listening to our conversations.”

                  Advertisers can pair this voice-data with behavioral data to target in-market consumers

                  Just exaggerating. From experience, these companies don’t have the kind of engineering that FB/Google do.

                  (e.g. Google engineers bypassed Safari protections to collect more data (and paid a settlement); I don’t think most companies do that.)


                  Now, this single Cox Media Group incident does NOT necessarily justify widespread consumer perception that their phones are untrustworthy, or that they are being constantly spied on via audio.

                  But I would ask if you can rule that out.

                  You can consider the CMG app an instance of “grayware”. Surely it’s not the only one that exists. The incentive is there for thousands of such apps to exist.

                  • I am reminded of all those grayware search toolbars that were (are?) so prevalent on Windows machines (I think tens or hundreds of millions of machines). You or I would instantly notice that and remove it, but many users won’t
                    • Did anyone ever consent to them? Weren’t they allowed by Chrome’s or IE’s app permissions? All it takes is a click to consent
                    • Some version of this IS happening right now on phones – we just don’t know how prevalent it is. There are regular “outbreaks” in the Android ecosystem, and no doubt iOS. It’s an ongoing war. (Again the “story about Jessica”, while fictional, I think gives a flavor of how different most people’s experiences with computers, and motivations, are from “us”)
                  • Wikipedia said there are 2.2 million iOS apps, and Android probably has more. Data collection is a huge incentive for basically all of them – otherwise they would just be websites. (It’s expensive to create both Android and iOS apps)

                  It’s a question of degree, not “if it happens”. Trust is also not binary, and some users have experiences that rationally lead them to trust less than others.

                  1. 3

                    This pitch deck does not read to me like the deck of a company that has actually shipped their own app that tracks audio and uses it for even the most basic version of ad targeting: https://www.documentcloud.org/documents/25051283-cmg-pitch-deck-on-voice-data-advertising-active-listening

                    They give the game away on the last two slides:

                    Prep work:

                    1. Create buyer personas by uploading past consumer data into the platform
                    2. Identify top performing keywords relative to your products and services by analyzing keyword data and past ad campaigns
                    3. Ensure tracking is set up via a tracking pixel placed on your site or landing page

                    Now that preparation is done:

                    1. Active listening begins in your target geo and buyer behavior is detected across 470+ data sources […]

                    Our technology analyzes over 1.9 trillion behaviors daily and collects opt-in customer behavior data from hundreds of popular websites that offer top display, video platforms, social applications, and mobile marketplaces that allow laser-focused media buying.

                    Sources include: Google, LinkedIn, Facebook, Amazon and many more

                    That’s not describing anything ground-breaking or different. That’s how every targeted ad platform works: you upload a bunch of “past consumer data”, identify top keywords and set up a tracking pixel.

                    I think active listening is the term that the team came up with for “something that sounds fancy but really just means the way ad targeting platforms work already”. And then they got over-excited about the new metaphor and added those first couple of slides that talk about “voice data”, without really understanding how the tech works or what kind of a shitstorm it could kick off when people who DID understand the technology started paying attention to their marketing.
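
                    (For anyone wondering what “a tracking pixel placed on your site” actually is, here’s a bare-bones, hypothetical sketch: a 1x1 GIF endpoint whose only real job is logging who fetched it, and from which page.)

                    ```python
                    # Minimal sketch of a tracking-pixel endpoint (hypothetical).
                    from http.server import BaseHTTPRequestHandler, HTTPServer

                    GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
                           b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
                           b"\x00\x00\x02\x02D\x01\x00;")  # a valid 43-byte 1x1 GIF

                    class Pixel(BaseHTTPRequestHandler):
                        def do_GET(self):
                            # the metadata is the point, not the image
                            print(self.client_address[0], self.path,
                                  self.headers.get("Referer"),
                                  self.headers.get("User-Agent"))
                            self.send_response(200)
                            self.send_header("Content-Type", "image/gif")
                            self.end_headers()
                            self.wfile.write(GIF)

                    HTTPServer(("", 8080), Pixel).serve_forever()
                    ```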

                2. 1

                  To be fair, I mostly agree with you, but this is a tricky topic. There are multiple ‘conspiracies’ and multiple claims.

                  I do not believe my personal iPhone has recorded any audio that’s made it “downstream” to the ad/behavior/intent market. That is seemingly not how the sausage gets made.

                  However, there are lots of audio sources that I believe feed the downstream: voice control remotes, video doorbells, anything I say in a supermarket. I do not think there’s much conspiracy in saying all that audio is fair game. If all that audio is on the table, so to speak, it’s going in the sausage; they’re not leaving it on the floor.

                  Other sources, I’m not sure. Hey Alexa, I’m not sure (I don’t own that stuff). Voicemail speech-to-text, I’m not sure. Baby monitors, I’m not sure.

                  So there are the too-far-too-specific claims, like your-iPhone-is-listening-always, that they can confidently deny (without mentioning they don’t even need that, as much as they’d like it). Is it bad journalism to jump to the (likely false) conclusion? Sure. I don’t know why they do that. It muddies the topic, and gives them an out. That is not a hill I wanna argue on.

                  But to claim the slides are faked?? That’s wild, to me. There are clearly legitimate sources for this audio, and business interest, and technical capability. The sausage does get made, it would seem (the question is: out of what?). The slides do not say that your-iPhone-is-listening-always, so it’s like there are 2 conversations going on. A tricky topic.

                  1. 2

                    I do not think there’s much conspiracy in saying all that audio is fair game.

                    There are relevant laws to consider. The US has various federal and state level wiretapping and eavesdropping laws. There are privacy laws like GDPR in the EU and CCPA in California. Illinois even passed its own “Keep Internet Devices Safe” act, albeit with lobbyist alterations that will stir up skeptics even more.

                    1. 9

                      Tech companies do a lot of bad things. That’s why I care about us accurately describing the bad things they do, rather than saying “Yeah, Facebook probably advertise to you based on listening to what you say through your microphone, that’s the kind of thing they would do.”

      3. 12

        I couldn’t agree more. The technical aspects of option 1 are usually overlooked, especially when it comes to power usage. I have some experience with audio fingerprinting on smartphones (kind of like Shazam, but for TV/radio commercials). Even turning on the mic for one second every 10 will absolutely obliterate your battery. This is not just in terms of daily consumption; expect the overall battery lifespan to be severely reduced. Back in the day you would have to swap your battery for a new one every other month. That is to say: people will notice if any app is sampling the microphone constantly.
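
        For a sense of what that kind of fingerprinting involves, here’s an extremely simplified sketch (real Shazam-style matchers hash constellation maps of spectral peaks; this just hashes coarse per-window spectra):

        ```python
        # Extremely simplified audio-fingerprint sketch (illustrative only).
        import numpy as np

        def fingerprint(samples, window=1024):
            hashes = []
            for i in range(0, len(samples) - window, window):
                spectrum = np.abs(np.fft.rfft(samples[i:i + window]))
                peaks = tuple(np.argsort(spectrum)[-4:])  # 4 strongest bins as a coarse signature
                hashes.append(hash(peaks))
            return hashes

        def match_score(clip, reference):
            # fraction of the clip's window-hashes that appear in the reference
            ref = set(reference)
            return sum(h in ref for h in clip) / max(len(clip), 1)
        ```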

        1. 13

          Back in the day when I had a Google account and an Android phone (circa 2018), I tried downloading all the data that Google had about me in their cloud. Inside, I found many audio recordings that appeared to be random parts of everyday life.

          I could hear myself playing with my children far from the phone. I caught glimpses of several conversations with my wife.

          Those were actual audio files stored on Google’s cloud. I had no knowledge of it. I had never asked for anything to be recorded. In fact, I even had “Ok Google” disabled (because of false positives). Yet those audio files were there, and there was nothing preventing Google from analysing them.

          In fact, for some snippets, I even suspected that my phone was in airplane mode while they were recorded (my phone is, by default, in airplane mode at home). So those were probably recorded and then sent afterward.

          That was six years ago. At the time I, like you, considered that phones could not listen all the time, but I had to surrender to the evidence: Android phones do listen all the time and send random audio excerpts of your life to Google servers. That’s a hard, indisputable fact.

        2. 9

          How do “OK, Google” and “Hey Siri” work?
          Listening through the mic is one thing; sending audio out from the device is another. I think simple voice pattern matching against a predefined, tailored set of keywords, downloaded regularly to the device, can be kept cheap enough that the additional power consumption goes unnoticed.
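
          Roughly this idea, as a toy (the real pipelines are nothing like this and, as the reply below notes, run on dedicated low-power hardware): slide a stored template over incoming feature frames and only wake the main CPU when the match score crosses a threshold.

          ```python
          # Toy "wake word" spotter (illustrative only; invented feature values).
          import math

          def similarity(a, b):
              # cosine similarity between two equal-length feature windows
              dot = sum(x * y for x, y in zip(a, b))
              na = math.sqrt(sum(x * x for x in a))
              nb = math.sqrt(sum(x * x for x in b))
              return dot / (na * nb) if na and nb else 0.0

          def detect(frames, template, threshold=0.95):
              n = len(template)
              for i in range(len(frames) - n + 1):
                  if similarity(frames[i:i + n], template) >= threshold:
                      return i          # match found: only now wake the main CPU
              return None

          template = [0.1, 0.8, 0.9, 0.3]  # pretend feature sequence for the keyword
          stream = [0.0, 0.05, 0.1, 0.8, 0.9, 0.3, 0.0]
          print(detect(stream, template))  # -> 2
          ```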

          1. 25

            There’s a dedicated low power chip for those “wake words”, at least on iPhones.

            Only Apple can update the firmware for that.

            1. 4

              Only Apple can update the firmware for that.

              Supposedly, with almost no way to verify it. And big companies have used device exploits in the past for gain, so I still default to zero trust with all devices, Apple included.

                1. 3

                  Because it’s worth hundreds of millions in recurring monthly income?

                  1. 5

                    I just don’t think it is.

                    If you want to make a ton of money effectively targeting ads at people, I think you want to know their age, gender, location and general demographics.

                    Snippets of conversations they had are so much less useful than that. What if they were sat in a coffee shop next to some loud talkers? What if they left the phone near the radio?

                    I’ll believe audio snippets from phones are valuable when they become a serious part of the conversation around selling ads (and I don’t mean the Cox team who briefly promoted this last year and then dropped all references to it).

                    1. 2

                      I don’t believe Facebook is hacking anything, but in terms of using audio for targeting it’s quite doable.

                      Smart TVs already have the ability to identify what you watch and listen to, and they’re not hiding it.

                      ML around sound recognition has gotten really good recently. Detecting radio or non-conversational speech is perfectly doable. It’s also possible to estimate age and gender of speakers.

                      Even without phone location access FB knows where its long-term users live from IPs & usage patterns + GPS clusters of photos.

                      Note that the data doesn’t have to be perfect, nor explicit. It’s just more features to throw into the big machine learning pile.

                      1. 1

                        Smart TVs already have the ability to identify what you watch and listen to, and they’re not hiding it.

                        I think that’s a case in point: we all know smart TVs tell the mothership what TV shows you are watching. It’s not a conspiracy theory. It’s well known. (Incidentally, now that you can’t rely on public TV ratings anymore, this data is very valuable to the streamers, who want to know which of their competitors’ shows are most popular.)

                        How could they be secretly burning zero-days to turn on microphones, telling no one, and yet still raking in enough money to make up for the risk? It doesn’t make sense. If they were doing it, it wouldn’t be a secret.

          2. 2

            Yeah. If I were in charge of avoiding mic-use detection, I’d use beacons/geofencing to listen only in commercial zones, to increase the likelihood of picking up something useful. Avoid the radio draining the battery by saving audio and uploading only when wifi is available. And minimize or skip actual processing of audio on the phone; let remote servers handle that.
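
            As a sketch of that hypothetical strategy (to be clear: no claim that any app actually does this), the control loop would be tiny:

            ```python
            # Hypothetical evasion strategy, illustrative only.
            import math

            ZONES = [(37.7749, -122.4194, 0.5)]  # (lat, lon, radius_km) of commercial areas, assumed

            def in_zone(lat, lon):
                for zlat, zlon, r in ZONES:
                    dx = (lon - zlon) * 111.32 * math.cos(math.radians(zlat))
                    dy = (lat - zlat) * 111.32  # rough km-per-degree, fine at city scale
                    if math.hypot(dx, dy) <= r:
                        return True
                return False

            buffered = []

            def tick(lat, lon, on_wifi, capture_audio, upload):
                if in_zone(lat, lon):
                    buffered.append(capture_audio())  # record only where it might pay off
                if on_wifi and buffered:
                    upload(list(buffered))            # ship raw audio cheaply over wifi;
                    buffered.clear()                  # no on-device processing at all
            ```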

        3. 3

          I am also suspicious of these claims for the same reasons as you. However, reading through the linked slides, the time when the microphone would need to be active seems pretty narrow. They claim to only do so after you’ve paid a daily rate for a specific area and once they know the ad metadata that is best associated with your product. They could pretty easily use other data sources to eliminate most potential targets in the area and be sure to only listen to each phone once for multi-day engagements. So this would look like one of those times where you thought you had half a battery but a few hours later you’re at 10% and you can’t quite remember if you did actually have half a charge.

          The other reason I’m very suspicious of these claims is that voice data just doesn’t seem that helpful. I don’t talk to anyone about most of the things I buy. The shopping conversations I have with my wife are “did the toilet paper ship yet?” (well after a purchase) and “should I grocery shop on Saturday or Sunday?” (with no content about products). Intersect that with the probability of you listening when I happen to be talking about it and we must be near $0 expected value.

        4. 2

          This is a good point, but the TechCrunch article mentions smart TVs, which are conveniently plugged into the wall.

      4. 9

        For some people it’s more comforting to believe that malevolent global powers are spying on you than to accept that you’re not that unique, and that using a few fairly public signals you can be characterized and have ads targeted at you fairly accurately.

        1. 6

          Anecdotally, I spoke on the phone with my dad about an upcoming visit to a family member several states away in GA, and later that day told my wife in person that I needed to make a dentist appointment soon. Later that day I got a YouTube ad for dentists in the town I was going to visit in GA.

          This isn’t proof of anything, but it’s a hell of a lot more than “you’re just not that unique, get over yourself”

          1. 4

            You can visit https://myadcenter.google.com/u/0/home to see a bunch of information about what Google are using to target ads to you - and https://myactivity.google.com/myactivity for even more detail.

            Oh interesting! There’s actually a setting on https://myactivity.google.com/activitycontrols?utm_source=my-activity for “Include voice and audio activity” which defaults to off - but the information panel about it explains that if you turn this on they use your “Hey Google…” audio snippets like this:

            Google uses audio saved by this setting to develop and improve its audio recognition technologies and the Google services that use them, like Google Assistant.

            1. 2

              This scenario is explainable with phone company voice transcription.

          2. 4

            You get a lot of ads for things other than dentists, and you probably don’t notice dentist ads when you’re not thinking about needing to make an appointment. As for the geographic specificity, I’d blame a web search or map lookup or a data broker buying your travel plans from an airline company or something.

            I’ve worked on the audio stack for mobile devices and you really couldn’t justify the power consumption for always on recording, let alone voice recognition and uploading it to the cloud.

          3. 1

            I’m not saying something didn’t listen to you and make targeted ads for you, but it seems odd to assume it was your cell phone.

            Cell phones are battery powered and resource constrained. Much easier to use things that are always plugged in that are around you, or have someone in the middle of the communication path listen in. Where they are not as resource constrained.

            It would be very interesting if you spent the time trying to figure out what if any device it might in fact be, and get network traces to prove it.

            I’ve never had anything like this happen to me, but it’s also possible that it never will since I have little tech near me that could listen in and block almost all ads from reaching me anyway.

            1. 3

              I don’t have any smart devices or assistants, so it would have been either my phone or my laptop ¯\_(ツ)_/¯

              I spent a good bit of time racking my brain over what else could have brought that up, but I hadn’t done any googling (I already had a dentist) or maps searches (I’d been to that family member’s house before)

              It would be really interesting to get some network traces though; I’ve considered setting something up to block ad trackers in general

      5. 7

        If you can come up with a reasonable explanation for why I get ads 5 minutes after talking about something that I am positive is:

        1. Not something I’ve ever searched for directly
        2. Not something I can even use
        3. Not in anyway related to other interests

        I’ll listen. I just spoke of a product. Going to browse the web a bit and see if I get related ads.

        (Edit: To be clear, I don’t believe the mic idea. I do think that human behavior is easy to manipulate and engineer. But I have, on numerous occasions, wondered what the odd set of steps were that led to being served an ad for Rice-a-Roni, the product I spoke of 30 minutes ago and am now seeing ads for. Something caused me to believe I have no connection to that item (it’s not in the stores I shop at, and not something I can even eat), yet here I am being targeted for it. The explanation of “conspiracy” is just obvious, right?)

        1. 8

          Coincidence.

          Try this exercise: make a note of every time you say anything out loud within range of a microphone. Then note how often you see an ad related to the thing you said within the next five minutes. The goal here is to count how often you DON’T see an ad relating to a snippet of audio.

          This exercise is deliberately absurd, because nobody would ever make notes that detailed about what they were saying… but if you did, I bet the number of times a relevant ad came up would be a fraction of a fraction of a percent.

          And that’s what’s happening. We don’t notice all of the times that we say something and our devices DON’T then show us an advert - but when it does happen (purely out of coincidence, combined with our broad demographics: I see ads that a 40-something Californian male might be interested in) we instantly associate it with our recent conversations.
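
          A toy simulation of that base-rate effect, with completely invented numbers, shows how even a tiny per-topic coincidence rate produces plenty of spooky stories at population scale:

          ```python
          # Invented numbers throughout; only the base-rate logic matters.
          import random

          random.seed(1)
          people, days = 10_000, 30
          topics_per_day = 20   # things a person mentions out loud near a mic (made up)
          p_match = 0.0005      # chance one topic coincidentally matches a served ad (made up)

          spooked = sum(
              1 for _ in range(people)
              if any(random.random() < p_match for _ in range(days * topics_per_day))
          )
          print(f"{spooked} of {people} people saw at least one 'impossible' coincidence")
          # with these numbers, roughly a quarter of everyone, every single month
          ```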

        2. 4

          Why were you talking about it if you have no connection to it? How did it enter your mind?

        3. 3

          Are you sure it’s not more simply explained by selective memory? I frequently catch myself looking at a TV at the gym and seeing an advertisement for something that seems targeted to me, only to realize it’s impossible. I don’t tend to remember the other ads.

      6. 5

        I think your comment is a useful corrective, but I’ve only been hearing people I know talk about hyperspecific targeted advertising that can be connected to your speech for a few years. So it wouldn’t be that the industry was doing it for decades, but that they were doing it for a few years.

          1. 4

            …I had one of those terrible moments where I started to say “yeah, just a few years ago”. My felt sense of time is wrong: 2017 is not just a few years ago, but it’s also not decades, meaning we were both off by a bit.

            And to your point, keeping it a secret for 7 years is more work than for 3.

      7. 3

        it’s not a conspiracy when it’s true. If the code is closed source and the employees sign NDAs, how are people supposed to get evidence? It’s not even that crazy to think it without evidence. It has happened hundreds of times to me and people I know: after mentioning something in a conversation, a related ad pops up on fb. Now I might not know the technical details, but it’s definitely not a coincidence if it happens every single time to everybody.

        1. 4

          It doesn’t happen “every single time to everybody”.

          If this had been going on for the past 5-10 years, enough people would know about it that somebody would have leaked - NDAs are one of the reasons journalists sometimes grant anonymity to their sources.

          (Conspiracy theories can be true - but this one definitely isn’t.)

          1. 3

            How would people even know? Everybody who cares about this is using an ad blocker, right? Right?

          2. 2

            I don’t know why you insist on the conspiracy theory angle. It’s no state secret that, e.g., google collects data on your every movement through google maps and google play services. The average person doesn’t care as long as they can get an uber. Same thing with FB: the average person won’t care that their audio data is collected without their consent as long as they can use facebook. People are far less concerned about data when they are asked to change their habits. FB doesn’t need to hide things, but what happens to fb if this comes out with evidence? Nothing. Because at best people don’t care. And those who care are not on facebook. So what’s the great conspiracy here? Data collection is old news

            1. 4

              I spent some time yesterday digging through the Facebook and Google tools that allow you to view and export the data that they are using for targeting ads to you.

              They are actually extremely transparent: You can see exactly what kind of location data they are keeping, plus lists of companies that they have identified you as interacting with.

              There is no hint of the kind of audio data we are discussing here. The closest is Google’s defaulted-to-off preference that allows them to use your “hey google” audio snippets for further improvements to that model.

              Why would they be transparent about all of their other creepy location data, but entirely omit the audio stuff?

              I think because they are not storing audio content in the first place.

      8. 3

        More likely, who cares? We’re focused on finding which is more profitable, and it turns out misinformation is wildly more profitable than providing a useful and reliable service. Lying to people and making society dysfunctional is a small price to pay ;)

        1. 18

          I care. This is such a damaging conspiracy theory.

          1. It’s causing some people to stop trusting their most important piece of personal technology: their phone.
          2. We risk people ignoring REAL threats because they’ve already decided to tolerate made up ones.
          3. If people believe this and see society doing nothing about it, that’s horrible. That leads to a cynical “nothing can be fixed, I guess we will just let bad people get away with it” attitude. People need to believe that humanity can prevent this kind of abuse from happening.
          1. 10

            People need to believe that humanity can prevent this kind of abuse from happening.

            The evidence seems to suggest we can’t, given humanity can’t even ameliorate its own rapidly approaching downfall.

          2. 10

            People need to believe that humanity can prevent this kind of abuse from happening.

            People shouldn’t believe things the evidence keeps pointing away from. Are you aware of any instances of someone not getting away with this kind of thing in the last decade or two?

          3. 10

            I’d say the real damage is (4): it further estranges folks from an understanding of their property, making it harder for them to control. On (1), phones shouldn’t be trusted by default, not as long as their manufacturers and carriers insist on being so undeserving of trust.

            But you’ve talked to Facebook users, right? On (2), a close friend replied to learning that Facebook materially contributes to three genocides by explaining that they only use it to stay in touch with friends and family, and also Marketplace. On (3), they see anybody with phone discipline as engaging in some sort of illusory moral elitism rather than genuinely caring about health and safety. Facebook is a real threat and people invite it in anyway.

          4. 6

            I also care, and I think the right view here is to act as if this is happening.

            If there is a chance that ad-partners are injecting this kind of functionality into Facebook then Facebook needs to fix that. If there is a chance that people are giving shady apps too much access to their microphone then Apple and Google need to fix that.

            It doesn’t really matter to me at what scale it’s happening, it should be next to impossible. I’m sure there will always be people with sufficiently low moral standards to do it if they can figure out how.

            1. 4

              Is there any technical evidence this is happening, or conjecture based on “pattern recognition” (cognitive biases) and these slides?

          5. 3

            I was being facetious, sorry if that wasn’t clear. This is dangerous misinformation.

      9. 2

        Strong agree here. I got a major sense of deja vu from this story - pretty sure some other random marketing company made the same claim a few years ago and it was swiftly debunked?

        1. 1

          From that video’s own description:

          As pointed out in the comments, there are too many flaws in my methodology to draw any conclusions (for instance I am live streaming directly to YouTube which of course necessitates recording my microphone the whole time).

          1. 1

            Well, someone should repeat the experiment without livestreaming then. What we’re doing instead is deliberating whether it’s a conspiracy theory or not while the truth is within reach of a scientific experiment.

            1. 13

              The fact that nobody has successfully produced an experiment showing that this is happening is one of the main reasons I don’t believe it to be happening.

              It’s like James Randi’s One Million Dollar Paranormal Challenge - the very fact that nobody has been able to demonstrate it is enough for me not to believe in it.

      10. 2

        Yeah my bullshit detector has gone off on this every time it’s been “proven”. I understand that ad targeting is really good, but it just doesn’t pass the smell test to think that a weirdo marketing agency has figured out a way around Apple’s permission structure and not, say, the NSA

        1. 5

          I’m not making a claim either way, but I think your logic is flawed there.

            Finding out a “weirdo marketing agency” is doing it doesn’t say anything about whether the NSA is doing it.

    8. 1

      I’m really waiting for mine, but it seems that it’s still somewhat hackish to get it working:

      https://alexschroeder.ch/view/2024-08-06-pocket-reform

      1. 1

        Yes, Alex encountered a few of the things I did there, as well. The IRC and forums are helpful.

        • the standby power switch he refers to poking at through the hole: in my unboxing video I mentioned that I unscrewed the side panel so that I could see it more clearly.
        • don’t mess with uboot on the Pocket Reform - this has now been clarified in the latest version of the tools package after my feedback…
        • Tuba, the Mastodon client, does indeed want you to have a keychain manager, and I also have installed Seahorse for that (this is stuff I need to cover in a follow-on post)
    9. 3

      While I like the idea, I would say that the very first question is “what BSD should I choose?”

      But maybe your target is more people who have to use a given BSD (probably coming from Linux)

      1. 3

        The BSD People’s Front!

        1. 4

          Oh I thought we were the Popular Front.

    10. 2

      This is fascinating. I will dive into this blog to see how one could switch from vi to ed as a daily driver.

      If the author is reading this (no contact info on the blog), I’m eager to have your feedback about Offpunk, as it allows you to surf the Web using less.

      1. 2

        I will dive into this blog to see how one could switch from vi to ed as a daily driver.

        Excellent (unintentional?) example of Poe’s law (https://en.wikipedia.org/wiki/Poe%27s_law).

          1. 1

            Right, I’m aware of that. Poe’s law applies there too. It’s Poe’s law all the way down. To be more explicit, I don’t mean this as an insult to you or the original blogger, but this whole conversation about using ed as a daily driver reads like a (possible) parody of xkcd.

    11. 39

      An incredibly long ramp up to complaining about centralised control by rent seekers (a very reasonable complaint!) which gets bogged down in some ostensibly unrelated shade about whether client-server computing makes sense (it does) or is itself somehow responsible for the rent seeking (it isn’t; you can seek rent on proprietary peer to peer systems as well!) to then arrive at:

      There’s going to be a new world of haves and have-nots. Where in 1970 you had or didn’t have a mainframe, and in 1995 you had or didn’t have the Internet, and today you have or don’t have a TLS cert, tomorrow you’ll have or not have Tailscale. And if you don’t, you won’t be able to run apps that only work in a post-Tailscale world.

      The king is dead, long live the king!

      1. 7

        So it’s just an ad for Tailscale? Is it really relevant, or just a clickbait title?

        1. 12

          I mean it’s on the tailscale blog… There should really be a tag for corporate blog posts.

          1. 15

            I recall finding a lot of tailscale blog content interesting in the past, which is part of what made this post so jarring.

    12. 0

      TL;DR: put your open source code under the AGPL license.

      This sucks. BSD/MIT is the way to go. Easy contributions from industry.

      1. 13

        I suggest that you spend a little time thinking about “why the industry would not contribute to GPL licensed software?”. I mean really thinking about it.

        Or maybe just read the whole article before commenting.

        1. 6

          I have participated in thriving commons around horrible terrible no-good very-bad “open source” permissive-licensed software.

          And I read your entire article and didn’t find any compelling argument in it. To be honest, it comes across as a bunch of unrelated complaints that you hoped would all fit together into a single omni-complaint against “corporations”, but you’ve failed to make the case for why or how they should all fit together or why getting rid of them would lead to a magically better world. There was a time when they didn’t yet exist, and we have history books to tell us what it was like, and people were not happy with it!

          1. 4

            The argument seems fairly obvious–if corporations want to run an AGPL dependency in their proprietary software, they can either release their own software as AGPL, or buy a license for the dependency. This seems very fair and equitable–and sustainable to boot. This is why you see many smaller SaaSs nowadays that are AGPL, eg https://reacher.email/

            1. 5

              The primary use case of the AGPL, in the current era, is to allow the original author of a piece of software to maintain a monopoly on commercial exploitation of that software by creating an explicitly unfair playing field.

              Suppose I write a piece of software that I release under AGPL and which grows popular. All I have to do is get contributors to sign a CLA (pretty standard) or even a full copyright assignment (FSF is notorious for its long-running requirement to do this), and I can build a VC-backed SaaS startup which mixes the software with proprietary extensions/features that I keep private. But nobody else can do that because the AGPL forbids them from having internal-only modifications – they must release their changes and lose any competitive advantage they might get.

              1. 2

                But nobody else can do that because the AGPL forbids them from having internal-only modifications – they must release their changes and lose any competitive advantage they might get.

                Wait, this doesn’t compute. If the original company publishes the project under AGPL, and requires CLA so that they can switch to closed source eventually, doesn’t forking and improving the software under AGPL without CLA create a rift between the two versions?

                If the original company decides to reincorporate changes from the fork, it infects its own code base with code they cannot close-source anymore, and the whole work thus cannot be closed source because of AGPL virality. Thus the second company can freely pull from upstream, add their own features, and thereby hold an advantage?

                Also, contributing, even under a CLA, still gives you the ability to keep the whole work you improved before it was closed-sourced and possibly work from there. Or to find a new vendor to maintain a fork for you, if you are a large enough consumer, or a consortium of consumers, a consumer cooperative…

                1. 5

                  Wait, this doesn’t compute. If the original company publishes the project under AGPL, and requires CLA so that they can switch to closed source eventually, doesn’t forking and improving the software under AGPL without CLA create a rift between the two versions?

                  IANAL but my read is that it’s not about making it closed-source eventually, it’s about you having the ability to do whatever you like with it because you’re the copyright holder and so the license terms don’t actually apply to you. (Effectively, you can relicense it as whatever to yourself.) So they can maintain a closed fork, or otherwise use/deploy/link the software in ways the AGPL doesn’t permit, because as copyright holder of the entire work they can (implicitly?) relicense it to themselves at will. If they didn’t hold the copyright for all of it (or have CLAs allowing them to relicense all of it), they couldn’t do that.

                  I might be wildly off on all of this, someone please correct me if so.

                  1. 1

                    I can’t see many such uses besides running it as a SaaS without sharing the code (closed-sourcing it). And once they incorporate changes from forks, they can no longer do that. So it’s either embrace the AGPL, or go closed source immediately, or risk somebody undercutting you. So I don’t quite see how the AGPL would aid one in building a monopoly.

              2. 2

                I can build a VC-backed SaaS startup which mixes the software with proprietary extensions/features that I keep private.

                No you can’t. Not with AGPL. Not even if you fully own the copyright by using CLAs. That’s the point of AGPL, the licensee has the right to ask the licensor (the copyright holder) for the source code and get it even if the licensee only accesses the software over a network interface. There is no distinction between copyright holder and any other entity.

                1. 5

                  The distinction of the copyright holder is that they can decide who gets what licensing terms, so of course their startup is not using the software under AGPL.

                  1. 3

                    OK, understood. But how is this worse than what permissive licenses like MIT allow? Their users can take any software and use it in proprietary products without paying anyone anything. At least with the AGPL, the primary developer of the software can earn a fair revenue for their efforts.

                    1. 3

                      It’s not worse! Richard Stallman endorses it and has suggested it to companies.

                      https://www.gnu.org/philosophy/selling-exceptions.html

                    2. 2

                      At least with AGPL, the primary developer of the software can earn a fair revenue for their efforts.

                      Which of the Software Freedoms guarantees “You will get paid fair revenue for the Software”?

                      Nobody ever promised you a successful business model. But people who keep adopting the AGPL (and as already noted, you’re wrong about the AGPL not working for this – the whole point, as noted above, is that the original author is not bound by the AGPL while all potential competitors are) are doing it specifically to try to prop up a business model. Neither I nor you should care about whether their VC-backed SaaS startup succeeds. Especially you should not care, if “the commons” is what you care about, because using AGPL in this fashion is explicitly anti-“commons”.

                      1. 5

                        using AGPL in this fashion is explicitly anti-“commons”.

                        Richard Stallman disagrees with you.

                        https://www.gnu.org/philosophy/selling-exceptions.html

                        When I first heard of the practice of selling exceptions, I asked myself whether the practice is ethical. If someone buys an exception to embed a program in a larger proprietary program, he’s doing something wrong (namely, making proprietary software). Does it follow that the developer that sold the exception is doing something wrong too?

                        If that implication were valid, it would also apply to releasing the same program under a noncopyleft free software license, such as the X11 license. That also permits such embedding. So either we have to conclude that it’s wrong to release anything under the X11 license—a conclusion I find unacceptably extreme—or reject the implication. Using a noncopyleft license is weak, and usually an inferior choice, but it’s not wrong.

                        In other words, selling exceptions permits limited embedding of the code in proprietary software, but the X11 license goes even further, permitting unlimited use of the code (and modified versions of it) in proprietary software. If this doesn’t make the X11 license unacceptable, it doesn’t make selling exceptions unacceptable.

                        I consider selling exceptions an acceptable thing for a company to do, and I will suggest it where appropriate as a way to get programs freed.

                        1. 2

                          I don’t understand the focus on selling exceptions here. None of the companies I’m talking about are in the business of selling exceptions to the GPL or AGPL – they’re in the business of building their own, non-Free, proprietary SaaS stacks which they can do because they hold the copyright, and releasing select parts of it under AGPL or similar licenses which make it difficult/impossible for any other business to compete with them. Often they started out with those bits under a more permissive license, to build interest and a community, and relicensed to AGPL or “source available” later on when they decided they didn’t want competition.

                          If you can find Stallman saying that is a great and good thing that he’s OK with, it’d be news to me, because the net effect is not to create more Free software, it’s to create more non-Free software and to use copyright for the exact purpose – enforcing a de jure commercial monopoly – that Stallman famously rebelled against!

                          1. 4

                            If you can find Stallman saying that is a great and good thing that he’s OK with

                            I emailed him and he responded. This is what he said:

                            It is my understanding that as the copyright holders they have the right to do it without any problems. They leverage the AGPLv3 to make it harder for their competitors to use the code to compete against them.

                            I see what you mean. The original developer can engage in a practice that blocks cooperation.

                            By contrast, using some other license, such as the ordinary GPL, would permit ANY user of the program to engage in that practice. In a perverse sense that could seem more fair, but I think it is also more harmful.

                            On balance, using the AGPL is better.

                            1. 3

                              Thank you for reaching out to him and sharing!

                          2. 2

                            I see what you mean. I don’t think he’s written anything on that particular manifestation of the “leverage the AGPLv3 into a business model” idea. One would have to directly ask the man to be sure but I assume he’s OK with it.

                            The fact is using the AGPLv3 in any way whatsoever adds free software to the world. That is inherently a better outcome than it remaining proprietary or open source. It maximizes our freedom as users and as free software developers. At the same time, it maximizes our leverage as the copyright owners. The corporations cannot tolerate the AGPLv3, so the solution is provided in the form of business. They must pay for it. Whether the payment is for permission to link the software or for a software as a service platform seems like a small detail to me. Perhaps it is not, I haven’t formed an opinion yet.

                            I wonder if it would be okay to email him and ask.

                      2. 1

                        Where does it say that a Free Software software vendor can’t earn a fair revenue for their software?

                        I understand your point about the copyright holder being able to dual license with AGPL and a proprietary license giving them an advantage, but if I care about the software commons, then I will actually refuse to use the proprietary licensed SaaS version and use the AGPL version instead. I can even pay another vendor to support this AGPL version and add functionality that I need. That’s the benefit that I get from the copyleft license.

                        Of course for those who don’t really care about user freedoms, they can just use the proprietary licensed version. But they won’t be able to use the new functionalities developed by other entities who hold those respective copyrights and refuse to assign them over. So yes, the original copyright holder does have some advantages, but they definitely don’t hold all the cards. The playing field is more level than you are making it sound here.

                        1. 2

                          Where does it say that a Free Software software vendor can’t earn a fair revenue for their software?

                          Well, the original post here is a huge screed against “corporations” and in favor of “commons”, and while Stallman himself isn’t, a lot of the folks who promote Stallmanism and copyleft are very strongly anti-capitalist. Also, the whole usual criticism of “Open Source” versus “Free Software” is that “Open Source” is some sort of sanitized business-friendly money-making initiative, as compared to the pure and lofty idealism of Free Software.

                          Plus, once again, nothing ever promised you that Free Software would get you a profitable business model. Free Software promises exactly and only the Four Freedoms.

                          1. 2

                            Anyone who insists that Free Software is anti-capitalist and anti-commercial, is deeply misunderstanding it. Free Software creates a level playing field in the market for software vendors. There is nearly perfect freedom of movement from one vendor to another. It creates nearly the economist’s ideal perfectly competitive market. This is capitalism in its purest form.

                            1. 1

                              Go make that argument to the author of the OP article.

                              1. 1

                                The OP makes this exact argument:

                                If there was no support contract prior hand, let them burn.

                                When they say don’t legitimize the problem by paying OSS developers, they are talking about paying them with donations like Patreon. They are not talking about carrying on commerce by selling OSS services.

          2. 2

            One thing to keep in mind—the MIT/BSD license is developer friendly (hey! Free code! Use with your proprietary software!) and potentially user hostile (user might not be able to see the source code, nor fix bugs themselves [1]), while the GPL is user friendly (they can see the source code [2], fix bugs, release their modifications, etc), and possibly developer hostile (users could release their code to the wild!). Note the language between MIT/BSD (permissive) and GPL (viral).

            [1] Yes, I know, most users won’t care about seeing the source code, much less trying to fix it. But “users” also include “programmers”.

            [2] Only if the user requests to see the code, and only within a three-year window from receiving the program.

            1. 3

              For decades, Stallman and the FSF and those in Free Software circles insisted that preservation of user freedom was the most important thing imaginable, and could never ever be compromised to even the tiniest possible degree, and that there could never ever be any cause moral enough to justify such a compromise anyway.

              Just try suggesting that perhaps even the tiniest exception to the GPL might have large benefits, and you’d get flamed to a crisp with page after page after page of high moralizing lectures explaining how disgustingly evil the mere thought of that is.

              The idea that the GPL does not allow you to pass along less freedom than you yourself received, ever, for any reason, under any circumstance, and that this can never be compromised, by anyone, ever, for any reason, was the adamantine unbreakable eternal guarantee.

              And then they went and broke it with the AGPL. The AGPL explicitly passes along less freedom than the plain GPL, and thus should have been not only demonized as the work of freedom-hating enemies but also ruled obviously incompatible with the GPL. But the FSF literally inserted, by fiat, a compromising exception to the GPL’s guarantee of freedom to accommodate the AGPL.

              As the saying goes, in that moment they (Stallman, FSF, etc.) revealed exactly who and what they were. The rest is just haggling over the price.

              1. 3

                Your entire understanding and premise are incorrect; see my comment https://lobste.rs/s/jqucbu/on_open_source_sustainability_commons#c_7cdcey

                1. 2

                  You’ve already been told why you’re completely wrong, but to reiterate:

                  The original author holds the copyright to the software and does not need a license in order to modify or distribute it. It’s only other people who need a license to give them permission to do that, and those other people can only do so under the terms of the AGPL. The original author can do whatever they want and is not bound by the AGPL.

                  This is the genuine and increasingly-popular strategy of quite a few “open source” SaaS startups – they use the AGPL to bind potential competitors and try to improve their own market position.

                  1. 2

                    The original author holds the copyright to the software and does not need a license in order to modify or distribute it. It’s only other people who need a license to give them permission to do that, and those other people can only do so under the terms of the AGPL. The original author can do whatever they want and is not bound by the AGPL.

                    This is true for any license. Not sure why you’re singling out the AGPL.

                    1. 2

                      The claim being advanced was that somehow the author of a piece of AGPL’d software would be on equal terms with everyone else due to also being bound by the AGPL. I was pointing out that is not the case.

                      And the broader point I’m making is that people have used the AGPL and “source available” type licenses as anti-competitive/pro-monopoly tactics – they are not using these licenses to advance the cause of software freedom, they’re using them to try to protect their VC-backed SaaS startups.

        2. 4

          I suggest that you spend a little time thinking about “why the industry would not contribute to GPL licensed software?”. I mean really thinking about it.

          I have, and talked to various lawyers about it. It boils down to two things: flexibility and risk.

          Risk is the big one. The terms of the MIT license are easy to comply with. I’ve worked with some companies that are maintaining internal forks of GPL’d software and not contributing anything upstream because they’re worried that they might not be in compliance with the license (they think they are) and contributing would highlight them as downstream consumers.

          Flexibility is the other big one. Today, their use of an open-source component may be far from any competitive advantage, so they’re happy to upstream everything (very happy, since it reduces their maintenance costs). Tomorrow, that may not be the case and they may want to keep some part private.

          I’ve talked to quite a few companies that would rather do a proprietary in-house clean-room reimplementation of something than take a GPL’d dependency for these reasons. The best outcome here is to persuade them to do a permissively licensed reimplementation.

          The only time that a company will take a GPL’d / AGPL’d component and not just reimplement it is if the component is sufficiently large that it would be infeasible for them to do so. This is never the case with young projects.

    13. 12

      I think “TL;DR: use the AGPL” doesn’t help. The AGPL doesn’t make sense for a lot of tools that operate over the network but don’t use HTTP, and the license text is massively confusing. I like the EUPL. It’s a shame there are no examples of it being enforced outside of the EU (yet), but the license text is clear. The FSF has a bogus opinion that the EUPL is GPL-incompatible because of its view on combined works, but the primary author of the EUPL disputes their assessment here.

      TL;DR use the EUPL v1.2.

      1. 11

        Author here: thanks for the pointer, I didn’t know that FSF was disputing compatibility of EUPL with GPL. I like EUPL too.

        But people like you were not the real target of this post. It was mostly aimed at people using MIT, and it worked: several readers told me they reconsidered MIT in favor of copyleft licenses.

        1. 2

          I think the general message of “prefer copyleft licenses” can be something that definitely resonates.

      2. 6

        Can you elaborate? There’s nothing in the AGPL about HTTP or any intentional preference towards HTTP there.

        1. 3

          I was remembering this thread, notably @kyrias’ comment - how can you prominently offer the user access to the source code over a non-HTTP protocol like SIP? Is putting it in a header that might be stripped prominent? I wouldn’t want to fight with lawyers over it for sure.

          1. 1

            Wherever the original author put it is prominent as far as the author is concerned, so follow what they did.

            1. 2

              As far as the author is concerned may not be sufficient, so I wouldn’t feel comfortable trying to comply with that just yet.

              1. 1

                I think this discussion and the linked one are mixing two completely separate points:

                First is whether third parties who aren’t direct recipients of a work have the right to source code access. I think it’s clear that this was the intent of the license, and the Vizio case is testing the legal merits.

                The second point raised seems to be whether it’s legitimate under the AGPL to make the offer of source “outside” the native protocol of the work when it doesn’t have an obvious facility to do so “internally.” It seems obvious to me personally that shoehorning the offer in via such a protocol extension would violate the “through some standard or customary means of facilitating copying of software” clause, and that it clearly wasn’t an intended requirement… but others seem to interpret it otherwise, and I don’t know of any cases that have tested that.

    14. 40

      Great news except the new logo.

      1. 27

        I’m also not super keen on the crappy looking AI generated “MacBook” branding on the site. Branding isn’t everything, but this feels like a firm departure from the “indie” feel of their original site.

        edit: not sure crappy is the best word - I don’t want to come across as too harsh

        1. 13

          Exactly my feeling. I really liked the old logo, which conveyed a sense of “light and not-too-serious Firefox”.

          I also like the “we spend the minimal possible time on our website” vibe (kudos to OpenBSD for that).

          1. 14

            Honestly I’m not even convinced that it’s about not spending time on the website as much as just keeping the website from looking too corporate (be that startup or big firm). That said, I wish them luck with this nonetheless.

            A project can have maybe a little strange branding, but what ultimately matters is the end product and the values.

          2. 12

            the “we spend the minimal possible time on our website” vibe

            They explicitly called attention to that on their old website, even! “This page is not fancy because we are focusing on building the browser. :^)”

            1. 8

              This is exactly my thinking! It kind of feels like they “high techified” what was a pretty sweet theme from before. The bird especially is nice, because it’s something that other “brands” don’t really have.

            2. 4

              The cynic might say it isn’t fancy because this is all our web browser can render.

          3. 3

            I also like the “we spend the minimal possible time on our website” vibe (kudos to OpenBSD for that).

            You can do that w/o looking like it was last updated in 1997.

            1. 5

              Can you? Updating it after 1997 takes more time than not updating it in that period.

              1. 1

                The current release is OpenBSD 7.5, released April 5, 2024. This is the 56th release.

                Somehow they do update it

                1. 1

                  oh yeah

      2. 20

        I wasn’t familiar with the old logo, but then someone linked it on Hacker News.

        https://web.archive.org/web/20240630172605/https://ladybird.dev/

        https://news.ycombinator.com/item?id=40845951

        And several people noted that the new one looks like the Meta logo or the Apple AI logo.

        Designing our new company brand: Meta - https://design.facebook.com/stories/designing-our-new-company-brand-meta/

        Apple AI logo is intended to look unthreatening, and non-anthropomorphic - https://9to5mac.com/2024/06/17/apple-ai-logo/

        Normally I’m puzzled when people say “this word reminds me of this other bad word” or “this looks like that” (my brain doesn’t really work like that), but in this case, I have to say that it does have the same feeling.

        I hate to be a peanut gallery person, but I really hope they reconsider the logo!

        The new logo doesn’t connote “ladybird” or “ladybug” at all … it seems like there’s obvious room to make it more distinct with that kind of association

        1. 15

          Personally, I think the one at https://ladybird.dev/ladybirb.png is fantastic.

          The new one has absolutely no personality and is boring as can be.

          1. 4

            This one is AI generated and it has that distinct AI shading/texture to it, but it’s a great design and I’d love a human redraw of it

        2. 4

          The new logo doesn’t connote “ladybird” or “ladybug” at all … it seems like there’s obvious room to make it more distinct with that kind of association

          I also prefer the old logo, but I disagree that it “doesn’t connote ‘ladybird’ or ‘ladybug’ at all” – it does look to me like a highly abstract rendering of a ladybug in flight.

        3. 1

          I kinda like the old logo!

      3. 12

        Ladybird really seems like a project I’d like to bet on. It revives a feeling that we really could have nice things. That there could be an oasis of respect in a desert of abusive dark patterns.

        So why should a tiny logo distract me from that? It irks me enough to write this stupid comment. Why? I think it creates a bit of dissonance in my mind. It looks a little bit too familiar. It looks like these sleek, polished things. It looks like a lot of modern tech, pseudo-professional, just waiting to stab you in the back as soon as you let your guard down.

        It’s not Ladybird’s fault my mind has been tainted this way, but please reconsider your logo. Have less branding and more personality.

      4. 9

        It’s like a meta Meta logo

      5. 8

        Yeah I usually like being supportive of new projects and their branding, but the old logo was way better.

      6. 3

        Thankfully the old logo is still used on the icon, at least for Mac OS builds as of about half an hour ago:

        https://pasteboard.co/MajbI8TkRkrA.png

      7. 2

        and the bigotry.

        1. 3

          Yeah. :|

      8. 1

        These guys are working on a new browser engine - I don’t even give a heck what the logo looks like.

    15. 1

      Are we just re-inventing distributed databases here?

      1. 9

        No. CRDTs are a family of generic data structures that can withstand network partitions without conflicts (hence the C for Conflict-free). Distributed databases overlap with this space but they’re not the same thing, and I don’t know of many dbs that use CRDTs in practice.
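
        For a concrete feel, here is a minimal sketch of one of the simplest CRDTs, a grow-only counter (purely illustrative Python using only the standard library, not code from any particular CRDT library). Each replica increments only its own slot, and merging takes per-replica maximums, so replicas that diverged during a partition always reconcile without conflict:

            # Illustrative sketch of a G-Counter CRDT (not from a real library).
            class GCounter:
                def __init__(self, replica_id):
                    self.replica_id = replica_id
                    self.counts = {}  # replica_id -> local increment count

                def increment(self, n=1):
                    self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

                def value(self):
                    return sum(self.counts.values())

                def merge(self, other):
                    # Commutative, associative, idempotent: sync order never matters.
                    for rid, c in other.counts.items():
                        self.counts[rid] = max(self.counts.get(rid, 0), c)

            # Two replicas diverge during a partition, then converge on merge.
            a, b = GCounter("a"), GCounter("b")
            a.increment(3); b.increment(2)
            a.merge(b); b.merge(a)
            assert a.value() == b.value() == 5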

      2. 4

        What happens with your distributed database of choice when there’s a network partition between two of its nodes, same key is modified on both in a conflicting way, and then the network connectivity gets restored?

        1. 3

          CRDTs assume that you can model and automate conflict resolution for your data model. Depending on your needs this may be easy or difficult. If you can’t do that modeling for some reason then you can’t really use CRDTs.
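
          To make “model and automate conflict resolution” concrete, a hypothetical Python sketch: a grow-only set, where “this item was added somewhere” merges as plain set union, is trivially a CRDT, while a global invariant has no such merge function.

              # Easy to model: a grow-only set merges by union, which is
              # commutative, associative, and idempotent, so replicas converge.
              def merge(a: set, b: set) -> set:
                  return a | b

              phone, laptop = {"milk", "eggs"}, {"milk", "bread"}
              assert merge(phone, laptop) == merge(laptop, phone) == {"milk", "eggs", "bread"}

              # Hard to model: "stock must never go below zero" is a global
              # invariant with no conflict-free merge; two partitioned replicas
              # can each legally sell the last item, and no merge can undo that.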

      3. 3

        If you haven’t already, I recommend taking a look at the local-first paper and the conf talk linked at the beginning. There’s a rationale behind opting for local (offline-first) data storage and then a sort of call to experimentation on server-side components that can sync state across devices. I see this blog post as a little thought experiment in that space.

      4. 2

        I would think of any database as a specific data structure with zero or more wire protocols and specific semantics around partition tolerance, availability, and consistency. This feels like a database with no particular line protocol defined, one that has chosen AP over C and defined a specific conflict resolution.

        Do you have an example of an AP over C distributed database without a specific line protocol defined?

        Git or other DVCSs seem similar, but I think they only have manual conflict resolution.

      5. 1

        Exactly my take. I feel I’m missing something here, as I don’t see the problem, nor how an awful Dropbox hack could be easier than git or even simple scp.

        1. 6

          I have a couple of devices that can never reach each other over a network, but files happily get shuttled between them by Syncthing as an intermediate laptop moves hither and yon. Sometimes sneakernet is forced upon you by the rules of organisations; sometimes it’s easier not to have to think about another network security boundary.

        2. 4

          Git requires the user to manually fix merge conflicts.

          Your scp suggestion boils down to this: the user always has to copy the data from the most recently edited node right before editing, or you degenerate into document.jakesVersion.final.FINAL.docx-style version management.

          In CRDT-based data, there’s no intervention needed from the user. Everything just works. If I’m editing notes on my phone and my laptop, I do not want to muck around with manual copy/push/pull/diff operations. I just want to type my notes, like I do today with my BigCo managed notes app, but ideally without the BigCo.
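
          As a rough sketch of what “no intervention” can look like, here is a hypothetical last-writer-wins register in Python (deliberately simplistic; real note apps use richer sequence CRDTs, but the merge is just as automatic):

              import time

              # Illustrative LWW register: the write with the newest timestamp
              # wins everywhere; replica_id deterministically breaks exact ties.
              class LWWRegister:
                  def __init__(self, replica_id):
                      self.replica_id = replica_id
                      self.value = None
                      self.stamp = (0.0, replica_id)

                  def set(self, value):
                      self.value = value
                      self.stamp = (time.time(), self.replica_id)

                  def merge(self, other):
                      if other.stamp > self.stamp:
                          self.value, self.stamp = other.value, other.stamp

              phone, laptop = LWWRegister("phone"), LWWRegister("laptop")
              phone.set("buy milk")
              laptop.set("buy milk and eggs")  # the later edit
              phone.merge(laptop); laptop.merge(phone)
              assert phone.value == laptop.value  # both converge, no prompts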

    16. 25

      I don’t want to imagine the amount of red tape and office politics behind someone using his @apple.com email to send a 5-line patch to a free software project.

      1. 14

        Since Apple has been using FreeBSD sources in their OSes since Mac OS X, you would think Apple would have a fairly stable policy about this by now.

        I would guess, given their track record, it’s something like: if you want to upstream it, that’s fine, we don’t really care. Don’t expect any support from management, and you’d better focus on the stuff we care about, which isn’t upstreaming into FreeBSD.

        1. 8

          It’s also worth not underestimating how much these things can vary by department within a single company. These sprawling companies have so many different rules and procedures they might as well be working off the Magna Carta.

          I’ve worked in places where people working in some offices (including the one I was in) required a written approval from an engineering manager for any contribution, rubber-stamped by legal if it was with the company email. Then there were the “better offices”, where you pretty much needed just a one-time, per-project approval from a team lead. The point was largely to avoid things like accidentally contributing to a competitor’s project, or to potentially controversial projects and so on. And then there were departments that employed people who were FOSS maintainers or contributors, who obviously required no written anything, since writing FOSS code was literally in their job description.

          The exact amount of red tape depended on the amount of common sense and, unfortunately, yep, on office politics in your branch of the corporate tree.

          I made some trivial fixes to WindowMaker when I was working in one of these places, years ago, through my personal email, both to avoid legal having to weigh in and taking forever, and because obviously nobody there cared about WindowMaker. The red tape was basically my manager (super smart fellow) going oh wow cool, I used that years ago too, I had no idea people are still maintaining it, have fun. Most folks in the other departments in that same office were, to the best of my knowledge, never able to upstream anything: FOSS contributions were a yearly review item, so only some, erm, particular people ever got the approval for it.

          1. 2

            I agree with you, but I imagine the core OSX devs that hack on the FreeBSD portions of the code base are probably mostly one department. I’m sure there is some overlap here and there though.

            1. 1

                I don’t know how Apple is organised internally but there’s surprising potential for organisational schizophrenia if (what the company views as) the core of the OS is wide enough. I was, at one point, part of a “core” team (not at Apple), and core responsibilities were spread among 6 departments, in 4 geographical regions; reporting lines for 2 of them met, IIRC, at the VP level. It worked exactly as well as you’d imagine :-D.

              I’m with you in that you’d think this should be easy, but cover-my-ass precautions and office politics introduce uncanny levels of variation.

              1. 1

                OMG that sounds horrible. LOL

        2. 6

          Since Apple has been using FreeBSD sources in their OS’s since MacOS X

          NeXT has been using BSD code in NeXTSTEP since the mid-80s.

          1. 8

            OPENSTEP used code from 4BSD. With OS X, the source shifted to FreeBSD. A lot of journalists at the time were confused by the fact that FreeBSD 5 and 4BSD were not just one version apart.

            I’m not sure if NeXT ever upstreamed changes to BSD, doing so was a lot more complex than it is for the derivatives with public revision control systems.

    17. 10

      The other problem mentioned is Snap and Flatpak. The author says “Community packages like Snaps, Flatpak or the one proposed by your GNU/Linux distributions can be unstable and introduce bugs (if not sometimes remove features altogether).” So Krita (the painting app used by the author) is broken in Snap and Flatpak. My own personal experience is that every Snap or Flatpak I’ve tried has been broken. I also had an early experience of Snap where somebody built my curv program as a snap, and it was broken, and it was impossible for us to work together and fix it, due to the sandboxing.

      1. 3

        every Snap or Flatpak I’ve tried has been broken

        I am intrigued and would like to know more.

        Do you mean of your software, or in general?

        I don’t use either of them if I can help it, but I have not seen much actual breakage. I’ve seen minor issues like browsers being unable to open local files – which is intentional – or packages being unable to talk to local programs (e.g. the GNOME extensions website), which was a side-effect of security sandboxing and is now fixed AIUI.

        I write about these packaging formats quite a lot and I’d really like to know more about failure modes.

        1. 4

          The Firefox (and now Chromium) snaps distributed with Ubuntu 22.04 have been broken for months, when using Cinnamon or some other non-Unity desktop. Keyboard input simply doesn’t work. Mouse works fine, iirc. Bug has been reported. It’s obviously a misconfiguration in dbus or some other kind of event system; when you run it from a terminal and mash keys enough, it starts printing out errors about an event queue overflowing because it’s never getting read. But it’s still not fixed, and every single time I run apt upgrade it removes the higher-priority Mozilla-provided .deb I installed and switches back to the snap. Finally had to hold a specific version to get it to stop doing that.

          This is the main current reason why Ubuntu Is Not My Favorite Distro.

          I’ve had more luck with community-provided Flatpaks of various games: FF14 and Space Station 14 (no relation) have been treating me well. I think for FF14 the flatpak is technically just a community-made launcher that pretends to be the official one and sets everything up to run the actual game in Wine, but it does the job well.

          1. 12

            There’s a reason why every single newbie manual about Ubuntu now starts with “First step: how to remove snap”.

            1. Software doesn’t work, or breaks in unexpected ways (which makes debugging a lot harder)
            2. Software is a lot slower
            3. Snap makes a mess of your system (mounting a virtual partition for every app, making “df” useless)
            4. Snap breaks deb packages
            5. Snap is a daemon consuming a lot of battery and making everything slower (this is so significant that you can feel it as soon as you remove snap)
            6. Snap is proprietary on the server side.

            The only reason for snap is for Canonical to try to transform Ubuntu into a vendor lock-in platform.

            I don’t really like flatpak but, at least, it is sanely built (it solves problems 3 to 6). Flatpak also makes a lot of sense when you need to install a proprietary application. Need Chrome for a specific task? flatpak install it, do your thing, flatpak remove it and, voilà, you know that your system has not been touched (well, you need to clean up some hidden folders such as .cache/flatpak, but this is not as bad as snap).

            Also see:

            https://ploum.net/2022-04-05-firefox-ubuntu.html

            1. 3

              mounting a virtual partition for every app, making “df” useless

              Try narrowing down the output to specific filesystems, e.g. df -t ext4,zfs

              1. 2

                Cool, but ‘df -h works everywhere, except when snaps are involved, in which case I have to remember to filter for only the filesystems in use on this machine. What are those again?’ is definitely worse.

                1. 1

                  I just hinted at how to make df not “useless”, but you seem to want to have a problem.

                  1. 3

                    I do not have a problem, I don’t use snap. Snap creates a problem that has a workaround, but a problem with a workaround is worse than no problem.

            2. 0

              This is dramatically overstated.

              I’ve been actively involved in the Ubuntu community since 4.10 came out twenty years ago, and while there are always loud angry people on the internet, in real life, Ubuntu users seem to actually like it. I was at the last two Ubuntu Summit events, and there were lots of happy users, people discussing running their own snap stores, running their businesses on snap packages, panels on how to snap package your own apps and so on.

              Whereas I got into one discussion of Wayland and it was full of angry people describing how unusable it was – as a direct reference to our other recent Lobsters discussion.

              Snap is much more open than the haters think, as I’ve written about. It is not proprietary, it is not locked in, and you weaken your own argument to the point that it no longer is believable by claiming this.

              I also totally disagree with your argument about Flatpak.

              This stuff is more driven by preference and habit than you think. What you claim as facts are not. They are your opinions and that’s fine, and you are 100% entitled to them, but it’s very important to know the difference.

              Personally, I dislike both, but of the two, I dislike snap less. It’s easier and cleaner.

              1. 6

                Finding happy Ubuntu users at an Ubuntu summit seems expected and not representative of the general population. Also, this comment addresses none of the criticisms of snap in the post above.

                I too have had zero good experiences with snap and several bad, and it’s currently one of the main reasons I don’t use ubuntu.

                1. 1

                  Firstly, no, I don’t think so, not really. You seem to assume a lot more uniformity than pretty much ever happens in FOSS communities. :-)

                  I encountered lots of people who really dislike both GNOME and Wayland for instance. I also was much amused to meet an OpenBSD fan who only works at Ubuntu because that’s where the money is, which I can sympathise with.

                  I am not judging solely by that event, either. Some of the many Linux communities on Reddit are another source for me. There are lots of distro-hoppers who are never happy, but there are also plenty of people reporting happiness and success with Ubuntu, yes, including snaps.

                  Personally I am not a big fan of immutable distros, but they seem to be the direction the industry is inevitably moving in. As such, if I had to use an immutable distro, I’d rather use one where I understand how the pieces fit together.

          2. 1

            I will test this.

            However, I dispute your claim. I have reviewed the following remixes recently:

            Lubuntu and Kubuntu: https://www.theregister.com/2024/04/19/qt_ubuntu_2404_betas/

            Xubuntu: https://www.theregister.com/2024/04/30/xubuntu_2404_snapless_ubuntu/

            And I am running Ubuntu Unity 24.04.

            Firefox works perfectly in all of them.

            1. 2

              The Firefox (and now Chromium) snaps distributed with Ubuntu 22.04 have been broken for months, when using Cinnamon or some other non-Unity desktop.

              You appear to have reviewed several unrelated remixes, including one marketed in the URL as “snapless” and which says:

              You can install Mozilla’s native Debian-packaged Firefox directly – the project has full instructions.

              And you speak of running Unity on 24.04, rather than not-Unity on 22.04.

              1. 4

                But that said: I just tried Firefox on Cinnamon on 22.04 right now, and it did receive keyboard input! (Checked it was in fact a snap running, too; certainly slowly …) I guess they fixed that.

                edit: closing Firefox did cause it to crash and pop up the crash reporter dialog though, lol.

              2. 1

                You appear to have reviewed several unrelated remixes,

                Er, no. I linked to writeups covering three official logo-bearing Ubuntu remixes: Lubuntu, Kubuntu, and Xubuntu.

                Be careful not to mistake the Xubuntu minimal installation for a separate remix. It isn’t. I think all the official non-function-specific remixes now include a minimal installation option, but I have not verified this.

                Xubuntu has long had a “Core” installation which gives you a desktop and almost nothing else, not even a GUI text editor.

                https://xubuntu.org/news/introducing-xubuntu-core/

                <- note the date.

                It seems to me that as of 24.04, Xubuntu has adopted a “minimal” installation mode – now the default in mainstream Ubuntu – but unlike the other remixes, the Xubuntu devs have done this by adapting its existing “Core” installation to the “Minimal” option in the installer.

                Stock full Xubuntu contains the Firefox snap. All remixes contain Firefox as a snap: it’s part of the rules.

                Xubuntu Minimal sidesteps this by not including any browser at all.

                And when I wrote that “the project has full instructions” that means the Mozilla Firefox .deb package project, not Xubuntu.

                1. 1

                  All I meant was, icefox was talking just about Ubuntu 22.04. You reviewed not-Ubuntu not-22.04, which is what I meant by ‘unrelated’ — not unrelated to each other, unrelated to what was being discussed.

                  Anyway, glad they all work.

                  1. 2

                    Oh, I am sorry – my bad. I did indeed misread the version number. I misunderstood and assumed you had to be talking about the latest release.

                    However, the reason I said that is that I have tested and run every single release of Ubuntu there has ever been, and I have never seen a problem that severe make it to release. The worst I saw was when I found that keyboard menu controls in LibreOffice had disappeared under Unity. I reported it and they nearly held the release but decided it wasn’t that important after all.

                    I think, IIRC, LibreOffice’s menu bar was not controllable with the keyboard for years thereafter. As someone who mainly drives Windows and Linux by the keyboard and not by a pointing device, keyboard UI has felt like a neglected second-class citizen for years now. Decades, even.

                    But the Firefox Snap worked on every remix on 22.04. I tested them all – in so doing repeating a group test I did nearly a decade before in 2013.

                    I am meticulous and thorough about this stuff to the best extent I can be.

                    I am not saying you did not have problems. What I am saying is that whatever issues you may have had, they were not universal.

                    1. 2

                      I didn’t have any problems, we’re confusing me and icefox here. I’m pleased to see you’ve tested so thoroughly! I don’t mean to imply your work was bad or not thorough in any way. But while they may not have been universal issues, they certainly weren’t isolated.

        2. 3

          I mean, the Steam snap on the Ubuntu app list just straight up doesn’t work with a lot of Proton-supported games. Brotato, for example, doesn’t work in the Snap, but does work when you run it via a .deb installed Steam.

          1. 1

            I do have a Steam account and a few games, but I haven’t used it in 10 years or something. I am not a gamer. So I probably can’t try that for myself.

            1. 2

              I mean, it’s not “I get 3 hours into the game and then it soft-locks”, it’s “the game won’t launch, and the actual fix is not obvious”. I only tried installing steam from a .deb instead of the snap because I saw an article (linked on Cohost, I -think-?) that the Steam Snap was especially bad.

        3. 2

          I mean in general, but I haven’t tried a large number of snaps and flatpaks, because the experience has been discouraging. That means I’m not an expert. Since I only try a flatpak as a last resort, if there is no Debian package or appimage from the developer, that means I’m running flatpaks for niche applications, which may (I’m not sure) have a higher probability of being broken?

          By “broken” I mean any kind of paper cut that degrades the experience or makes it worse than running a native binary, so being unable to open files or run extensions would count as “broken” for me. I don’t care that sandboxing is supposed to intentionally “break” the application, that’s not a feature for me. I just want to use my computer and I want apps to behave as intended by their developers and work as documented. TBC I would describe “minor issues” as “broken” in a minor way.

      2. 2

        To be fair, they also claim the distro package is broken. I think they probably just like to run a very fresh, unmodified upstream, because they work so closely with that project in particular.