Threads for Brekkjern

    1. 9

      My issue with Windows is the lack of polish on obvious stuff and the annoying advertising I get on my Pro SKU (looking at you, latest Game Pass notification).

      I have four computers: a Windows workstation that I use for development against a Linux workstation that hosts my application, and a MacBook Air that I use for travel.

      macOS on a base trim (8GB) M2 Air: hit Command-Space, start typing. Results take a while to show up, but it gets there. Not ideal.

      Windows 11 on a 14700 (high-end) CPU with 32GB RAM: hit the Windows key, start typing. It chugs and drops the first 5 keystrokes. Absolutely infuriating.

      I’m so fucking over it. It’s a million things like that. I’ve got my dev env working stably on a $900 laptop that feels better than my $2000 PC.

      I said I had four computers: the fourth? M4 MacBook Pro.

      1. 5

        On the other hand, I’d probably pay solid money for a proper Mac equivalent to Everything. Its speed is incredible compared to Spotlight.

        1. 4

          I miss something like this while on Linux too.

          1. 3

            I think at least part of that comes from special features of Windows’ NTFS (the USN journal change log). I’m aware that MacOS file systems are a bit stale, to be polite, but surprised that the plethora of Linux fs development doesn’t include something suitable.

            1. 3

              I’ve never heard APFS called stale! Maybe you’re still thinking of HFS+? (Which, yes, was quite stale.)

            2. 1

              Prompted by this exchange, I found https://github.com/linuxdeepin/deepin-anything as a dependency of https://github.com/linuxdeepin/dde-file-manager and https://github.com/linuxdeepin/dde-grand-search,

              but frankly I couldn’t understand how it all works together.

              1. 1

                Ah, interesting. It seems to provide a kernel module to get immediate access to file system updates. That makes it independent from file systems, but of course is quite a bit more work.

                I need to test whether it works closer to Everything than e.g. FSearch does (which is good, don’t get me wrong, but not as immediate).

      2. 3

        Windows 11, with all of its ads, suggestions, popups, etc., has become so embarrassingly unprofessional that it’s hard for me to understand how Microsoft leadership seems to be okay with it.

      3. 1

        I’m not here to defend Windows in any way, but win key -> type 3 letters -> hit enter -> app starts is pretty much instant for me on my 2019 Ryzen box. There’s something wrong with your installation. That Windows used to end up in this place (and apparently sometimes still does) is the problem.

        I don’t think I ever reinstalled Win10 on that 5yr, 3 mo old machine that I use for games and daily browsing. I don’t love it and plan to try to switch to Linux with the next hardware buy, but it’s not terrible in general.

        1. 1

          I have had the first few keystrokes dropped, but usually only for the first Start menu invocation after a login. And I think it’s better since a patch or two, because I don’t consciously remember it happening recently. Further uses seem fine. Also, disabling in-Start Bing search is required for that to feel OK. If you don’t disable Bing, it’ll get higher match priority than local results and upset people like me.

      4. 1

        I’m fairly happy with the overall experience of using Windows. I cannot remember how I configured it, but I don’t get the ads people talk about, or have random recommendations pop up. Overall I think the newer versions have a better-designed user experience.

        That being said, you highlight a problem. That user experience is hampered by absolutely terrible implementations of its components, and it’s small things like the delay in accepting input and such. Hell, I even have another example of it. When your Win11 computer is locked, waking it by pressing a key will have that lock screen slide up, but while it’s doing that it won’t accept keyboard input for the password field. My install on one of my computers has even taken to resetting the animation so I have to do it twice.

        The reason I say the user experience design is fine is that, well, given these things functioned correctly it would be stellar, but they don’t, so they end up becoming infuriating to work with.

        It’s the delays in time to interaction, the inconsistent GUIs, the random changes to functionality, tone deaf promotion of their services… Stuff like that is what hampers it from being a great OS. It might not be the right one for you, but assuming they would deal with these issues, you could at least argue against it on what you need rather than how poorly made it is…

        1. 1

          I agree, and I’m also confused by this. I have a stock UK install, zero “ads” or anything, and no debloater (they remove things I actually use, so I never bother). I’m not into heavy customising, so I just keep things default aside from using NuShell to match my Mac environment.

          One of the reasons I do prefer Windows for a lot of multi-tasking work is that the window management is so snappy and fast, compared to the silly animations on the Mac, which you can “disable” only to have them replaced with equally slow fade animations. Frustrating when in flow!

    2. 6

      Interesting. Very surprised to see only 2 Ethernet ports though. One for connection to the provider, only one remaining for everything else.

      1. 6

        Turns out the LWN review mentions this! From their FAQ:

        Q: Why are there only two ethernet ports?

        A: We didn’t want to impose additional complexity and costs by including an external managed switch IC. One port is 1GBit/s capable, while the other features a speed up to 2.5GBit/s. This is a limitation of the chosen SoC.

        1. 2

          What is the point in having only one 2.5gbit port?

          1. 4

            Higher speed to the local network if you are routing between VLANs. Requires a 2.5gbit VLAN capable switch though.

            1. 5

              Except that they labelled the 2.5Gbit port WAN and the 1G port LAN… But the device also does Wi-Fi, and that can go above 1G, so maybe it’s for >1G fiber service with some hosts connected over Wi-Fi. (Only I don’t know if the CPU is also connected to the 2.5G interface, or just over a 1Gbit port.) If it had two 2.5G ports (or some USB 3 ports), it would be an instant buy for me.

              1. 2

                That, plus the 2.5GbE port could be used for a router-on-a-stick configuration (where you hook up the WAN to a switch and use VLAN tagging to make traffic go to/from the router through the WAN port).

    3. 4

      Analyzing Data 170,000x Faster by removing the Python

      1. 26

        The end user is still writing Python.

        And honestly, “learn some profiling and optimization tricks” – which is the sort of thing a non-programmer might pick up from going to a data science conference or even by word-of-mouth from a more experienced colleague – is vastly preferable to the original article’s “rewrite in Rust”, given that Rust has a notoriously brutal learning curve even for experienced professional programmers.

        1. 11

          Oh I am absolutely in the wrong in the above comment - I see that I clicked on the article, clicked through to the article it was referencing where they replaced it all with rust, and forgot that that wasn’t this article. This article is indeed v interesting and I enjoyed it.

        2. 9

          Somewhat of a quibble, this isn’t really Python. Numba uses Python syntax, but it’s only a relatively small subset of the language and it’s semantically much more restrictive in many ways (e.g. strongly typed). So you will typically get some annoying errors as you write it when you hit the edges of what it can do. And Rust is a general purpose language while Numba is very domain-specific.

          I do strongly agree with the second paragraph though. I’m writing a book on speeding up Python data science/scientific computing with low-level code, and it’s going to use Numba throughout because it’s so nice from an educational perspective. I previously shared an excerpt based on an early draft of one of the chapters. Rust would involve a whole book just on the Python/scientific computing/integration aspects, in addition to the reader having to read a different book on learning the base language.
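
          A minimal sketch of what that looks like in practice (illustrative, not an excerpt from the book):

          from numba import njit
          import numpy as np

          @njit  # compiles this function to machine code on first call
          def total(xs):
              acc = 0.0
              for x in xs:  # a plain Python loop, but compiled, so it runs at native speed
                  acc += x
              return acc

          print(total(np.arange(1_000_000, dtype=np.float64)))

          # Step outside the supported subset (arbitrary Python objects, most of the
          # stdlib) and you get a compilation error rather than normal Python
          # behaviour; those are the “annoying errors” you hit at the edges.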

          1. 3

            Somewhat of a quibble, this isn’t really Python. Numba uses Python syntax, but it’s only a relatively small subset of the language and it’s semantically much more restrictive in many ways (e.g. strongly typed). So you will typically get some annoying errors as you write it when you hit the edges of what it can do. And Rust is a general purpose language while Numba is very domain-specific.

            A subset of Python is still Python. All valid Python programs are by definition a subset of Python.

            1. 4

              The semantics are also different at the edges, e.g. there’s no bigints which matters if you overflow integer addition.
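
              For example (a small illustrative sketch; exact output can depend on the Numba version, but integer arithmetic follows machine-width semantics rather than Python’s):

              from numba import njit
              import numpy as np

              def py_double(x):
                  return x + x  # plain Python: ints are arbitrary precision, never overflow

              @njit
              def nb_double(x):
                  return x + x  # Numba: fixed-width machine integers, so this can wrap around

              big = 2**62
              print(py_double(big))            # exact result, promoted to a bigint
              print(nb_double(np.int64(big)))  # wraps to a negative int64 instead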

            2. 3

              Numba ain’t Python; extension modules are almost never Python. I suppose that somebody needs to write this article a third time, using PyPy…

            3. 1

              I mostly agree to the extent it’s a proper subset; you could see it as an optimizing compiler that only optimizes a subset of the language. There are a few gotchas where the semantics are actually different from Python semantics though.

    4. 16

      And if you need to use different SSH keys for different user accounts you can add this to the included file:

      [core]
        sshCommand = "ssh -i <keyfile>"
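
      For context, the including side in ~/.gitconfig might look something like this (the path and file names are illustrative); the [core] snippet above would then live in ~/.gitconfig-work:

      [includeIf "gitdir:~/work/"]
        path = ~/.gitconfig-work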
      
      1. 9

        This can also go under .ssh/config.

        Host gitrub.com
          User git
          HostName gitrub.com
          IdentityFile ~/.ssh/gitrub_private_key
          # ControlMaster/ControlPath reuse one SSH connection for repeated git operations.
          ControlMaster auto
          ControlPath ~/.ssh/ssh-%r@%h:%p.sock
        
    5. 13

      It is both scary and funny that the biggest commercial operating system requires scary hacks (such as code injection, or writing temporary JavaScript scripts) just to allow you to delete a file (which happens to be currently executing).

      In Linux and macOS, you are free to delete files. Even if they are open or being executed. It’s as simple as that. No hacks required!

      But an even bigger issue is that something such as an uninstaller even exists. For every piece of software you want to release, you need to write not only the software itself but also a separate piece of software to install and uninstall it. That is crazy! Even though they are not perfect, Linux’s package managers are amazing at solving that problem. MacOS is arguably even easier, you literally just copy a .app file into Applications and it’s there, and you delete it and it’s gone! ✨Magic✨

      </rant>

      1. 8

        MacOS is arguably even easier, you literally just copy a .app file into Applications and it’s there, and you delete it and it’s gone! ✨Magic✨

        I’ve never been convinced this really worked right when the app will still leave things like launchd plists around that it automatically created…

        1. 3

          True, I have experienced that as well. It is not very common thankfully.

          Also, some applications do require installers even on macOS. An example (shame on you!) is Microsoft Office for Mac. At least those are standardized, but it is annoying. I will not install software that requires an installer on any of my systems.

      2. 4

        Windows has the technology. It’s called “Windows Installer” and it’s built into the OS. However, it requires using an MSI file, which people don’t like because of the complex tooling.

        More recently there is MSIX, which simplifies things greatly while having more features, but people don’t like it because it requires signing.

        1. 6

          Kind of. The root problem here is that you cannot, with the Windows filesystem abstractions, remove an open file. With UNIX semantics, a file is deleted on disk after the link count drops to zero and the number of open file descriptors to it drops to zero.

          This is mildly annoying for uninstallation because an uninstaller can’t uninstall itself. The traditional hack for this was to use a script interpreter (cmd.exe was fine) that read the script and then executed it. This sidesteps the problem by running the uninstaller in a process that was not part of the thing being uninstalled. MSIs formalise this hack by providing the uninstall process as a thing that consumes a declarative description.

          It’s far more problematic for updates. On *NIX, if you want to replace a system library (e.g. libc.so), you install the new one alongside the old then rename it over the top. The rename is atomic (if power goes out, either the new version will be on disk or the old one) and any running processes keep executing the old one, new processes will load the new one. You probably want to reboot at this point to ensure that everything (from init on down) is using the new version, but if you don’t then the old file remains on disk until the open count drops to zero. You can update an application while it’s running then restart it and get the new version.
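
          As a concrete sketch of that pattern (in Python, since the rename semantics are what matter; the helper name is made up):

          import os
          import tempfile

          def atomic_replace(path, data):
              # Write the new version alongside the old one, in the same directory,
              # so the final rename stays on a single filesystem.
              fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
              try:
                  with os.fdopen(fd, "wb") as f:
                      f.write(data)
                      f.flush()
                      os.fsync(f.fileno())  # make sure the new bytes are on disk
                  # On POSIX, os.replace() is an atomic rename: after a crash you see
                  # either the complete old file or the complete new one, and processes
                  # that already have the old file open keep reading the old inode.
                  os.replace(tmp, path)
              except BaseException:
                  os.unlink(tmp)
                  raise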

          On Windows, this is not possible. You have to drop to a mode where nothing is using the library, then do the update (ideally with the same kind of atomic rename). This is why most Windows updates require at least one reboot: they drop to something equivalent to single user mode on *NIX, replace the system files, then continue the boot (or reboot). Sometimes the updates require multiple reboots because part of the process depends on being able to run old or new versions. This is a big part of the reason that I wasted hours using Windows over the last few years, arriving at work and discovering that I needed to reboot and wait 20 minutes for updates to install (my work machine was only a 10-core Xeon with an NVMe disk, so underpowered for Windows Update), whereas other systems can do most of the update in the background.

          1. 3

            This is mildly annoying for uninstallation because an uninstaller can’t uninstall itself

            I think this is only half-true, because the WinAPI gives you “delay removal until next reboot” (MOVEFILE_DELAY_UNTIL_REBOOT), so it should be possible for the uninstaller to uninstall the application and then register itself, along with its directory, for removal at the next reboot. Then Windows itself will remove the uninstaller on the next reboot.

            On servers this could mean that it will be removed next month, but this in turn is a virtual problem, not a real one.
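
            Roughly, from Python via ctypes it would look like this (a sketch; the flag value 0x4 is MOVEFILE_DELAY_UNTIL_REBOOT, and the call normally needs administrator rights):

            import ctypes

            MOVEFILE_DELAY_UNTIL_REBOOT = 0x4

            def delete_on_next_reboot(path):
                # Passing None as the destination with this flag tells Windows to delete
                # the file (or empty directory) during the next boot, before it is in use.
                ok = ctypes.windll.kernel32.MoveFileExW(path, None, MOVEFILE_DELAY_UNTIL_REBOOT)
                if not ok:
                    raise ctypes.WinError()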

            1. 1

              Windows servers list “application maintenance” as a reason for a reboot, so it’s not culturally weird to reboot after an application update.

          2. 2

            MSIs formalise this hack by providing the uninstall process as a thing that consumes a declarative description.

            Yep, that was my point. Or to put it another way, Windows can handle the management of a package so you don’t have to. Which was the complaint in the OP.

            But on your point, it is totally possible to do in-place updates to user software. On modern Windows most files can be deleted even without waiting for all handles to close. And any executables you can’t immediately delete (due to being run) can be moved. The problem is software that holds file access locks. Unfortunately standard libraries are especially guilty of doing this by default, even newer ones like Golang do this for some inexplicable reason.

        2. 2

          True, arguably Windows also has an app store nowadays and NuGet and WinGet. I did not know about msix! Maybe a bit of an XKCD 927 situation there.

          1. 4

            Windows also has:

            So the existence of installers/uninstallers is a “cultural” thing, not a technical necessity.

            1. 1

              “If you want to use our product, install [my chosen package manager]” is pretty non-viable. I write the installer for a game; none of that would be an option.

              1. 3

                Sure you do. You just call it “Steam” instead.

          2. 2

            WinGet simply downloads installer programs and runs them. This is visible in its package declarations

            NuGet is a .NET platform development package manager, right? Like Maven for the JVM, it is not intended to distribute finished programs but libraries that can be used to build a program. But perhaps it can be used to distribute full programs, just like pip, npm et al.

            1. 2

              In theory, NuGet is not specific to .NET. You can build NuGet packages from native code. Unfortunately, it doesn’t have good platform or architecture abstractions and so it’s not very useful on non-Windows platforms for anything other than pure .NET code.

    6. 2

      I just truncate the message to 1,024 characters and process it anyway (assuming it wasn’t so large that it hit the web server’s request message size limit). The user still receives something, and if they care that it’s truncated, they can investigate why.

      If this guy thinks it’s anything but completely fucked up to deliberately drop customer data and make them “investigate why” then I won’t ever touch one of his APIs.

      1. 16

        You might want to check the about page to see who created lobster.rs before saying you’ll not use his stuff 🙃

      2. 8

        This entirely depends on the service though, and they are describing a service for push notifications. I would say that delivering a truncated message is better than not delivering it at all in that case. If the data is supposed to be stored long term then it’s another matter.

      3. 2

        He could be hardline and 400 all requests that don’t conform, and I bet that was the case, originally.

        I’m curious what you would do? How would you balance correctness, support concerns / customer happiness, operational concerns to protect the service (eg rate limiting of legitimately bad/abusive requests), and all the other things as a single person providing this service?

        1. 3

          I’m curious what you would do? How would you balance correctness, support concerns / customer happiness, operational concerns to protect the service (eg rate limiting of legitimately bad/abusive requests), and all the other things as a single person providing this service?

          It all depends on what the API is doing.

          It looks like the messages in question are basically status updates delivered to end-user devices. I guess it’s unlikely that the message consumers will be doing anything other than printing them to a screen, and it’s entirely possible that message producers wouldn’t know about the size limit for a single recipient. In this (specific) case, truncating the message is at least arguably beneficial vs. rejecting it outright.

          But this feels to me like an exceptional use case. If these messages were expected to be machine-readable, or any of a hundred other variables were different, then truncating them would make them unreliable, likely useless. As you note, you usually want to reject bad requests (with a 400 or whatever) by default, and carve out exceptions based on use case.

          edit: basically a +1 to Brekkjern’s sibling comment

          1. 1

            I drew the same conclusion as you and the sibling. I want to know what @caboteria thinks since they are hardline against it. I am assuming they didn’t think much about the nuance of the problem and stopped at “truncate” before aggressively stomping their foot and writing a mean spirited comment void of any substance.

            1. 2

              TBH, I wasn’t familiar with the Pushover service so when I read that data gets truncated I recoiled a bit. @caboteria might have been similarly ignorant. That section might be a bit better with some context on the service.

    7. 8

      A frustrating thing about Bard is that, while it’s able to run searches and incorporate results from those searches into its output, it doesn’t tell you when it’s doing that.

      This differs from Bing and ChatGPT Browse, both of which can run searches but will indicate in their output when they have done so.

      So for this particular example it’s entirely opaque as to whether Bard was constructing an answer based on text that had been trained into it, or if it ran a search against the Google index and incorporated the copied text that way.

      My hunch is the latter, but there’s no way of being sure either way.

      1. 4

        Bing Chat tells you what it’s searching for and it shows you links, but it hallucinates like mad in between. I asked it to tell me why I’d use CHERIoT instead of a PMP (meaning the RISC-V physical memory protection unit). The text that it generated was accurate (though largely lifted from something I’d written in a tech report), but it then gave three citations that all talked about Project Management Professionals and had absolutely no relevance to the answer. It confidently added these as citations for each of the points that it made. It’s not processing the search results into some conceptual model of the world and applying that, it’s adding them to a token prediction model and generating something where the structure of the results of the search text affect the probabilities in the output. Whether that generates something meaningful depends as much on whether the search results have similar syntactic structure as it does on their contents.

      2. 2

        I don’t like that. Why have they done that?

        1. 3

          Because it will generally improve the product, and that is all they really care about.

    8. 40

      If all the author needed was a blog, maybe the problem is that his tech stack is way too big for his needs? A bunch of generated HTML files behind an Nginx server would not have required this amount of maintenance work.

      Is the caching of images at the edge really necessary? So what if it takes a little while to load them? Just by not having to load a front-end framework and make 10 API calls before anything is displayed, the site will already load faster than many popular sites.

      If the whole point is to have fun and learn stuff, the busywork is the very point, of course. Yet all this seems to be the very definition of non-value-added work.

      1. 13

        At the end he says

        I know that I could put this burden down. I have a mentor making excellent and sober use of Squarespace for his professional domain - and the results look great. I read his blog posts myself and think that they look good! It doesn’t have to be like this. […]

        And that’s exactly why I do it. It’s one of the best projects I’ve ever created.

        So I think the whole point is to have fun and learn stuff.

        1. 7

          Inventing your own static site generator is also a lot of fun. And because all the hard work is done outside the serving path, there’s much less production maintenance needs.

          1. 14

            Different people find different things fun

          2. 1

            IMO if you do it right, inventing your own static site generator is only fun for about half a day tops. Because it only takes a couple hours. :)
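
            To illustrate the scale I mean, a minimal sketch (assuming the third-party markdown package; names and layout are illustrative):

            #!/usr/bin/env python3
            # Tiny static site generator: converts content/*.md into site/*.html.
            import pathlib
            import markdown  # third-party: pip install markdown

            TEMPLATE = "<!doctype html>\n<title>{title}</title>\n<body>\n{body}\n</body>\n"

            def build(src="content", dst="site"):
                out = pathlib.Path(dst)
                out.mkdir(exist_ok=True)
                for page in sorted(pathlib.Path(src).glob("*.md")):
                    body = markdown.markdown(page.read_text())
                    (out / (page.stem + ".html")).write_text(TEMPLATE.format(title=page.stem, body=body))

            if __name__ == "__main__":
                build()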

            1. 2

              Not if you decide to write your own CommonMark compliant Markdown parser :]

              1. 1

                I’ve been seriously considering dropping Markdown and just transforming HTML into HTML by defining custom tags. Or finally learning XSLT and using that, and exposing stuff like transforming LaTeX math into MathML via custom functions.

      2. 9
        • Node.js or package.json or Vue.js or Nuxt.js issues or Ubuntu C library issues
        • CVEs that force me to bump some obscure dependency past the last version that works in my current setup
        • Debugging and customizing pre-built CSS frameworks

        All of these can be done away with.

        I understand that the point may be to explore new tech with a purposefully over-engineered solution, but if the point is learning, surely the “lesson learned” should be that this kind of tech has real downsides, for the reasons the author points out and more. Dependencies, especially in the web ecosystem, are often expensive, much more so than you would think. Don’t use them unless you have to.

        Static html and simple CSS are not just the preference of grumpy devs set in their ways. They really are easier to maintain.

      3. 5

        There’s several schools of thought with regards to website optimization. One of them is that if images load quickly, you have a much lower bounce-rate (or people that run away screaming), meaning that you get more readers. Based on the stack the article describes, it does seem a little much, but he’s able to justify it. A lot of personal sites are really passion projects that won’t really work when scaled to normal production workloads, but that’s fine.

        I kinda treat my website and its supporting infrastructure the same way, a lot of it is really there to help me explore the problem spaces involved. I chose to use Rust for my website, and that seems to have a lot less ecosystem churn/toil than the frontend ecosystem does. I only really have to fix things when bumping packages about once per quarter, and that’s usually about when I’m going to be improving the site anyways.

        There is a happy medium to be found, but if they wanna do some dumb shit to see how things work in practice, more power to them.

      4. 4

        A bunch of generated HTML file behind a Nginx server would not have required this amount of maintenance work.

        Sometimes we need a tiny bit more flexibility than that. To this day I don’t know how to enable content negotiation with Nginx like I used to do with Apache. Say I have two files, my_article.fr.html, and my_article.en.html. I want to serve them under https://example.com/my_article, English by default, French if the user’s browser prefers it over English. How do I do that? Right now short of falling back to Apache I’m genuinely considering writing my own web server (though I don’t really want to, because of TLS).

        This is the only complication I would like to address; it seems pretty basic (surely there are lots of multilingual web sites out there), and I would have guessed the original dev, not being American, would have thought of linguistic issues. Haven’t they, or did I miss something?

        1. 4

          Automatic content negotiation sucks though? It’s fine as a default first run behavior, but as someone who lived in Japan and often used the school computers, you really, really need there to also be a button on the site to explicitly pick your language instead of just assuming that the browser already knows your preference. At that point, you can probably just put some JS on a static page and have it store the language preference in localStorage or something.

          1. 1

            There’s a way to bypass it: in addition to

            https://example.com/my_article
            

            Also serve

            https://example.com/my_article.en
            https://example.com/my_article.fr
            

            And generate a bit of HTML boilerplate to let the user access the one they want. And perhaps remember their last choice in a cookie. (I would like to avoid JavaScript as much as possible.)

            1. 1

              If JS isn’t a deal breaker, you can make my_article a blank page that JS redirects to a language specific page. You can use <noscript> to have it reveal links to those pages for people with JS turned off.

          2. 1

            Browsers have had multiple user profiles with different settings available, for more than a decade now (in the case of Firefox I distinctly remember there being a profile chooser box on startup in 2001–2).

            1. 2

              Which is fine if you can actually make a profile to suit your needs. If you cannot make a profile, you are stuck with whatever settings the browser has, and you get gibberish in response as you might not understand the local language.

              1. 1

                Look, the browser is a user agent. It’s supposed to work for the user and be adaptable to their needs. If there are that many restrictions on it, then you don’t have a viable user agent in the first place and there’s nothing that web standards can do about that.

            2. 1

              The initial release of Firefox was 2004. Did you typo 2011 or mean one of its predecessor browsers?

              1. 2

                Yeah I’m probably thinking of Phoenix.

        2. 3

          There’s no easy way, AFAIK - you either run a Perl server to get redirects or add an extra module (although if you were doing that, I’d add the Lua module which gives you much more freedom to do these kinds of shenanigans.)

        3. 1

          Caddy allows you to match HTTP headers, and you can probably achieve what you want with a bunch of horrible rewrite rules.

          You can always roll your own HTTP server and put it behind Caddy or whatever TLS-capable HTTP server.

        4. 1

          You could put Apache behind Nginx; I’ve done that before, and I might do it again.

          • I prefer nginx for high load; it’s great with static files.
          • apache config for some things - redirects, htaccess, I think? - feels easier.

          It’s been quite a while since I delved in on these.

    9. 4

      I guess this blog post illustrates how I still can’t get used to PowerShell. The author runs winget upgrade and, judging from the output, expects to be able to pipe this to Select-Object Id, which fails because winget upgrade doesn’t produce PowerShell-compatible output.

      It seems to me that PowerShell users, by seeing a table in the output, automatically assume that there’s some underlying data model, where each row in that table can be converted to (or is?) an object in some way.

      That means that when you run a PowerShell command, it will have invisible output that becomes visible once you pipe it through the right command. Not knowing this magical command, I therefore won’t have access to information that is in an object right in front of me. It constantly makes me wonder if I’m missing something important.

      In contrast to the UNIX shell, where you can see everything that comes out of a command, you can always try to make sense of it yourself, even if you’re not familiar with grep, sed, awk, jq or whatever tool makes this easy for you.

      The UNIX shell also allows me to save intermediate results with the > operator, or just copy/paste out of the terminal, edit the file with $EDITOR, and feed it to the next command with <. This is great UX and I suspect it would either not work, or have massive gotchas, in PowerShell.

      1. 7

        I think you’re used to one environment more than the other. Some counterexamples:

        It constantly makes me wonder if I’m missing something important.

        Use Get-Member on an unknown object to learn more https://learn.microsoft.com/en-us/powershell/scripting/samples/viewing-object-structure--get-member-?view=powershell-7.3

        In contrast to the UNIX shell, where you can see everything that comes out of a command

        That’s not always true. isatty() and detecting terminal size can play tricks on what you get as the output.

        This is great UX and I suspect it would either not work, or have massive gotcha’s in PowerShell.

        It’s just $foo = ... - you can save the whole object for processing later.

        Nothing stops you from processing the text output from the commands either. You just don’t normally have to do it. And it’s a good idea not to, because trying to reparse random strings will end in frustration one day.

      2. 3

        I’m writing a piece defending PowerShell and plan to address this! Text isn’t as composable as objects are. For example, you can see the timestamps of all your files with ls --full-time but can’t extract them to another program, because all the other text is in the way (cut doesn’t work, I tried). You have to instead write stat -c %w.

        In PowerShell, by contrast, you can just write ls | Select LastWriteTime.

        1. 4

          I would note that the two options aren’t mutually exclusive. You can have text AND objects, by using a serialization format like JSON or TSV.

          There are many projects in that vein:

          https://github.com/kellyjonbrazil/jc ( I think it has a wrapper around ls to do what you want ?)

          https://github.com/oilshell/oil/wiki/Structured-Data-Over-Pipes

          I believe PowerShell is ONLY objects, not text. Because the objects are really data structures in a .NET virtual machine that are passed to other “cmdlets” ?

          That sort of works on Windows, if the .NET VM has bindings to the entire operating system. It doesn’t really work on Unix.

          It’s what I call a “two-tier” design of a shell … some programs are “privileged” cmdlets, while others are “external”.

          I didn’t follow all of the winget argument (never used it), but I think the problem is that it’s external and not a tool that only lives in a .NET VM.

          Even within a single Windows machine, you have native processes, .NET apps, and also WSL processes I believe. Maybe even WSL2. So it gets a bit messy.

          Shell is for gluing together things that weren’t meant to be glued together.

          PowerShell is probably a great tool in its domain, but I’d argue that it’s not fully general glue. This WinGet example seems like evidence of that – you have to “opt in” and “boil the ocean” to interoperate.

        2. 3

          I’m looking forward to read it!

          Most PowerShell guides I’ve seen focus on what you should type, not why; I learn better with the latter. For example, you use Select in your example, while another post used Get-Member for something I’d consider a similar operation. I can get a general gist of what the difference is (one operates on a table, one operates on an object), but I find it difficult to train my duck-fu [1] to find the answer.

          find . -maxdepth 1 -printf '%p\t%t\n' should do what you want, but I certainly agree that PowerShell is more readable here.

          [1] Adaptation of google-fu

          1. 1

            Get-Member lists out the properties of whatever type was piped into it, along with any methods or similar.

            Select-Object allows you to select parts of the piped data:

            • -First n
            • -Last n
            • -Skip n
            • -Property prop1, prop2 (this removes all properties except for the ones you list)
            • -ExpandProperty prop (selects a property, and only lets the property through instead of the object itself)

            Where-Object operates on the stream like a WHERE query in SQL.

      3. 2

        This is great UX

        It is not, and has never been, and carries all the well-known problems of intermingling visual representation with data.

        There is no more magic here than there is in the beautiful UX of aptly-named tools such as “awk”, “grep” and “sed”.

    10. 2

      Kind of surprised about a lot of these points, in particular the fact that they have design decisions that seem to fall over given the bimodal nature of discord servers.

      The static buckets for timing and using a database with cheap writes and more expensive reads is like… for most systems you can get away with this (and they are getting away with it for the most part, IMO). But given this is their core system, it feels like adding a different sort of indexing system for scrollback, one that allows certain Discords to have different buckets, would by now be very important.

      EDIT: honestly it looks like the work is almost there. Bucket sizes being channel dependent seems like an easy win, and maybe you have two bucket fields just built-in so you can have auto-migration to different bucket sizes and re-compact data over time, depending on activity.

      I don’t know about Cassandra’s storage mechanisms, but I do know that a lot of people with multitenant systems with Postgres get bit by how the data is stored on disk (namely, you make a query and you have to end up fetching a lot of stuff on disk that is mostly data you don’t need). It feels so essential for Discord for data to properly be close together as much as possible.

      1. 3

        I’m also surprised that they would bias for writes. My intuition is that chat messages are always written exactly once, and are read at least once (by the writer), usually many times (e.g. average channel membership), and with no upper bound. That would seem to be a better match for a read-biased DB. But I’m probably missing something!

        1. 8

          It’s a great question and the answer isn’t straightforward. Theoretically, B-tree-based storage is better than LSM for read-heavy workloads, but almost all distributed DBs–Cassandra, BigTable, Spanner, CockroachDB–use LSM-based storage.

          Some explanations why LSM has been preferred for distributed DBs:

        2. 3

          I’m assuming they are broadcasting the message immediately to everyone online in the channel, and you only read from the database when you either scroll back far enough for the cache to be empty, or when you open a channel you haven’t opened in a while. That would avoid costly reads except for when you need bulk reads from the DB.

          1. 6
          2. 2

            I’d be very surprised if the realtime broadcast of a message represented more than a tiny fraction of its total reads. I’d expect almost all reads to come from a database (or cache) — but who knows!

            1. 2

              It’s a chatroom and people don’t scroll way up super often. They only need to check the last 50 messages in the channel unless the user deliberately wants to see more. There might be a cache to help that? But you can stop caching at a relatively small upper bound. That said I am curious how this interacts with search.

    11. 14

      One good reason for having a low DNS TTL not mentioned in the article are DDNS setups.

      Residential internet connections are flaky and sometimes a router cycles through multiple dynamic IPs in a matter of minutes, without anything the customer can do.

      1. 7

        Yes, that’s true. We really need to get over this shitty dynamic IP for home users. IPv6 to the rescue.

        1. 20

          In practice stability of IPs has nothing to do with IPv4 vs IPv6. Some providers will give you the same IPv4 address for years, others will rotate your IPv6 prefix all the time.

          1. 3

            Yep, anecdote here: AT&T Fiber has given me the same IPv4 address for years, even across multiple of their “gateways” and a plan change.

          2. 2

            Anecdata: this is true in theory but I’m not sure in practice? Specifically, I used to get the same IPv4 address for weeks at a time - basically until my modem was rebooted. Then in 2014 ARIN entered phase 4 of their IPv4 exhaustion plan (triggered by them getting down to their last /8 block) and all of a sudden my modem’s IPv4 address refreshed far, far more often, IIRC every couple days.

            I guess maybe this was not technically required though, and was potentially just my ISP overreacting? 🤷

            CenturyLink in the Seattle area, FWIW.

          3. 1

            At least here in Germany, you will get a different IPv4 address every 24 hours or on reconnect for the vast majority of residential Internet access. The point with IPv6 is that you don’t have only one public address. But again, in Germany it is even hard to find an IPv6 prefix that doesn’t change regularly for residential Internet access. They think it‘s more Staat protection friendly… like cash. Crazy thoughts.

            1. 5

              Ideally, a v6 provider would give you two subnets, one for inbound connections that remained stable, one for outbound connections that was changed frequently. Combined with the privacy extensions randomising the low 64 bits, this should make IP-based tracking difficult.

            2. 3

              They think it‘s more Staat protection friendly…

              The origin story of the 24h disconnect is that it used to be the differentiator between a leased line and a dial-up line, which belonged in different regulatory regimes (the most obvious aspect to customers has been cost but with a few backend differences, too). The approach has stuck since.

              It’s also a (rather barebone) privacy measure against commercial entities, not the government: the latter can relatively easily obtain a mapping from IP to the address by giving a more-or-less refined reason.

              1. 2

                Commercial entities have “solved” the tracking issue by using cookies etc.

            3. 1

              They think it‘s more Staat protection friendly… like cash. Crazy thoughts.

              I.e. customers prefer changing IP addresses for privacy reasons?

            4. 1

              Which is really weird, because implementing an “I’d like my IP to change / not change” checkbox would be trivial. I don’t get why that’s not more common.

              1. 5

                The checkbox isn’t the complicated part here.

        2. 2

          That’s not going to be an easy problem to solve. We’ve embraced the current pseudo standards within home internet connectivity, maybe static, maybe dynamic IP, asymmetric speeds, CGNAT, no IPv6, etc. for so long that many people think that these are real industry standards with cost structures and cost savings behind them and we must suffer with them if we want cost effective internet connectivity at all. A lot of home ISP customers suffer from Stockholm syndrome.

      2. 2

        I think that DynDNS.org uses a 60s TTL on A records, so for a DynDNS/residential setup I think that the article author would approve of something like this:

        foo.mydomain.com 14400 IN CNAME bar.my-dyndns-server.com

        bar.my-dyndns-server.com 60 IN A 192.168.1.100

        Specifically, I don’t think that the original author is complaining about the 60s TTL on records at my-dyndns-server.com since that company has to deal with the lack of caching in their DNS zones. He finds sub 1hr TTLs in the CNAME records to be a problem. And he finds CNAME TTLs shorter than the A record TTLs to be a bigger problem. Honestly even Amazon does this 60s TTL on dynamic resources trick.

    12. 20

      Luckily, Python 3 releases are fairly backwards compatible. …

      A symptom of a bigger problem

      The need to upgrade is not a one-time event, it’s an ongoing requirement: … If you’re still on Python 3.7, that is a symptom you are suffering from an organizational problem

      One could argue that the “bigger problem” here is actually that it’s not possible, in principle, to write software in Python today (or for many popular platforms) that will definitely still work on a system with security patches five years from now.

      i.e. If you want to write software that is “finished”, as I often do as a person who wants to start the occasional new project and doesn’t have quadratic time to spend on maintenance, your options are very limited.

      1. 7

        Python sometimes deprecates and removes things even within a major release series, and of course some people dislike that, though I find it’s pretty minor myself.

        But I’d argue that it’s much more of a problem – speaking as a maintainer of open-source code – that some people think they can just call their software “finished”.

        What you’re actually doing in that case is demanding that I (and every other maintainer of something your “finished” software depends on) promise to provide bug fix and security support to you and your users, from now until the heat death of the universe, and also never do feature releases again because your users might be tempted into upgrading and then your software might not be “finished” anymore.

        And… I won’t even quote you a price on that; it’s a complete non-starter. If you want maintenance of your dependencies past the period the upstream provides (and in the case of Python it’s five years for each minor/feature release), you can do it yourself or find someone willing and pay them whatever they charge for the service.

        1. 14

          Of course I’m not expecting that a giant list of dependencies will be perpetually updated for me. But if I write a simple utility with no dependencies other than the language runtime, then yeah, that should be finished. I shouldn’t need to add it to an ever-growing list of “things I maintain now until I die”, and neither should you.

          You don’t owe me any particular kind of maintenance, and I don’t owe you any particular kind of maintenance. This is all gratis; we’re all doing it for fun or because we care about it or because we think it will get us a job.

          For what it’s worth, Rust has solved this problem, as far as I’m concerned, with Rust Editions. It’s not impossible; it doesn’t require an ongoing sisyphean effort performed for free.

          1. 3

            But if I write a simple utility with no dependencies other than the language runtime, then yeah, that should be finished

            Python has a clear support policy: feature releases get five years. If you want to declare your software “finished”, then in at most five years the platform it runs on will no longer be receiving bugfix and security support.

            Meanwhile I’m not super optimistic about Rust having solved this “problem”. The language is still extremely young and they’re accumulating a lot of “we promise to keep this working for you forever” stuff that I think is not sustainable over the kinds of time horizons the “finished software” people will want.

            (my own stance is that the only way software can be “finished” is when it’s no longer used by anyone, anywhere, and never will be again)

            1. 4

              the only way software can be “finished” is when it’s no longer used by anyone, anywhere, and never will be again

              This is trivially false if your software is running in an environment where it doesn’t have to care about security. I can run just about any unmodified video game ROM from the 80s on my laptop using an emulator. There’s no inherent difference between a video game emulator and a language implementation - the difference is that one of them is “retrocomputing” and the other is “general-purpose programming”. Rust editions are essentially cutting “retrocomputing releases”. Python could do the same, or someone else could do so (and some have built 2.7 forks, to much gnashing of teeth)

              1. 9

                Note that virtually every C compiler can still compile code from the nineties (that’s thirty years ago) using options like --std=c89. The same is true for Common Lisp, although that’s not an evolving standard like C is. Fortran compilers can compile code from the seventies…

                But for modern languages it’s sort of acceptable to not have backwards compatibility. Of course, backwards compatibility also has a flip side in that it means every bad decision must be kept around forever (although it can be deprecated or even dropped in newer “modes” of course).

                1. 3

                  The aggressive breakage is makework. My tolerance for that varies greatly based on the season I’m in. But, if I’ve marked something finished and it must be updated to support Critical Business Functions, that’s where support contracts come in.

                  (I suppose I’m not quite compatible with open source still.)

              2. 4

                You can run unmodified Python code on your laptop too. You just have to run it in a VM that has a compatible Python interpreter for it to run. That is essentially the same thing you are doing when you run your 80’s ROMs on an emulator. You are handling the maintenance of getting that ROM to work by changing the environment around it to suit it, the same thing you’d have to do with the old Python code.

              3. 3

                In the same vein as another reply: you’re relying on the emulator being maintained, in the same way you’re relying on a specific version of the Python interpreter to be maintained. You’re also relying on the games not having any show-stopping bugs in them, because the original vendor is long past the point of fixing them.

                So no matter how you choose to phrase it, yes, you really are relying on others to do long-term maintenance on your behalf as part of your ’finished” software.

      2. 1

        Trying to think here if your problem needs to be solved at the language level, or at the packaging level. Would you be happy if there was a tool that could package your code + dependencies + any version of the python interpreter into a binary? Then, as long as the code, the packaging tool, and the pinned version of your dependencies were available, anyone could package it from source.

        This only moves the problem, of course, because such a tool doesn’t exist. But other languages have similar problems: I doubt you can run ruby or php from 10 years ago in the latest version with zero hiccups.

        1. 1

          This only moves the problem, of course, because such a tool doesn’t exist

          You can do this easily without any involvement of the language or the packaging level with any virtualization/containerization tool.

          There are also quite a few tools that do more like you suggest of producing a “binary installer” type distribution – off the top of my head: PyOxidizer, PyInstaller, and py2exe are all alternatives in that space, but I know there are some others, too.

      3. 1

        This is why, for my apps, I usually target whatever Python interpreter the current Ubuntu LTS ships with, so they will keep working for a long time for a lot of people.

      4. 1

        The solution is to ship your own Python interpreter with your application or, to achieve the same effect, containerize it.

    13. 14

      As little as possible.

    14. 6

      Very handy and to the point! I appreciate you writing it.

      My only small quibble is that HEVC is a proprietary codec. MDN’s video codec guide recommends using a WebM container with VP9 for video and Opus for audio, for everyday videos.

      1. 1

        Thanks for your feedback. I’m very interested in your quibble. Have you already tried to upload a video with a WebM container with VP9 for video and Opus for audio? Do you have any recommended FFmpeg command parameters to produce a file like that, so that I can try locally and then update my article?

        1. 3

          I have made quite a few video clips encoded with VP9, but with AAC as audio (edit: double-checking my files, ffmpeg seems to have transcoded the audio to Opus automatically), and my experience is that a lot of devices have shoddy support for VP9. For the ones that support it, it is great, but for the ones that don’t you will need a fallback to H.264. Depends on your use case here.

          Regarding the conversion parameters I often convert h264 encoded videos to VP9 using these settings: ffmpeg -i <input_file> -c:v libvpx-vp9 -crf 18 -b:v 0 <output_file>

          Works fine for me in my case, but I’ve never tried to upload them to mastodon, so… YMMV I guess?

          1. 2

            Or VP8 which will be a bit bigger but has broader performant decoding support.

            1. 1

              This is the reasonable option…

          2. 1

            Thanks for these details from your experience. Do you think you can try to upload a file generated that way on mastodon and see if it works ?

            1. 1

              I don’t have an account, but I can PM you a link to a file you can test with that is encoded in that way that you could test with if you want.

        2. 2

          I have not tried it, actually; I will today, if you’d like. In the process, I’ll figure out the FFmpeg parameters needed.

          1. 1

            Thank you very much, that would be helpful. I’m open to update my article based on the MDN’s recommendations.

            1. 4

              I’ve uploaded a one (1) minute clip to: https://emacs.ch/@carcosa/109365376767283805

              The command-line I used was:

              ffmpeg -i 'Sacrifice of Angels Battle.mp4' -vf scale=1920:-1 -c:v libvpx-vp9 -crf 24 -b:v 2000k -row-mt 1 -c:a libopus -b:a 48K 'Sacrifice of Angels Battle.webm'

              1. 2

                The audio is way out of sync, dunno if that’s your fault or the source material

    15. 1

      Why are these functions and not constants?

      1. 1

        Because they aren’t response codes, but methods to generate those responses.

    16. 6

      Better title: a minor Windows program is a lot slower than it should be, for this user.

      1. 35

        Given his track record of identifying performance problems in Windows apps (and not just minor ones), which mostly boil down to “the app is doing way too much, often unnecessary, work”, I think he is entitled to use this title.

        1. 1

          That may well be, but this specific article in isolation (which is what most of us are judging by) does not substantiate the claim. The author even acknowledges it in the first paragraph:

          I apologize for this title because there are many things that can make modern software slow. Blindly applying one explanation without a bit of investigation is the software equivalent of a cargo cult. That said, this post describes one example of why modern software can be painfully slow.

          1. 21

            I don’t understand the apology and think the title is fine. I read it like: “Why criminals are caught (part 38) – (the case of) the Hound of the Baskervilles”. Where the parenthetical parts are optional. You wouldn’t think a blog post with that title would claim all criminals are caught because of the specific reason in that case.

      2. 11

        for this user.

        Are there users for whom waiting an extra 20 seconds before they can start using a program is acceptable?

        1. 7

          No, but there are likely users with fewer than 40,000 files in their Documents directory.

    17. 8

      I got almost to the end before I realized that the titles are links.

      1. 6

        I got to this comment before realizing it…

    18. 5

      My pairing partner and I were doing silent ping pong. Ping-pong style pair programming is when one programmer writes a test and passes the keyboard to the partner, who writes enough code to pass the test. He or she then writes a new test and passes control back to the first person. In the silent variety, you’re not allowed to talk.

      Seriously?

      I swear, the more I hear about “Agile”, the sillier it gets and the more I want to get out of this industry. All I know is that before “Agile” was being shoved down our throats by new management, my department never missed a deadline and had only two bad deployments in a decade. Now? It’s been an unmitigated disaster and I’m surprised we even still have our customer (the Oligarchic Cell Phone Company).

      Okay, rant over.

      For testing, we basically give them a base name (for the overall feature) and a number. If a test fails, you can check the testcase itself to see how it should work, given that a testcase contains how the data should be configured and what is expected to be returned (I should note we’re using a custom testing tool because of the nature of our codebase).

      1. 4

        Not only that, but once we’ve established that “silent ping pong” is a thing the author thinks is a good idea, why on earth would I continue reading anything else they’ve got to say about programming?

        1. 3

          It’s an exercise. Not a way to work. They even said as much in the article you are commenting on, but I guess you skipped that part?

    19. 21

      I disagree with the response, assuming that the original question was using the original icon font (Font Awesome) as a supplement for their buttons. That is, buttons and other UI elements were composed of a combination of both icon and text. I think that is a likely interpretation given that they have hidden the icons from screen readers.

      In that situation, I think that it’s perfectly acceptable. The ambiguity is mitigated (if not entirely removed) by the text next to the icon. Even with the “risk” that the emojis look out of place compared to the rest of the website, I think it’s still fine. (I’m also of the opinion that websites should conform more to the client’s OS rather than fight against it. That websites should blend in with the rest of the native applications rather than look distinct.)

      1. 14

        I agree. The answer is responding to a strawman. The question wasn’t if replacing text with emojis was a bad idea, but rather if it was a good idea to replace icons with emojis.

        Secondly, the response is commenting that older devices or OS might not have the required support, but the question does specify that this is an internal app, so presumably they have control of what devices and what versions of OS the app will run on and can make a decision based on that.

        Thirdly, the answer is conflating bad design and emoji use. The question is asking if a button with an emoji, for example [✔ OK] would work well as an interface, yet the answer manages to present this as an example where that could be misinterpreted:

        often 👥 will misinterpret emojis that their peers 📦️➡️ to ➡️👥. ➡️👤 do ❌ 🙏 to have a sudden misunderstanding between 🆗 ➕ apparently also 🆗 emoji like this: 🙆;

        And finally, they seem to believe the emojis would be inserted in the middle of text strings instead of being consistently formatted as a pictogram for buttons or messages.

        I give the answer a massive 👎

    20. 1

      Putting the object into a special array is just another way to set a feature flag; it is just a feature flag that is harder to find. You either add a field to the object, or add a field outside the object with a pointer to it.

      1. 1

        What they seem to be describing is a vague approximation of the Entity Component System pattern. The benefit is that it moves the complexity of the entire program out of the object itself and into the systems that need to know that state. Why does an entity need to know it’s visible? Shouldn’t the system handle that for all entities with that component instead? There isn’t any one answer, but this kind of pattern might simplify your program a lot.
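
          Roughly, the idea looks like this (a Python sketch, purely illustrative; real ECS libraries differ in the details):

          from dataclasses import dataclass

          @dataclass
          class Position:
              x: float
              y: float

          # Entities are just ids; components live in per-type stores.
          next_id = 0
          positions = {}     # entity id -> Position
          visible = set()    # "Visible" is a bare tag: membership in this set

          def spawn(x, y, is_visible=True):
              global next_id
              eid = next_id
              next_id += 1
              positions[eid] = Position(x, y)
              if is_visible:
                  visible.add(eid)
              return eid

          def render_system():
              # Only this system needs to know what "visible" means; the entity doesn't.
              for eid in visible:
                  p = positions[eid]
                  print(f"draw entity {eid} at ({p.x}, {p.y})")

          spawn(0, 0)
          spawn(3, 4, is_visible=False)
          render_system()  # draws only the first entity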

        1. 1

            The code checking for a particular flag should be put into files close to each other (put into the same system); that is the essence.