Threads for migurski

    1. 126

      I’ve been finding GitHub’s slow migration to React pretty annoying too. Back in the pre-React days it used to be rock-solid - stuff Just Worked. I’m increasingly running into little weirdnesses now caused by the move to more of an SPA architecture.

      Just now I clicked a link to an issue and watched an on-page progress bar that seemed to get stuck… so I copied the URL into another tab and added /issues/1 and it loaded correctly.

      It’s also annoying to me personally because I used to enjoy using GitHub as an example of how you can build an extremely sophisticated web application with a great user experience without having to render everything client-side.

      I don’t agree with the “legacy” software framing though. The React bugs aren’t happening because Microsoft are losing interest in investing in the platform - it’s because they ARE investing in the platform, but in 2024 convincing any front-end development team NOT to go all SPA is almost impossible.

      <Simpsons old man shouting at clouds GIF here>

      1. 29

        To provide some anecdata in the other direction, I’ve had very similar behaviour to what you describe happen on GitHub for years, especially when using the back button to jump back and forward between different pages, or when on shaky internet. Pages would regularly fail to load until I reloaded them, and the URL was often out of sync with the page I was looking at. I believe this is some turbolinks-style Javascript that has been regularly (and unnecessarily) breaking GitHub for years.

        Completely unscientifically, my impression is that these sorts of bugs have been going down recently, at least since some of the new React pages have been coming out. This may not be accurate, or it may be unrelated to the React changes, but I feel like the site has been working better recently.

        When you say GitHub has been your go-to example of a good web application without client-side rendering, it has ironically been my go-to example of a web application making things unnecessarily worse for the end-user by mixing and matching server-side and client-side rendered content. I agree, though, that switching entirely to React is probably not the choice I would have made in this case.

        1. 12

          I think SPA vs MPA isn’t the relevant distinction. Both can be done better or worse. The problem is that GitHub is now moving resources to where the money is (AI, Actions) and away from loss leaders (Git, the web UI), and the result is a drop in quality.

          1. 4

            I’m not sure I agree with that assessment. As I pointed out in my comment, I’m running into fewer bugs than I used to while using the UI, so from that perspective it feels like quality is increasing.

            1. 8

              Most irritating new bug I have encountered is Microsoft Github treating attempts to select or scroll by dragging as attempts to edit, so an unwanted I-beam appears. Good grief. Is there a way to turn it off?

              1. 7

                I haven’t encountered this on Github but it’s a daily annoyance on Linear. Not everything needs to be GDocs-style editable rich text and the frontend engineers never get the keypress handling right so it messes with browser navigation.

            2. 3

              GitHub won’t open in Safari on my iPhone. It just crashes the page. I dunno, seems like something bad is happening quality-wise but YMMV.

            3. 1

              There’s a new bug where the notification indicator doesn’t go away when you visit the target page.

              1. 1

                A visual glitch that started for me last week: when I merge a PR, the “delete branch” button shows up for two seconds before it disappears again, because I have my repos set to auto-delete branches on merge. This didn’t use to happen; until last week it would just say “deleted” as soon as I pressed merge.

        2. 4

          My experience in the past was similar; I always thought that Turbolinks or whatever was causing the issue. Moving to React may solve or worsen the issue in theory. Bfcache is a tricky beast and hydration is still not great, especially if you have a large application codebase. But it can be tamed, and may give a better experience than slapping on Turbolinks.

      2. 2

        I think the ‘click a link and watch the progress bar forever’ issue has been a thing for a while; I recall an article some time ago explaining why it happens. Unfortunately, I couldn’t find that article easily, because searching “Why GitHub is slow” unsurprisingly turns up endless unrelated content.

    2. 1

      My most-used shell command is my alias lasttime, which expands to history | grep "!#:*" | tail -n 20 (tcsh syntax) — it passes multiple args to grep and I couldn’t live without it.
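      Sketched as a line for ~/.tcshrc (an assumption about how the alias is defined, not the author’s actual dotfile; in tcsh aliases, \!#:* is a history reference to the arguments of the current command line, and the exact escaping depends on quoting):

```tcsh
# ~/.tcshrc (hypothetical): grep recent history for whatever arguments
# are passed to the alias; \!#:* expands to the alias's arguments
alias lasttime 'history | grep "\!#:*" | tail -n 20'
```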

    3. 2

      It’s wild how much influence Bryan had on FB. I didn’t know about his involvement in the Mercurial move detailed in this post, but I was familiar with his earlier work on HipHop for PHP. Both are great examples of soft power in engineering culture!

    4. 4

      I like the idea but it makes Perl look positively sane. It looks like line noise.

      1. 7

        You’re given a dictionary and asked to find the longest word which contains no more than 2 vowels. […]

        { ⊃ ⍵ ⊇⍨ ↑⍒≢¨ ⍵ /⍨ 2≥ (⊂"aeiou") (+/∊⍨)¨ ⍵ } io:read "dict.txt"
        

        At this point, comments suggesting the code is “unreadable” are bound to be thrown around, but once you learn a handful of symbols, this is in fact significantly more readable than the kinds of formulas I see on a regular basis in Excel.

        I have to admit I’m having difficulty imagining myself learning that handful of symbols!
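        For readers who don’t know the symbols, here is a rough Python sketch of what the expression appears to compute (my reading, not a verified translation; the inline word list stands in for io:read "dict.txt"):

```python
# Rough Python equivalent of the array expression above: find the longest
# word containing at most two vowels.
def longest_word_with_few_vowels(words, max_vowels=2):
    candidates = [w for w in words
                  if sum(c in "aeiou" for c in w.lower()) <= max_vowels]
    return max(candidates, key=len)

# Stand-in for reading dict.txt: any iterable of words works.
print(longest_word_with_few_vowels(["strengths", "queueing", "rhythms", "cat"]))  # strengths
```

The array version does the same three steps with primitives: count vowels per word, filter to those with at most two, then pick by descending length.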

        1. 10

          Until someone asks me to stop, my Factor-translating compulsion continues . . .

          "dict.txt" utf8 file-lines
          [ >lower [ vowel? ] count 3 < ] filter
          [ length ] supremum-by
          
          1. 4

            a more direct translation would be something like (not tested)

            [ [ [ "aeiou" member?  ] map-sum ] map 3 < ] filter
            [ length ] sort-by last
            
            1. 2

              Thanks! I don’t know what any of the symbols mean in the original, and probably shouldn’t have called my version a “translation.”

              But your version looks like it tries to sum booleans?

              1. 3

                ah, summing booleans is how APL does it. guess i need a ? 1 0 ] map-sum instead.

          2. 4

            Huh, this actually looks a lot nicer than the original. Do you have any personal recommendations for Factor resources?

            1. 5

              Someone recently posted a write-up here on lobsters for a small concatenative language that looks a lot like Factor, and it does a great job explaining the basics, like working with a stack.

              I try to add good resources as posts and sidebar content to c/concatenative on lemmy, including:

              I did a handful of Advent of Code days with it, but those may not be very good…

              Currently I’m practicing with the Perl Weekly Challenge series.

              1. 3

                Wow, thanks so much!! This is super helpful, especially appreciate the link to c/concatenative and the Perl Weekly Challenges.

              2. 2

                those may not be very good…

                I think it’s going to be 10,000,000,000,000 years before I can write idiomatic Factor, and I’m trying to decide whether I care.

        2. 5

          To my surprise ChatGPT interpreted that code and figured out what it did! https://chat.openai.com/share/b3d0dd06-d373-430d-9fe1-6898d25d56a3

          (I gave it a link to the manual but it didn’t fetch it to read)

        3. 2

          Trick to make it seem easier: learn a language with a different writing system. :)

        4. 1

          Fortunately, the documentation is pretty solid and self-explanatory

      2. 6

        I like how uiua has both symbol and ascii names for each operator.

      3. 2

        This kind of comment is unhelpful and potentially damaging. There are billions of people to whom the sentence you wrote here looks like line noise.

    5. 1

      To paraphrase Michael Pollan, “write code, not too much, mostly procedural” 

    6. 8
      • Registrar: Porkbun
      • Web & Mail: Pair Networks

      I’ve been on Pair for almost 25 years, beginning at a time when I was founding my own web hosting companies and designing, implementing, and administering the mail service for 100k people myself, using Pair for separation of concerns. They are nothing flashy, but they are solid, excellent, and conservatively stable.

      1. 3

        +1, Pair has been great for me since 2001.

    7. 16

      No mention of single pixel spacer GIFs in the table layout section!

      1. 4

        and the single pixel tracker images…

    8. 3

      For a while at the last agency where I worked, we had a standard practice of maintaining per-client blogs where we shared the work as it developed. Invaluable when it was later time to dredge up a screenshot for a presentation.

    9. 4

      https://protomaps.com/ is SO cool: “an open source map of the world, deployable as a single static file on cloud storage”. Absolutely incredible piece of engineering.

      1. 2

        It’s really remarkable!

    10. 6

      This 15-years-ago talk by Leonard Richardson provides useful perspective:

      What happened? There are fewer big-name Internet protocols now than in 1993. Well, the Web ate those other protocols, just like the Internet ate Compuserve and BITNet and the UUCPNET and a bunch of other networks you’ve never heard of. It turns out most of the things we want to do on the Internet–have discussions, search databases, share files, look up things in directories–we can do over the web.

      https://www.crummy.com/writing/speaking/2008-QCon/

      The whole thing is excellent and a useful bit of Silicon Valley / Cold War history.

    11. 2

      Many of these arguments are true specifically in the context of public-facing web apps, but I found a few of the assertions hard to accept. For example:

      vanishingly few scaled sites feature long sessions and login-gated content.

      It’s not clear what he means by “scaled sites” or how long a session needs to be to count as a “long session,” so maybe this is just a question of mismatched definitions, but I’d suggest reality is the reverse of this: vanishingly few scaled sites don’t feature long sessions or some amount of login-gated content.

      My suspicion is that the author’s perspective is heavily skewed by the fact that he’s a consultant who’s brought in to help struggling projects. With that daily experience it’s easy to underestimate how many projects are doing just fine: they are invisible because they never need to call in a consultant to untangle their nonexistent mess.

      1. 6

        Sure, being a consultant probably causes a degree of bias, but as a user SPAs largely suck as well. The most successful SPAs largely seem like things that should just be apps, but the insistence on their being webpages means that browsers get ever more enormous. And then, because they aren’t apps, these SPAs have fairly shitty UX as they try (and fail) to implement all the host OS conventions that would be automatically managed by the browser if the SPA wasn’t so busy trying to be an app.

        1. 2

          So you think that the web should remain just a simple text browsing system, and Wikipedia is the pinnacle of what’s possible on the web?

          I think the opposite. The text-browsing use case is not as important, and applications are more important. The browser became an application platform because that’s what people want. Saying that SPAs are bad for the user is a contradiction, because if they were, we wouldn’t have so much reason to build application technology into browsers.

          I don’t believe there’s a cabal of people purposefully trying to make technologies that are terrible for users as seems to be implied here. If that were the case, people would stop using the web. Yet, there are more web applications than ever, indicating that users’ needs are being met. When you talk to real people, you find that they don’t expect full-page refreshes and flickering all around the page after interacting with content. They expect smooth transitions and keeping page context so that the transition isn’t jarring. You know, like the real physical world.

          1. 2

            So you think that the web should remain just a simple text browsing system, and Wikipedia is the pinnacle of what’s possible on the web?

            I didn’t say anything like that?[1] I said SPAs are bad, and mostly make for bad UI, bad UX, and bad performance. These issues are almost entirely the result of SPAs, by design, undermining basic OS behaviors and user navigation mechanisms provided for free by the browser. The advantage for a company making an SPA is that by breaking these, a user is forced to (1) always keep the full app open and (2) always go through a central funnel rather than being able to go directly to only what they needed. The other advantage is that you can make your low-effort “good enough” app for Windows, where there are a lot of users, and then say you support other platforms because it’s in a browser: never mind that your app is essentially broken on other platforms; those users just need to learn how to use Windows.

            [1] Just to clarify here: I spent more than a decade working on browsers, and was directly involved in, and in some cases responsible (to blame?) for, numerous specs you probably take for granted. I believe you can and should be able to do great things with web technology. I think SPAs, however, defeat the whole point of being in a browser, and result in an across-the-board worse experience than a properly created app in a real app environment. A browser is necessarily always going to restrict access to the host environment, which means the fidelity of an in-browser app will always be worse than a native app’s. Given that, why would you also throw away a bunch of the features that make the browser environment genuinely different, and in many respects better, than regular native apps? Again, I know the answer is that it’s cheaper to do it this way, but being cheaper doesn’t undo that the app itself is worse.

            1. 1

              I fundamentally believe that client-side state management is a really diverse and difficult design space, and I don’t want the browser owning it. I honestly love the simplicity of server-rendered apps, but that comes at a cost, which is the cost of control of client-side state management. That’s not a price I’m willing to pay.

              You share the belief of a lot of people, which is that SPAs are fundamentally bad, and the browser should have more control. None of the arguments are compelling to me, but it’s a valid opinion and I understand where it’s coming from to a degree. This article is just so condescending and self-righteous, and that’s what I don’t like. You can suggest new approaches without putting down others.

            2. 1

              Installation is one reason that isn’t just “it’s cheaper.” Installing an app is added friction at best. At worst, it’s a showstopper: maybe I don’t publish a binary for your particular operating system. If you’re on a locked-down corporate PC, using my native app may require a weeks-long process of arguing with IT about adding my app to the list of blessed software, and the outcome might be “request denied.” Using my web app, on the other hand, only requires the ability to connect to the public Internet from any reasonably modern web browser that has JavaScript enabled.

              1. 2

                Installation of an app is the key signal we can take that a user wants to let software have access to system information that you don’t want available to regular web content.

                The reason you can use a browser based app without “IT’s approval” is because of the restrictions placed on web content. So those restrictions cannot go away without breaking what actually makes the web interesting.

                But I just want to reinforce: I’m not complaining about webapps - I spent years working on browsers to support them - what I’m complaining about is how crappy the SPA experience is because it’s trying to behave like a native app, rather than accepting that it is not, and leveraging the advantages of the environment it operates under. So you break a bunch of basic functionality that web apps used to have (back, forward, direct linking, etc) while also not properly supporting text input, copy/paste, search, etc that a native app can do.

      2. 3

        Useful bit of context is that the author is not a consultant, he works for browser makers: previously Chrome team, currently MS Edge.

        1. 1

          Oh, I missed that this was by Alex. I have worked with him in the past, and honestly I would have just ignored the article entirely if I had.

          My experience is that he’s the closest I’ve come to having to work with a troll.

        2. 1

          Okay, fair point. I misinterpreted the first paragraph of the piece.

    12. 5

      Instead of whittling buildings from living trees as real builders once did, we are reduced to merely assembling purchased wood and bricks.

      1. 7

        But you can build arbitrary things out of wood and bricks.

        What I see with the cloud and modern frameworks is that you can start with a canned architecture, and get something that’s roughly similar to what you want kinda quickly.

        But then you spend 90% of your time patching over the last 10%, and you never really get what you want. Instead you pass it on to the next team, which rewrites it on some newer non-composable abstraction, from the same vendor, or a different one. The abstractions eat up any hardware improvements, so that the same program has higher latency than it did 5 years ago.

      2. 2

        But the wood you are purchasing isn’t 2x4s. It’s just tree branches that you need to combine together. They don’t fit quite right so you need to stick some mud in between them so the wind doesn’t cut through the building.

        1. 8

          I wouldn’t even call it raw tree branches or 2x4’s. Those can be eventually fashioned into the shape you want, with enough work (and, in software, work can be automated!).

          I would say the analogy is closer to trying to build a house out of Ikea furniture parts. Those parts are great, if what you want to build is what the designers intended! But if it’s not (and it’s often not), then you’re stuck with hacks. After the hacks, the system continues to work poorly.


          Steve Yegge has a couple good analogies that get at the composition problem:

          Java is like a variant of the game of Tetris in which none of the pieces can fill gaps created by the other pieces, so all you can do is pile them up endlessly.

          http://steve-yegge.blogspot.com/2007/12/codes-worst-enemy.html

          In the cloud, a common pattern I see is “adding caches” for things that shouldn’t be slow in the first place. The caches patch over some problems, and add more.

          They leave big holes in correctness and performance, which are obvious problems we should be aware of. There are also highly non-obvious problems like metastable states: https://www.usenix.org/publications/loginonline/metastable-failures-wild

          (i.e. the presence of a cache now means that restarting a stressed system does NOT cause it to recover. All cloud companies have complex software and processes to patch over this problem, implicitly or explicitly. On the thread about Twitter, I pointed out cloud systems have people turning cranks all day long, and SREs were (probably still are) the most numerous type of employee at Google)


          And Legos:

          With the right set (and number) of generically-shaped Lego pieces, you can build essentially any scene you want. At the “Downtown Disney” theme park at Disney World in Orlando, there’s a Legoland on the edge of the lagoon, and not only does it feature highly non-pathetic Lego houses and spaceships, there’s a gigantic Sea Serpent in the actual lake, head towering over you, made of something like 80 thousand generic lego blocks. It’s wonderful.

          Dumb people buy Lego sets for Camelot or Battlestars or whatever, because the sets have beautiful pictures on the front that scream: “Look what you can build!” These sets are sort of the Ikea equivalent of toys: you buy this rickety toy, and you have to put it together before you can play with it. They’ve substituted glossy fast results for real power, and they let you get away with having no imagination, or at least no innovative discipline.

          https://sites.google.com/site/steveyegge2/ancient-languages-perl?pli=1


          A major point of my posts on software architecture last year, which it took me many words to get around to, is that Unix is a language-oriented operating system, and that means it composes like a language.

          The cloud is not language-oriented, and doesn’t compose.

          And I think the “everything is text” problem is sort of a red herring (it’s a tradeoff/downside of the Unix style).

          I believe that we’re so focused on solving the “I need types for fine-grained autocomplete” problem that we’ve lost sight of the “we’re writing way too much code that works poorly” problem. One problem is local and immediate, while the other is global and systemic.

      3. 1

        We’ve been building from wood and bricks for the last few decades, but these days we’re building prefabbed kitchens and bathrooms, jamming them together until the joists buckle, and shipping that.

    13. 4

      Building any machine that lasts 50 years takes some work. The durable tech artifacts like typewriters and guns and mechanical watches and so on that tend to last a long time with minimal maintenance are also precision machines that are massively over-engineered, basically because they have to be very tough to keep that precision. How many, say, chairs, bicycles and pocket calculators are still around from the 1970’s? A lot more than zero, sure, but not that many.

      Also, unlike bicycles and can openers, computers have a lot of components inside such as northbridge chips and drive controllers that are basically manufacturer-specific. If you want to replace a part in them 45 years from now then you either need a donor system or you need a manufacturer still producing those parts.

      I’d personally be really happy with a computer explicitly designed to last 10 years; there’s plenty out there that are this old, but mostly by accident.

      1. 13

        Many 1970s Schwinns are still in service, because they were sold with a lifetime warranty and thus designed to not need much servicing in spite of being mostly sold to young people who were not expected to treat them gently.

        As a tradeoff to meet their durability constraints at their price point, they were generally very heavy machines built with technology that was seen as outdated even when new: 50-70% heavier than many competing bicycles, with clunky 1950s derailleur and shifter designs that didn’t have a wide range of gears (which meant lower-precision and lower-maintenance components worked just fine). Their frame was also mass-produced via a unique technology, but it required huge capital expenditures and would have been very expensive to adapt to changing tastes: https://www.sheldonbrown.com/varsity.html

        They are still pleasant machines to ride in the right circumstances (flat terrain without a lot of starting & stopping), which is why I am content keeping a $75 1970s Schwinn as the bike I ride when I visit my parents in Wisconsin. But Schwinn nevertheless went bankrupt (and is now a Walmart brand) because customers wanted lighter, more capable machines and were willing to accept a bicycle less likely to last 50 years in exchange.

        1. 2

          Thanks for the story! This is a great example of the tradeoffs involved. (Now that I look up pictures, I think my mom had one of those bikes in the early 1990’s.)

          I’m curious, do you know what the market for parts is like? Have any companies cropped up making reasonable replacement parts, or are people just steadily cannibalizing old bikes? I’d guess the former, since it’s relatively easy to make moderate amounts of simple shapes out of steel, but…

          1. 3

            Bike parts are reasonably standard regardless of make & model especially on steel frames from US, English, and Japanese manufacturers. Velo Orange is one company that’s really marketed themselves as a maker of parts for old bikes, but they are the nice maker in a market that also includes free parts bins at a community bike kitchen.

            1. 1

              I do still ride a bike built on a frame from around 1980 with parts mostly from the 2010s as a winter training bike (after many years of service as a 4-season commuter). There are only a few major mechanical interfaces (bottom bracket, headset, seatpost, brake mounts) on a bike frame, and, as mentioned, they mostly became globally standardized by the ISO around 1980 for ‘normal bikes’, although there has been an explosion of proprietary parts over the past decade on a lot of high-end bikes for the sake of weight/aerodynamics/stiffness/etc.

              For a bike like a 1970s Schwinn, mostly built to older American standards, one often needs to dig through the parts bin at a community bike kitchen or hunt things down on eBay. But everything is very durable & rebuildable so replacement parts other than brake pads & chains are rarely necessary.

    14. 8

      I really dislike the cattle vs. pets analogy, because it reinforces a speciesist world view that sadly is very prevalent in almost all parts of the world.

      1. 17

        I like it because it refers to a perspective shared among those who see it. Even those who disagree, understand the meanings behind it.

        1. 10

          Ehhhhhh… I mostly agree in this instance, but this is unfortunately not a great line of argument to take in general. You can apply the argument to any kind of human discrimination in history, and it’s just as true. If you describe something as “X for white’s, not spic’s” people will generally understand the meanings behind it, but that doesn’t mean you couldn’t do better.

      2. 7

        You could have a pet cow.

      3. 6

        What would you rather use instead?

        1. 10

          Plants vs. crops, maybe?

          1. 7
            1. 7

              Calling my desktop a rose seems unfair to roses.

          2. 4

            I like that, maybe even garden vs farm?

          3. 2
        2. 3

          Reproducible and non-reproducible configurations. The key feature of cattle that we are trying to isolate is the ability to spawn new machines with a known-good configuration on a reliable basis; in other words, cattle are reproducible.

          1. 2

            I really like that term. It also doesn’t take size into account so much, and it leaves the question of how these goals are achieved wide open.

          2. 1

            I mean, the very point of this system is to make reproducibility easy…

        3. 1

          Clusters and individual machines?

      4. 5

        Beyond that, it’s a really bad analogy in general. In most situations it’s just “now your cluster is the pet”. It’s also bad because it somehow makes it sound as though keeping cattle is easier than keeping a pet.

        On top of that, it’s in the same vein as tons of other lines meant to shut people up before the argument starts, so that you don’t have to know what you are talking about.

        A similar one is that complaining about the term serverless is like complaining about horseless carts, when in reality it’s more like calling a cab “carless”.

        I really think those statements and analogies do the industry a huge disservice and should be abandoned altogether. Not because analogies are always bad (they are pretty much always imprecise, though), but because these don’t even serve the purpose of an analogy, which is explaining things well. We have good analogies in IT that mostly work, from cryptographic keys to files and directories (or folders, if you are into that). What they have in common is that they explain something better than any technical term. Cattle and pets don’t. At best they make bold claims about how things work, but they break easily no matter what direction you take them. Think about protecting your cattle or your pet: how does the analogy even work in terms of security, which is a big part of operations? For files, again, it works: putting a file into the trash bin, retrieving it, emptying the trash, all map well, and so does protecting a file.

        I think the difference is that some of these “bad” analogies are being mostly used for marketing and like mentioned to bring points across when you lack good arguments.

      5. 4

        Backyard vegetable patch vs industrial farming.

    15. 4

      I’m responsible for approving cpu/ram/storage increase requests from developers and stories like these do kinda make me wonder if I should be as lenient as I am.

      I pretty much approve every request, because what else am I going to do? Scour the source code of every app for inefficiencies? I did do that once, for someone who wanted 200GB of RAM just so they could load a huge CSV file instead of streaming it from disk.

      Maybe it’s just a thing where trust can be built up or torn down over time.
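      The CSV case is a nice illustration of why the review can pay off. A sketch of the streaming alternative (the file layout and column name here are hypothetical, not from the actual request):

```python
import csv

# Aggregate a large CSV one row at a time instead of loading the whole
# file into memory.
def total_column(path, column):
    total = 0.0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # only one row is held in memory at a time
            total += float(row[column])
    return total
```

With this shape, memory use is proportional to one row rather than to the whole file, whether the CSV is 200MB or 200GB.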

      1. 3

        Asking from ignorance here: I’ve never worked somewhere where you had to request cpu/ram/storage. Instances or VMs, yes, but not asking to have some more RAM and having to say how much up front. How is that managed? You have processes killing containers that use more RAM than the developer asked for? Or more CPU? And… why? Is it a fixed-hardware environment (eg not in cloud) where it’s really hard to scale up or down?

        1. 3

          Yes it’s fixed-hardware (grant funded), shared amongst several different teams. My role is mainly to prevent the tragedy of commons, and a little bureaucratic speed bump is the best I could think of.

      2. 1

        Figure out how much the hardware will cost, figure out how much developer time will equal that cost, and force them to spend at least that much time profiling and optimizing their app before the request is approved?

      3. 1

        Problem in this case is that redis is (essentially) single threaded, so give it as many CPUs as you like; if something is eating CPU at a high rate, you’ll need to solve the root cause.

      4. 1

        I know several people who worked at Yahoo in the 00s. To get new hardware you’d have to go to a hardware board, that included David Filo.

        He would grill you and then literally log in to your servers to check utilization, and if things didn’t look good enough he’d deny your request. In one case I was told he logged in and found misconfigured RAID controllers that wasted a bunch of resources. Request denied.

        I’m not suggesting you do this. But thought it was interesting.

        1. 5

          What an utterly bananas way for a cofounder to spend their time: hire people they don’t trust, then micromanage their decision-making.

      5. 1

        If you don’t look at the reason behind the request, then the process seems weird… If you don’t know what the app is doing, how can you decide if it should be approved?

        1. 2

          I mean, only investigating when something unusual is requested seems like a pretty reasonable heuristic.

    16. 1

      I wish the first half weren’t there. The piece is not off topic, though, because the second half brings some good ideas to consider when building large social platforms.

      1. 1

        I wish the first half weren’t there

        Why?

        1. 4

          It is a political message that probably manages to not alienate, but seems to take a clear side of a political argument, instead of simply giving a justification and moving on to smarter ways of doing identity.

          1. 16

            instead of simply giving a justification and moving on to smarter ways of doing identity.

            It is a justification. There are in fact many people in the world who would face physical violence or criminal prosecution or both for admitting openly to their sexual orientation or identity. This isn’t a “political message”, it’s a true and verifiable factual statement. And as a result, many such people feel an urgent need to avoid tying their “real identity” to anything having to do with their sexuality. Noting this also isn’t a “political message”, it’s a valid example of why “real name” policies have problems.

            Nor is any of this a “dogwhistle” – there’s no hidden meaning or coded message that only a particular in-group is expected to pick up on.

            The article appears to simply say what it means and mean what it says. There are, verifiably, many people for whom “real name” policies are a problem and for whom having their “real identity” “outed” in certain ways might expose them to anything from social to physical/legal punishment. The author simply seems to have picked an example with which they were familiar, and I’m not sure why that would be perceived as wrong or bad or inappropriate.

            1. 4

              Had the first half just been your first paragraph, I’d have no problems with it.

              1. 15

                I still don’t understand what the “problems with it” are.

                1. 2

                  I think, based on responses and non-responses elsewhere, the “problems with it” are clearly with considering LGBT people to be, you know, people. You, I, and multiple other people in this thread have provided multiple opportunities for @Vaelatern, @Hail_Spacecake, etc. to explain what concept is political or otherwise a problem that is not “do all people get to be considered people”; there isn’t really an alternate interpretation.

                  To that group of homophobes, transphobes, etc: Many of the most horrific crimes in history came about as a result of some group deciding that another group of people were not human beings with just as much right to exist as everyone else. You cannot say that “does this group get to exist?” is a political question unless you do not think that they are people.

          2. 12

            Observing that tech often ignores politics (and the people those policies impact) exist is exactly what is forming this author’s opinions on the technology argument. The first half is the “[simple] justification”.

            1. 5

              The problem is that actually engaging with the specific claims made by this article requires making strong political claims, and lobsters has a moderation policy of banning discussion the moderators judge to be too political. This is done for several reasons, not the least of which is that actually having a meaningful discussion about a politicized issue of computer technology often makes people extremely angry.

              I agree with the author that it’s very important to build technologies that facilitate anonymous and pseudonymous online communication, and that the current landscape of large, privately-owned online communication platforms run by organizations that have an interest in enforcing a real-name policy is bad. In fact, I believe this more strongly than the author does - I think most of the ways in which he’s hedging against this point are invalid, and invalid for reasons that are strongly politicized in ways that lobsters moderators have historically banned discussion of. There are claims in this article that are flatly wrong, made for political reasons.

              One method that might help is prohibiting amplifying any account in algorithmic feeds (e.g. Twitter, TikTok) unless identity is verified. You can follow anyone, but you won’t ever see a “suggestion” from a non-verified account, preventing their amplification effect.

              What actually constitutes “helping” in the context of when and how to secure online (pseudo)anonymity is itself a political question. I don’t think a policy or technical solution that allows people to post (pseudo)anonymously but only allows posts by verified real-name humans to be amplified is a solution to any problem in this space I think is important. I might want to amplify specific posts from anonymous (or at least unverified) accounts myself, by sharing them with people i know or to anyone reading my content, and I would not want that sort of posting behavior to be curtailed by any platform requirement that anonymous accounts not be amplified.

              Another might be stronger “liveness” testing. Some proposals already exist to make this better than the current captcha system. This could also utilize the biometrics that modern devices provide.

              I’m all for a better alternative to captchas if possible; but what I don’t want is to have to prove that I am a specific human being in order to read information on websites. If biometric-verification systems for gating access to websites were widespread, I would be extremely interested in finding ways to bypass them precisely because I don’t want the people running the system to be able to figure out what websites I’m reading.

            2. 3

              I could summarize it without the political dogwhistles it uses.

              1. 17

                Those aren’t dogwhistles (“use of coded or suggestive language … to garner support … without provoking opposition”) they’re just a normal, completely forthright, and explicit argument.

                1. -1

                  This is getting off topic. It’s a dogwhistle, and I wish the first half of this article were not there, because until I had read the second half, I wanted to flag this as outright political.

                  1. 12

                    I don’t think you understand what dogwhistle means. It really doesn’t apply in this situation. The author is clear about the groups and policies discussed. There’s no wink, wink here.

                    1. 1

                      Then perhaps it’s comments that provoke outright opposition? The language use is clearly on one side of the political aisle.

                  2. 8

                    Repeating yourself isn’t a useful form of argument or comment here. And that’s literally all the above comment does. You’ve said you think it contains dogwhistles in an earlier comment. You’ve said you wished the first half of the article were not there in an earlier comment. You’ve said you view it as a political message in an earlier comment.

                    What you haven’t done is justify why you believe there are dog-whistles in it, despite that being what people are challenging you on. They quoted a definition, and said what they think it is instead - responding by repeating the claims without arguing why they are true isn’t a meaningful response. It comes across as just trying to badger the people disagreeing with you into not doing so, and I’ve flagged it as such. I’m leaving this comment primarily to explain why I’ve flagged it.

              2. 5

                I think you are misunderstanding what a “dog whistle” is

                which demonstrate why this would put marginalized people in physical danger, as I show below. Enforcing identity is great at reducing problems for you as long as you’re straight, white, male, and American. For others it’s not quite as clear cut. This has been extensively researched, and Jillian C. York maintains a list of a lot of that research.

                Is not a dog whistle. It is stating the issue clearly and explicitly.

                A dog whistle is something like “urban youths” as a dog whistle for Black people, “globalist” for Jewish people, being “anti-family”, etc. The goal is to not use the known explicitly bad word, so that people can’t be quoted as saying something explicitly racist, antisemitic, homophobic, etc. That’s what makes it a “dog whistle”: it has the message, but the message isn’t explicitly audible.

          3. 4

            Are you suggesting that identifying types of vulnerable people so you can protect them or at least not harm them is political? Are you saying that acknowledging the types of malefactors who commit violence against vulnerable people is political?

            1. 1

              Yes this is profoundly political.

              1. 4

                In that case, I’d say that creating a service that lets malefactor users harm vulnerable users is also political.

                1. 1

                  Yup - the question of what agents count as the malefactors, and what groups of people count as vulnerable in a meaningful way, are both very political. In the sense that different people will come to very different answers to them which will have mutually-incompatible implications about what software and software features should exist.

                  1. 2

                    This is a super easy question: which group of people is saying that another group of people are not actual people?

                    What you are saying is that “should some people be considered property rather than people?” is a political viewpoint, “should some group of people be allowed to exist?” is a political question, “can we exterminate all people in this group?” is a political question.

                    We need to be very clear here: it isn’t a “political” opinion if it denies the right of some group to exist, or have the same freedoms as others. If you say that the right of some group to exist is a political opinion, that means that you explicitly do not consider that group to be people.

              2. 2

                To be clear you are saying that you believe that the right of minorities to be free from abuse simply for existing is a political statement?

                You really need to be clearer in your comments here, as you are making it sound like you think that “should LGBT people be considered people?” is a political question.

          4. 1

            Where was the political content?

      2. 1

        Have you written a large social platform? If not, how do you know that the article’s advice is any good?

    17. 3

      I agree with the author that learning as much Linux/Unix administration as you can goes a long way. Even in serverless-land, if you understand OS and system concepts, you’ll know how to structure your “functions” properly and efficiently.

      1. 1

        Just curious, what would be an example of an OS/systems concept that would help in writing serverless APIs?

        1. 3

          AWS Lambda instances get reused between invocations that are closely spaced in time, so your runtime will often have persistent junk lying around in its temporary storage. We recently encountered “disk full” errors as a result and are now being more careful to clean up storage.
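
          A minimal sketch of that kind of defensive cleanup (my own illustration, not our actual handler - the scratch directory name and workload are hypothetical). Lambda’s /tmp survives across warm invocations, so the handler wipes its scratch area before doing any work:

          ```python
          import os
          import shutil
          import tempfile

          # Hypothetical scratch area under the instance's temp dir (/tmp on Lambda)
          SCRATCH_DIR = os.path.join(tempfile.gettempdir(), "scratch")

          def clean_scratch():
              """Remove leftovers from a previous invocation of this warm instance."""
              shutil.rmtree(SCRATCH_DIR, ignore_errors=True)
              os.makedirs(SCRATCH_DIR, exist_ok=True)

          def handler(event, context):
              clean_scratch()  # temp storage persists between invocations, so start clean
              work_file = os.path.join(SCRATCH_DIR, "data.tmp")
              with open(work_file, "wb") as f:
                  f.write(b"\0" * 1024)  # stand-in for real work that writes scratch files
              return {"ok": True}
          ```

          Without the cleanup step, each warm invocation would add files until the instance’s storage limit is hit.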

    18. 3

      Dan Milstein wrote well on this topic a while back in No Deadlines For You! Software Dev Without Estimates, Specs or Other Lies. In the preceding post of the series he points out that up-front estimates can only work in regular environments, which in software are only to be found in 4-8 hour chunk sizes and definitely not at the “project” level:

      Kahneman and other researchers have been able to identify situations where expert judgment doesn’t completely suck. As he says:

      “To know whether you can trust a particular intuitive judgment, there are two questions you should ask: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities?”

      Two four-hour tasks tend to have a lot more in common than two six-month projects. You can expect to make hundreds of such estimates, in the course of a couple of years. You get very quick feedback about your accuracy.

      His conclusion mirrors this one: manage overall risk, not time.

      A big topic missing from these discussions is failure generally. This post mentions failures only in the context of wrong estimates, but doesn’t talk about overall risk. A prior post here on σ-driven project management: when is the optimal time to give up? opened my eyes to the idea that giving up early is a good way to manage scope and time:

      σ is an inherent property of the type of risk you have in your project portfolio, and that different values for σ warrants very different types of project management. Low σ means low uncertainty and means we should almost always finish projects. High σ means high uncertainty — more like a research lab — and means large risks of a huge blowup, which also means we should abandon lots of projects. (emphasis mine)

      Engineers don’t like abandoning work until long past its estimated deadline, but we should grow to reach for it as a pre-emptive tool.
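
      The effect of σ can be illustrated with a quick simulation (my own sketch, not from the linked post): model project durations as lognormal with spread σ, and compare how many projects finish in a fixed time budget with and without an abandonment cutoff. At low σ the cutoff barely matters; at high σ, abandoning long-running projects early lets you finish far more overall.

      ```python
      import math
      import random

      def projects_finished(sigma, budget=10_000.0, cutoff=None, seed=42):
          """Count projects finished within a fixed time budget.

          Durations are lognormal(0, sigma). If cutoff is set, a project
          running past it is abandoned: its time is still spent, but
          nothing ships.
          """
          rng = random.Random(seed)
          t, finished = 0.0, 0
          while t < budget:
              duration = math.exp(rng.gauss(0.0, sigma))
              if cutoff is not None and duration > cutoff:
                  t += cutoff  # abandon early: sunk cost, no deliverable
              else:
                  t += duration
                  finished += 1
          return finished

      # Low-uncertainty portfolio: abandoning changes little.
      low_no, low_cut = projects_finished(0.5), projects_finished(0.5, cutoff=3.0)
      # High-uncertainty portfolio: abandoning wins big, because the
      # lognormal tail otherwise eats the whole budget.
      high_no, high_cut = projects_finished(2.0), projects_finished(2.0, cutoff=3.0)
      ```

      The budget, cutoff, and σ values here are arbitrary choices for illustration; the qualitative result matches the linked post’s claim that high-σ portfolios call for abandoning lots of projects.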

      1. 1

        Engineers don’t like abandoning work until long past its estimated deadline, but we should grow to reach for it as a pre-emptive tool.

        How do you do that in a way that makes business sense? If you’ve spent a year and the customer has spent boatloads of money, how do you avoid getting sued for incompetence? If you state up front that you might pull the plug, what sane customer is going to hire you?

        1. 1

          Depends on the business: not all engineers have paying service customers. I expect that most engineering service businesses do low-uncertainty work, which according to this model should simply produce schedule and cost overruns rather than abandonment.

    19. 14

      What an unfortunate logo.

      1. 19

        I think it’s nice.