Threads for veqq

    1.  

      This. Talk. Is. Incredible.

      1.  

        Read “Elements of Clojure” even if you have no interest in Clojure. It covers similar topics in a similar style.

    2.  

      Off-topic, but - I’ve seen the phrasing “how X looks like” (rather than, as I would expect, “how X looks” or “what X looks like”) a lot more recently. I’m curious whether it’s a literal translation of some other language’s idiomatic construction, or due to some other cause?

      1. 6

        I see that all the time, as someone from Sweden, where it’s the literal translation.

        Then again, I don’t get why English speakers sometimes use e.g. “would of” over “would’ve”, or “try and” over “try to”.

        1.  

          wouldn’t’ven’t

          I would attribute this to ignorance/bad education. My favourite example is that a lot of people say “I could care less” instead of “I couldn’t care less”, completely inverting the meaning in the process.

          Other examples are ‘peak somebody’s interest’ instead of ‘pique somebody’s interest’, ‘All the sudden’, ‘for all intensive purposes’ (instead of ‘for all intents and purposes’), etc.

          1.  

            It’s language change. It (always) happens, in every language, and it really doesn’t have anything to do with ignorance or bad education. Meaning inversion is common; see also contronyms like “nonplussed”, “wicked”, “oversight”, intensifying repeated negatives such as “don’t know nothing”, etc. See also skunked term.

        2.  

          why English speakers sometimes use e.g. ”would of” over ”would’ve”, or ”try and” over ”try to”.

          “‘ve” can be pronounced in a few ways, similar to unstressed “of”, so less literate people mix them up. “Try and” is not a similar error, but rather an older (and more limited) construction; many uses of “try to” can’t be replaced with it (“he tried and explained” vs. “he tried to explain”). When both work, e.g. “tried and fought” vs. “tried to fight”, there is a stark difference in meaning, though it’s hard to explain (directionally like orka, with a meaning like “he fought as hard as he could”). See: https://en.wiktionary.org/wiki/try#Usage_notes and, for a historical view: https://www.merriam-webster.com/grammar/were-going-to-explain-the-deal-with-try-and-and-try-to

      2.  

        In addition to literal translation of a construction from some languages, it might be an intuitive attempt at a semantic distinction «look (without like) → perceive via vision, look like → have appearance».

      3. 11

        “Attacks”? Oh, my - sorry if I offended you. I just have a different opinion on what makes a good Scheme standard; don’t take it personally.

      4. 6

        You probably should have put your hat on to make that comment.

        I didn’t read this as an attack—just commentary on whether or not standardization is “good”. But I’ve not been following the commentary on R7RS super closely. Certainly, I suspect it’s not nearly as horrible as the R5RS → R6RS shit show?

        1. 3

          I suspect it’s not nearly as horrible as the R5RS → R6RS shit show?

          Not at all. There are actually some pretty good SRFIs coming out of the “large” standard. But it really is large. Nay, huge. I also don’t really see why it has to be standardized already. It might’ve been better to write some SRFIs and then let the community decide which of them are any good by having them implemented and used in the real world for a few years, and then boil it down to a manageable standard.

          1. 7

            It might’ve been better to write some SRFIs and then let the community decide which of them are any good by having them implemented and used in the real world for a few years, and then boil it down to a manageable standard.

            When I became chair, the first thing I did was to massively cut down the scope of proposals under consideration. From around 140 proposed features on John Cowan’s agenda which didn’t already have SRFIs, I reduced it to less than 40. (Even then, the only reason it remained as many as that is because I originally proposed an even more radical cutback to the WG, but if any single person spoke up for any particular proposal, I restored it to consideration.)

            In practice, even many of those proposals won’t make it in – some are mutually exclusive with one another by definition, others are just going to turn out not to be suitable as we try to work them into more solid proposals through the SRFI process. (I really wanted restarts a la CL, but the conclusion of the process that led to SRFI 255 was, at least for me personally, that if a RnRS were to incorporate them, we’d need a lot more experience with them in a Scheme context first. I, personally, won’t be pushing for SRFI 255 or 249 in the standard any more, and I think Wolfgang has much the same view now he’s worked the idea into a nearly-final SRFI.)

            Things that will be in R7RS Large fall into a small number of categories:

            • Existing, popular SRFIs, usually with some modifications to improve the consistency and coherency of the final language. This means things like SRFI 1 (Olin’s list library), SRFI 133 (Riastradh’s vector library, modified for R7RS small compatibility) – but, e.g., both of those will be further modified at least because there’s a silly inconsistency between them which should really be resolved so the final R7RS Large language is consistent with itself.
            • Relatedly, refined/revised versions of some R6RS features which fix problems with some of R6RS’s less successful experiments.
            • Finding standard APIs for things which many Scheme implementations have already provided in some form for years, but incompatibly with one another. Delimited continuations fall in this category, for example.
            • Things that nearly every other Lisp dialect and/or functional programming language has in its core library, which can be implemented as a portable library in terms of core language features for R7RS Large. Persistent mappings fall in this category. Restarts would have fallen in this category as well, but as mentioned, we ended up not being certain they would work very well in Scheme.

            Also bear in mind that core language features in particular are subject to a rule which requires at least three popular Scheme implementations to have independently provided the feature integrated into their own cores. There are still a couple of open proposals which don’t quite meet this criterion, but I’m going to get a lot more strict about it, and start cutting out features which still don’t meet it as ratification approaches in a couple of years.

            It seems no matter what we do to reduce the scope of R7RS Large, and to ensure that only battle-tested features and libraries make it in, we keep getting this accusation thrown at us. It’s extremely tiring, especially coming from a community like Chicken’s whose interactions with the WG have been consistently hostile and toxic and which has actively resisted engaging with us.

            (I have deleted the comment at the top of this thread, which it was unfair of me to post publicly even in a personal capacity, and sent Felix a private email explaining my frustrations.)

            1. 8

              It seems no matter what we do to reduce the scope of R7RS Large, and to ensure that only battle-tested features and libraries make it in, we keep getting this accusation thrown at us. It’s extremely tiring, especially coming from a community like Chicken’s whose interactions with the WG have been consistently hostile and toxic and which has actively resisted engaging with us.

              I think “hostile and toxic” are the wrong words here. Negative feedback would perhaps be the better term. Your reaction indicates that this is a very personal issue for you, and I want to point out that any judgement over the current state or the general direction of R7RS-large was never intended to offend you personally. Perhaps it would help if you take a step back and try to see this as the pure matter of language design that it is. What you are trying to achieve is hard, so hard that I would never even try. That’s why I prefer implementing languages, not designing them. R4RS and R5RS were really excellent examples of a language standard. R6RS was a disaster, and R7RS-small has its warts but is small enough to still catch the spirit of Scheme while adding a few long-needed features. But I fear R7RS-large repeats the same mistakes that R6RS made. And as an implementor I have to live with the decisions the standards committee makes, have to make it efficient and practically usable, and explain to users why things are the way they are (or not). Try to see that side of it too, if you can.

              A final word of advice: as the chair of a language standard committee you will have to learn to live with criticism. And I don’t envy you, as it is impossible to make everybody happy.

              1. 2

                Note for the public record: I replied to this comment, but got an email from Felix in reply to my above-mentioned email shortly after doing so. I think his email makes my original reply to that comment obsolete, so I have deleted it.

            2. 2

              When I became chair, the first thing I did was to massively cut down the scope of proposals under consideration. From around 140 proposed features on John Cowan’s agenda which didn’t already have SRFIs, I reduced it to less than 40. (Even then, the only reason it remained as many as that is because I originally proposed an even more radical cutback to the WG, but if any single person spoke up for any particular proposal, I restored it to consideration.)

              FWIW, I wasn’t aware of that. That’s good news, at least.

              I have a hard time keeping track of the developments. I’m subscribed to srfi-announce and srfi-discuss. Are there other low-volume places where “general” announcements are made?

              1. 2

                The Scheme-Reports mailing list: https://scheme-reports.simplelists.com

                1. 1

                  hm, I forgot that one, but I’m also subscribed to it, it seems. I only see one post in the past half year about the macrological fascicle draft.

            3. 1

              Do you think R7RS will have whatever is required so that I could write a project in R7RS, also using some libraries that claim R7RS support, and have it compile/run correctly on all Lisps with R7RS support?

              I remember this being something that was curiously absent from other scheme standards (maybe a module system was missing? I forget the details).

              1. 3

                It depends on what your project needs to do – operating system interaction will likely remain too underspecified for more advanced purposes – but in general, yes, that’s the goal. (I expect someone will come along and provide a quasi-standard POSIX library that works on a bunch of implementations, but I’m not willing to commit the standard that hard to one OS.)

                FWIW, a library system was added by R6RS, and a slightly different one by R7RS small. R7RS large will include them both. You can already do quite a lot with these.

                1. 1

                  What’s the reason for offering both library systems?

                  1. 1

                    So that an R7RS Large program or library can directly depend on R6RS libraries, provided the (rnrs (6)) libraries are installed on the system, as well as on R7RS small libraries.

            4. 1

              Thanks for the interesting summary!

          2. 3

            I do think that there should be more consensus around SRFIs and an effort (and, of course, I’m not volunteering, so why should anyone else?) to revise the SRFI process such that it becomes “more standard”, with fewer overlapping libraries, etc. It’d be great if SRFI implementations were in a package repository — I’m thinking like Golang’s https://pkg.go.dev, etc. A compatibility matrix for each active Scheme implementation would help too.

            A small standard plus quality, widely usable SRFIs would be a much more palatable position to be in.

            (Of course, SRFI consensus would be rather hard — there’d have to be a laid-out process, etc.)

            1. 1

              It’d be great if SRFI implementations were in a package repository — I’m thinking like Golang’s https://pkg.go.dev, etc.

              There’s snow, which attempts to be that. AFAIK accu and Chibi have a way to use these libs, and with snow2-client you can install these packages in other Schemes as well. It could be worthwhile making this a bit more first-class in CHICKEN 6 via chicken-install. OTOH, it might also be a bit confusing to users having two places to get libraries from…

              1. 1

                Right. Package repositories exist. My point is that if you want to make the SRFIs the “-large”, it should be obvious how to get the SRFI implementation for your Scheme. Maybe that just means that Snow-client is an SRFI and there’s an SRFI for it.

    3. 2

      (n00b CS question)

      Is there anything interesting to be done by treating the binary embedding as a Very Big Number? E.g. take the 1024-dimension array [0, 1, 1, 0, 1, ...] and “join” it so that it becomes a 1024-bit unsigned integer. I’ve also wondered the same thing about Conway’s Game of Life.

      1. 6

        if it helps you think about this, there’s an old joke in ML that goes something like this:

        q: how many parameters do you need to, say, capture all the modern power of an LLM?

        a: 1; you can store infinite amounts of information in a single rational*.

        *of course the same is also true of integers; but most people don’t consider operations on integers to be differentiable :)

        1. 3

          LoL, the actual amount of truth in that answer is hilarious. It’s not just true in theory, we actually use it in practice with arithmetic coding: literally encoding an arbitrary amount of data as a single stupidly precise rational number.

      2. 3

        You can run binary or / and / etc ops on that unsigned integer and get fast similarity-style query answers, but that’s not really taking advantage of its integer nature, more the fact that integers are stored in the right binary format for that kind of thing to be convenient.
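
        That trick can be sketched in a few lines of Python (helper names are mine): pack each 0/1 vector into one big integer, then XOR plus popcount computes Hamming similarity in bit-parallel fashion.

```python
def to_int(bits):
    # Pack a list of 0/1 coordinates into one big Python integer.
    n = 0
    for b in bits:
        n = (n << 1) | b
    return n

def similarity(a, b, dim):
    # XOR leaves a 1 exactly where the two vectors differ;
    # counting those bits gives the Hamming distance.
    differing = bin(a ^ b).count("1")
    return dim - differing

a = to_int([0, 1, 1, 0])
b = to_int([0, 1, 0, 0])
assert similarity(a, b, 4) == 3  # the vectors agree in 3 of 4 coordinates
```

        On real hardware the same idea runs over 64-bit words with native XOR and popcount instructions, which is why binary embeddings make similarity scans so cheap.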

      3. 2

        It’s not an especially useful mapping, because it assigns vastly greater weight to the earlier coordinates, but that difference isn’t meaningful, as the order of coordinates in an embedding is arbitrary.

        Another way of putting it is that it reduces the embedding to one dimension, but the power of embeddings comes from their very high-dimensional nature.

      4. 2

        In SICP (2.1 I think) there’s an exercise where you encode different pieces of data as 2^x · 3^y · 5^z etc., making a giant number which you can then break back down into the relevant information. Such changes of representation can make certain operations easier (e.g. logs vs. exponents vs. fractions). Which operations could be cheaper here?
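
        The SICP exercise multiplies prime powers rather than adding them, which is what makes the fields recoverable; a rough Python sketch (function names are mine, not SICP’s):

```python
def pack(x, y, z):
    # Goedel-style encoding: exponents of distinct primes cannot
    # interfere, so one integer carries three numbers.
    return 2**x * 3**y * 5**z

def unpack(n, prime):
    # Recover an exponent by counting how many times `prime` divides n.
    count = 0
    while n % prime == 0:
        n //= prime
        count += 1
    return count

n = pack(4, 7, 2)
assert (unpack(n, 2), unpack(n, 3), unpack(n, 5)) == (4, 7, 2)
```

        Note that extraction costs repeated division, so on the “which operations could be cheaper” question this encoding mostly loses to a plain tuple; its interest is theoretical.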

    4. 13

      Newsboat

      There’s a shocking number of paid aggregators which do everything online. I don’t understand the use cases. I’m perfectly fine with newsboat, though it needs a few small features, namely a way to view/edit feeds from within newsboat (an alias works fine for now). The devs are quite responsive to bug reports, so maybe I’ll try requesting such a feature!

      Newsboat updates when you want and persists everything, so you can read offline etc. What else do you need?

      1. 5

        What else do you need?

        If I understand the original request correctly: maintaining state across several browser sessions and/or devices. e.g. if you read today’s XKCD on your laptop, it’s marked read on your phone and desktop too.

        1. 3

          Yeah, RSS is my version of scrolling social media, so I read it 50/50 on my phone and PC.

      2. 3

        I’m on the newsboat boat too.

        a way to edit feeds within newsboat

        So, like an internal editor and not just SHIFT+E to open the URL file in $EDITOR?

        1. 3

          Precisely, thank you for sharing! I now have no complaints about Newsboat.

      3. 1

        I’m also a newsboat user (from back when it was newsbeuter) but I’m considering a UI/browser based one just because there’s so much visual content.

    5. 19

      I’m an XMPP fan because, warts and all, it’s an open standard with multiple implementations and it’s super lightweight. I use Dino on desktop and Monocles on my phone, and I run a Prosody server off a VPS. The server has never gone down, ever. It’s a small deployment, though. Bridged to Slack with a few friends. I ran Matrix before, but it was a tad slow for my use case.

      All the protocol criticisms are mostly accurate. And the majority of clients still lack features I miss. But honestly I’m about to give my Mom (a 70-year-old woman) a login to my server, because she and I chat on Slack all day, and I think she’d be able to make the jump, and I want to own those conversations for posterity <3

      Many are familiar with the blow that big tech gave XMPP when they stopped federating. I’m sure they had their reasons. But I also have a half-baked sociological theory: after 2010, hatred of XML was on the rise. Especially XML network protocols. Can you imagine anything less sexy?

      And I do think XML is a problem for XMPP long term. A couple of months ago I had some idle daydream about working on a CapnProto->XML translation library. https://github.com/capnproto/capnproto/discussions/2137 Not because I think CapnProto should become XMPP 2.0, but because I thought maybe if there were a bunch of canonical protocol translation libraries it would help client implementers write bridges or storage implementations or whatever. I thought about it for an afternoon then went and did other stuff (ADHD, fwiw).

      1. 15

        But I also have a half-baked sociological theory…

        As a lover of half-baked sociological theories, I think that it’s a much simpler story of embrace-extend-extinguish. :-P Once Google Chat got enough of a user base, they didn’t need to federate with anyone anymore, and then once everyone was using GMail anyway, Google Chat didn’t need to exist to draw in new users anymore. Simple as that. Android phones provided a better avenue for a captive audience anyway.

        1. 12

          @icefox’s Tenth Law: Never attribute to anything else what can be explained by embrace-extend-extinguish.

          It gets confirmed time and time again.

        2. 6

          Federation was never a user-acquisition strategy for Google. They had most of the users already. It’s just that Google back then believed in sometimes doing the right thing because it was right. Leading the way. But then they moved into their kill-every-project-after-6-months era and it died just like almost every other Google service.

          Overall it was positive for the network. I don’t think most of my family would be users today if they hadn’t got a taste with Google at the time. Would have been nice if it stuck around, but no ill will from me

          1. 9

            They had most of the users already

            For anyone who doubts this: Google integrated Google Talk (XMPP) with GMail. Anyone who had a GMail account automatically had a Google Talk account. Only a small fraction ever used it, but GMail was on well over a hundred million active users and XMPP was on around five million. And GMail was rapidly growing.

          2. 1

            Honestly, extremely good point.

      2. 9

        And I do think XML is a problem for XMPP long term.

        Does this really matter?

        Today the web works by passing unbelievable amounts of unnecessary data, yet no one bats an eye.

        There are web services that could be replaced by a simple strcat(). Is XML really some kind of a problem, especially when it can be simply compressed? (A lot of binary formats are really zipped XML, for example MS Word documents, Excel spreadsheets, 3D models in various formats, etc.)

        1. 2

          Might just be referring to how XML is a verbose and poorly-specified standard and usually not as portable between parsers as other markup standards.

          1. 1

            Maybe — the XML used by XMPP is not exactly completely standard or proper usage itself. See also @david_chisnall’s comment downthread a bit.

            1. 5

              Haven’t scrolled down that far, but I’m assuming you’re referring to how it was plainly designed by somebody looking at the XML spec, then at the SAX parser spec, and then saying out loud “I have a cunning plan, sir”.

            2. 1

              Different applications also supported different feature sets. Keeping up with ever newer supersets also stretched many players’ resources too much: https://alexalejandre.com/programming/xmpp-open-messaging-standard/

              A more rigid core standard, actively keeping up with new technology, may have delivered XMPP’s federated dream. In reality, each server supported a different feature set, while lacking common features every other chat app offered. As mobile technology transformed, XMPP stayed still. Pidgin worked on Google Talk, before an XEP brought (necessary) OIDC authentication and those 3rd-party clients couldn’t keep up.

      3. 3

        And the majority of clients still lack features I miss

        Could you list a few of your highest priority ones for those of us working to improve this situation?

        1.  

          Hey, I actually had to meditate on this a while.

          One thing I don’t know how to do is edit messages further back from my most recent message. This is with the full understanding that I can’t expect those edits to propagate to all federated clients. However, I’m wondering if there’s support for this, even in principle.

          The other concern is more vague and is really about a consistent UX around rich text authoring and rendering. It’s just not as nice to try and format multiple lines, bullet points, or code blocks in the clients I’m using.

          I’ve never tried Snikket, fwiw. Also, I’m actually happy with my XMPP experience.

          1.  

            Yes, recent releases of Cheogram Android for example support editing any of your messages as far back as you like.

            So you want a rich text UI with e.g. a formatting toolbar, not just some interpretation of * as bold etc. Do you think that makes sense on mobile too, or is it more of a desktop UX in your opinion?

      4. 2

        she and I chat on Slack all day

        Mattermost might be worth a look then (admittedly another service to run) because it’s basically self-hostable Slack. Has a decent user experience and the (iOS at least) mobile client is pretty good.

    6. 2

      When you finish the survey, mousing over the star makes it close its eyes in a cute way. Great touch!

    7. 8

      In Russia, Pascal is still the main introductory language, used e.g. in high schools! Many companies in the former USSR have some production systems in it, alongside Java, Go and C#.

      1. 1

        Curious about the reasons for Go being on this list. It’s my preferred language nowadays, but not very appealing in general for consultancy companies

        1. 2

          Why is Go not appealing for consultancy companies?

          1. 1

            Maybe it’s a regional thing, but usually I see more “corporate” languages, like Java and C#, being preferred by consultancy companies.

      2. 1

        They have also developed https://pascalabc.net/en/ for teaching purposes. The language is still good old Pascal, but it borrows many features from C#, including automatic garbage collection and the ability to declare variables anywhere, not necessarily in the dedicated var section.

    8. 19

      This feels like that time Paul Graham said over 25% of ViaWeb code was macros and it was intended to be a boast but everyone who saw it was just horrified.

      1. 3

        Found a source.

        It was apparently Lisp macros, which is at least a little less cursed than pre-processor ones (AFAIU Lisp).

        1. 1

          Eh, you can still do very cursed things with reader macros, which allow you to write non-sexp language features.

        2. 4

          I’ll let you read that again:

          over 25% of ViaWeb code was macros

          1. 5

            I don’t see the problem. He made a DSL in Lisp, so the actual code was smaller/denser, describing the problem space well. In this particular case, the users would basically be creating configs, similar to knowledge bases or constraint lists describing what they wanted. The macro-heavy code would then transform that into an end result. Isn’t that good?

            1. 9

              Graham has been riding on his “we built an ecommerce site in Lisp and it was so awesome” for nearly 30 decades now. Sadly the followup was Hackernews.

              1. 5

                I’m not sure if you’ve ever had the chance to use Yahoo! Store (what Viaweb became)—it was terrible—but it was also extremely innovative for the time. It used an actual programming language, RTML, which you edited in the browser. It was sandboxed, and built on continuations (for which Graham received a patent).

                So, yeah. Maybe he’s been bragging about this for 30 years, but this success ultimately paved the way for YC which changed the world (I won’t make a value judgement on which way).

                1. 8

                  I’d posit that Graham, like a lot of other people, was lucky to be in the right place at the right time. Sure he and his team worked hard - creating a company isn’t easy - but lots of people made a lot of money in the first internet bubble, and the smarter ones kept the money and reinvested it. Running a VC firm was essentially a license to print money for a while. And YC itself “just” picks companies to invest in; they don’t run the companies themselves.

                  In other words, while PG attributes a lot of his success to the use of Lisp for Viaweb, it was probably not a game changer. The real product was making a web storefront, making it a success, and selling it to ~~idiots~~ investors with more money.

                  1. 4

                    I think Graham himself attributes using Lisp as helping to get to the finish line first over the competitors. And it’s a fair point as it appears to have worked for him. But there is little doubt that user editable online stores would have appeared otherwise and I don’t think he was ever alluding to that.

                  2. 2

                    While every exit has some luck involved and some “right place right time,” there likely weren’t other storefront builders in the same way Viaweb existed. That’s something.

                    Additionally, the choice of Lisp was definitely part familiarity, but also, he’s told stories as to why it was advantageous. He wasn’t skilled in running software, so their deployment, or bug fixes, or reporting were “run this thing in the repl.” You can’t do that outside of Lisp—maybe Erlang.

                    As for YC, it’s never been a traditional VC, and it’s only scaled up as a result of iteration on the model. The first group was like 6 companies. They all lived in Boston. They all had dinner every Sunday (or whatever), they all got introduced to other investors, etc. YC built a community of like-minded startup founders, and they know each other, and when possible, help each other. Does that happen in traditional VC? Probably to some degree. YC took it further, and it should be considered an innovation.

                    Note: I am not a Paul Graham Stan. I do think the man wasn’t “just lucky,” and actually created things that people wanted.

              2. 3

                Either 3 decades or 30 years, not 30 decades. 30 decades ago was 1724 ;)

                1. 7

                  You’re correct, it only feels like he’s been banging the drum for 300 years…

                  1. 4

                    I can happily claim that I have never read any of his essays or whatever. I think I made the right choice.

              3. 1

                In terms of how the software works, hacker news is one of the best sites I’ve used … like ever

                I mean lobste.rs was directly inspired by it, and judging by my logs hacker news drives 5x to 10x the traffic

                So oddly enough the Lisp way of writing web apps actually did and does work. I think they had to do a bunch of work on the runtime too, but so did stack overflow, instagram, etc

            2. 3

              Well, I think the point here is largely tongue in cheek, but there’s truth in it too, so I’ll focus on the truth; Yes, building layers is good, declarative code is good, interpreting specifications is good. But none of these things necessitate excessive amounts of metaprogramming.

              My personal experience is that I dig myself into metaprogramming holes when I spend too little time thinking about simpler ways to achieve my goals, so I’ve developed a reflex of stopping myself and reflecting a little whenever the urge to metaprogram comes. So, naturally, when somebody says their codebase is 25% metaprogramming, the same reflex kicks in.

              1. 4

                A good deal of standard Common Lisp language constructs are actually macros and nobody flinches using them. The language integration for that is really good. So I agree with u/veqq, it’s a nothingburger in general, although naturally a question of taste here applies just as much as to programming in general.

              2. 4

                Metaprogramming is as normal as defining functions in Lisp. In Common Lisp, core things like when, and, loop, dotimes, let, defun, defvar, defmacro or cond are macros. Code is data is code. Perhaps you are thinking of metaprogramming in other languages, where it has its own strange system and syntax and complects the system.

                Honestly, thinking about it, 25% must mean new macro definitions as typical code probably involves more macros (just from using control flow etc.).

    9. 46

      There is already a widely used protocol called RTP, so the name is far from ideal.

      1. 1

        weird that it doesn’t have a registered url scheme or service name

        1. 22

          RTP doesn’t use URLs or a fixed port number.

        2. 12

          That’s because it’s mostly used as a building block for other things. For example, SIP or Jingle (over XMPP) will handle session initiation for RTP streams, so the equivalent of a URI is in the session-initiation protocol and there is no well-known port because it’s dynamically negotiated by that protocol. Similarly, RTP is used in WebRTC where the addresses are typically the HTTP endpoints of the control programs and the port is again dynamically assigned.

          RTP isn’t quite as widely used as HTTP, but it’s incredibly common and, among other things, is supported by all mainstream web browsers. It’s not exactly niche.

        3. 12

          Because it’s not that type of protocol, it’s a generic transport protocol like TCP.

      2. 1

        Don’t forget the other RTP protocol (real-time payments).

    10. 2

      I like the distinction they make between programming languages and programming systems.

      One place I’ve noticed this is database management systems (DBMSs). I used to think of Postgres as an SQL engine (language-oriented) until I started learning about all the administrative features it offered. For example, you can write a query that returns all of the tables in the database. It’s more a database computing environment; a programming system.

      1. 5

        Yeah.

        It occurred to me recently that a traditional SQL DBMS has more in common with image-based programming systems like Lisp and Smalltalk than with most other common programming systems. In an image-based system you are programming by mutating a working system in place – I suppose other more recent examples are Erlang / OTP and Jupyter notebooks. But more usually we treat programming in batch style, where the working system is compiled from source and redeployed more or less from scratch. And I think a lot of the dislike for pushing logic into the database is due to the mismatch in style between image-based and batch-oriented deployment models.

      2. 2

        Extending this, you can have Postgres handle your rate limiting logic, communication between services and a lot of other such plumbing. It’s really quite magical.

    11. 3

      There is also a 2017 paper: https://cs.utah.edu/~blg/resources/type-tailoring.pdf which I do not understand (terse notation without clarification) and a blog post by one of the authors sketching out the paper: https://lambdaland.org/posts/2024-07-15_type_tailoring/

    12. 8

      It’s not quite the same but the footnote reminded me of Forth, where I first heard of threaded interpreters in the 1980’s, and this is an older reference (1973) https://dl.acm.org/doi/pdf/10.1145/362248.362270 to threading that Feeley and Lapalme might have been aware of.

      1. 2

        Very nice find!

      2. 1

        “In software it is realized as interpretive code not needing an interpreter.” that sounds bold!

    13. 15

      I’ve only played with it briefly, but I like that Unison is innovating “around” the programming language, i.e. in tooling, deployment, sharing, and so on. It’s a more holistic view of software than I’m used to.

      Does anyone have significant experience with Unison that can comment on how it’s working out for them?

      1. 5

        https://archive.org/details/plan9designintro begins by describing all of this as the operating system, tools which help you compute better/easier. While it focuses more on OS, it is a very interesting lens which inspires further innovations and encourages thinking about overall workflows.

        edit: I just remembered internet archive’s down. The book is Introduction to Operating Systems Abstractions Using Plan 9 from Bell Labs. Here is a different link: http://doc.cat-v.org/plan_9/9.intro.pdf

        1. 3

          What does Plan 9 have to do with Unison?

          1. 2

            Nothing. As I wrote

            by describing all of this as the operating system, tools which help you compute better/easier

            i.e.

            innovating “around” the programming language, i.e. in tooling, deployment, sharing, and so on

            the introduction to the book presents a rather fresh perspective on computing. Plan9 has nothing to do with this, besides being the medium the author uses after the introduction to explore these ideas, which Unison is exploring in a different way.

            1. 2

              Interesting parallel. Would not have occurred to me!

    14. 8

      There is an underlying assumption here that you might get one or two high performers but they’re going to leave and managing software development is more like managing a McDonald’s where you turn over fungible workers regularly.

      Imagine starting from the other side with a skilled group of programmers and removing causes of them leaving (largely bad managers and management, and bureaucratic toil). Then this calculus starts looking very different.

      1. 2

        Even then, your heroic programmer could get hit by a bus.

        But, how does one start with a skilled group of programmers? You mean to start a business, and go compete in the marketplace? That does happen, but programming skill and business skill (and motivation) don’t coincide or even align as often as we’d like. Moreover, in highly consolidated markets the incumbent giants are hard to compete with. The golden exit strategy for most startups still remains being swallowed by a giant, which puts one back in the area of this underlying assumption that you dislike.

        1. 2

          how does one start with a skilled group of programmers

          Start with an uncommon language, Clojure, Haskell, Common Lisp, Elixir or such (is Uiua up to it?). You’ll probably have a good friend group of highly competent developers by spending time in a hipster language, but you’ll also get overqualified people applying who just want to work with that language!

    15. 1

      this is not in the pdf, but i think this might be an appropriate place to ask because it’s about lisp style, why is the closing bracket usually on the same line in lisp? i tried using guix as a second package manager and it was the weirdest thing about it for me, e.g.

      (define (english-plural n str)
        (if (= n 1)
          str
          (concat str "s"))) ; i have to count them! and if i want to add something i have to edit this line too
      ; instead of
      (define (english-plural n str)
        (if (= n 1)
          str
          (concat str "s")
        ) ; visible where each form ends
        ; i can add something here without touching other lines, so it's easier and git diffs are cleaner
      ) ; so it seems to me to be more convenient, so why not?
      
      1. 2

        Your editor will make closing parentheses when you make the opening, but consider that we don’t even type the opening parentheses (and barely see them). Use Paredit! https://calva.io/paredit/ or http://danmidwood.com/content/2014/11/21/animated-paredit.html It lets you directly move across levels on the AST. Learning the more useful hotkeys costs perhaps 1-2 hours (but is a huge indicator of success with lisp and helps you get more out of it.)

        Aesthetically, a nice final ))))))) saves space, which I like. I always preferred this style in other languages, so was happy and never thought about it in Lisp. (Pascal’s end is cool, though.)

        Structures/operations (tend to) happen at the beginning of a line and have a specific shape. Where they end is indicated (to us humans) by indentation, which the IDE largely handles.

        I don’t think it’s so relevant for Guix, though (since you won’t use so much of it.)

    16. 2

      I am reading through Eli Bendersky’s 5-part blog series on the Raft distributed consensus algorithm.

      I like his writing style, it’s very understandable.

      1. 2

        https://visual.ofcoder.com/raft/ is a lovely saunter (no code, just the algorithm/processes)

        1. 1

          This is a very nice graphical representation. Thank you very much! PS: I didn’t come that far with the 5-part series, still 3 to go :)

    17. 4

      Some good comments from other discussions:

      Sign and date your comments!

      Our software vendor uses VCS, but has a long history of signing and dating code sections with paragraph explanations. They’ve been developing the same codebase since the 70’s and release the uncompiled source along with the precompiled binaries. Their documentation is horrendous, but the source and comments help out quite a bit.

      I recently configured my editor to expand “todo”/ into “TODO | – $name, $date” (where | is where the caret ends after expansion).

      and

      One small point of disagreement I have with this document is on p. 13: “and and or for boolean value only”. Interestingly, they don’t show an example of what to write instead of non-boolean or, and I think that’s perhaps because it’s a bit involved. The naive expansion of (or A B) would be (if A A B). But A might be a large or expensive subexpression, in which case you should write something like (let ((a A)) (if a a B)) — or, if your style guide insists you should avoid nil punning altogether, (let ((a A)) (if (null a) B a)).

      This is a common enough idiom to deserve an abstraction. I suppose you could define another macro for it — maybe call it otherwise — and reserve or for booleans, as Norvig and Pitman recommend. But or does the same job, and at least in some subcommunities, there is already a long tradition of using or for this purpose.

      The argument in the case of and is weaker. As the document shows, instead of (and A B) you can write (if A B nil). I think the former is easier to read than the latter, but this is more a matter of taste and what one is accustomed to.
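      The (let ((a A)) (if a a B)) idiom described above can be packaged as the otherwise macro the comment proposes. A sketch (otherwise is the suggested name, not a standard Common Lisp operator; gensym keeps the temporary variable from capturing names in B):

      ```lisp
      ;; Naive expansion of (or A B) would be (if A A B), evaluating A twice.
      ;; This version evaluates A exactly once, like the real OR macro.
      (defmacro otherwise (a b)
        (let ((tmp (gensym)))
          `(let ((,tmp ,a))
             (if ,tmp ,tmp ,b))))

      ;; (otherwise (expensive-lookup key) default)
      ;; runs EXPENSIVE-LOOKUP once and falls back to DEFAULT on NIL.
      ```

      With this in place, or can be reserved for booleans per the style guide, while the value-selecting idiom keeps a name of its own.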

      p. 39: Of course, ASDF has won the DEFSYSTEM wars, and deservedly so.

      p. 41: I don’t know anyone who still sticks to 80 columns. 120 seems to be the modern standard.

    18. 33

      I disagree with a lot of what Stroustrup says, but I do like one comment from him (paraphrasing from memory):

      There are two kinds of technologies, the ones everyone hates and the ones that no one uses.

      Any useful technology is going to have to make a load of compromises to be useful. I consider it a good rule of thumb that, if you couldn’t sit down and write a few thousand words about all of the things you hate about a technology, you probably don’t understand it well enough to recommend it.

      1. 68

        I’ve come to dislike that comment, and would put it in the same category as “everything is a tradeoff”. It’s a thought-terminating cliche that’s used to fend off criticism and avoid introspection.

        There is such a thing as a bad engineering decision. Not everything was made with perfect information, implemented flawlessly to spec, with optimal allocation of resources. In fact, many decisions are made on the basis of all kinds of biases, including personal preferences, limited knowledge, unclear goals, a bunch of gut feeling, etc.

        And even a technology with good engineering decisions can turn a lot worse over time, e.g. when fundamental assumptions change.

        1. 16

          I agree with you, but I’d like to defend the phrase “everything is a tradeoff” please!

          To me, the natural corollary is that you should decide which set of tradeoffs are best for you.

          All of the things you said are true but you can avoid a lot of pitfalls by being aware of what you are optimising for, what you are giving up, and why that might be appropriate in a given situation.

          1. 5

            You said it already

            being aware of what you are optimising for

            That is better than “everything is a tradeoff” and makes the pithy statement less pithy and more actionable.

          2. 3

            you should decide which set of tradeoffs are best for you.

            (neo)Confucianism teaches that solutions don’t exist, rather just sets of trade-offs. E.g. you can choose to have your current problem, or the problem which will occur if you “solve” it.

            1. 3

              What’s your background with Confucianism? I would say it’s fairly optimistic about human perfectibility. It’s maybe not utopian, but a sage king can rightly order the empire with the Mandate of Heaven at least. Or do you mean more contemporary Confucian inspired thought not classical (c300BCE) or neo (c1000CE)?

              1. 3

                Neoconfucianism (in Chinese, study of “li”) synthesized it with Daoism and Buddhism (the 3 teachings). Wu wei is an important aspect of that li (logos, natural law) channeling the Zhuangzi’s pessimism. Yangmingzi riffs on this, believing in random action (experimenting?) but not consciously planning/acting towards plans. You’re to understand the trade offs etc. and map the different ways destiny may flow, but not act on them. Original Confucianism had a more limited focus (family first) which Zhang Zai extended, by treating everything as a bigger family, allowing Confucian approaches to apply to other domains.

                One 4 character parable/idiom (which means “blessing in disguise”) has:

                1. lose horse - poor
                2. horse comes back with horse friends - richer
                3. break leg - bad
                4. don’t get drafted - good

                background

                Wing-tsit Chan and Lin Yutang made great translations and discussions on the history of ideas in Chinese thought. Though I may read Chen Chun, it’s really through their lenses as my Chinese isn’t yet up to snuff.

                1. 2

                  Okay. I wasn’t sure if by “neo” you meant like Daniel Bell’s New Confucianism.

                  I would say Wang Yangming is pretty optimistic about solutions. He’s against theoretical contemplation, but for the unity of knowledge and conduct, so ISTM the idea is you solve problems by acting intuitively. I’m sure if pressed he would acknowledge there are some tradeoffs, but I don’t see him as having a very pessimistic view or emphasizing the tradeoffs versus emphasizing perfecting your knowledge-conduct.

                  1. 2

                    Thank you, I dug into this a bit deeper. I believe you are right and I have been misunderstanding some aspect of will/intention, which I struggle to articulate. Laozi and everything later building on it do seem to focus on (attempts to) control backfiring. I’m not sure if my pick-tradeoffs lens is a productive innovation or missing the point. (How broadly/narrowly should we apply things?)

        2. 6

          I’m also tired of hearing “everything is a trade-off” for that reason. I definitely like the phrase “thought-terminating cliche”.

          It’s also not true. “Everything is a trade-off” implies that everything is already Pareto-optimal, which is crazy. Lots of things are worse than they could be without making any compromises. It even feels arrogant to say that anything is any kind of “optimal”.

          1. 2

            That was exactly my point, thanks for nailing it concisely.

          2. 1

            that pareto-optimality explanation site is fantastic

        3. 4

          There are such a thing as bad engineering decisions.

          Of course, I think people mostly mean that there are bad approaches, but no perfect ones. (Or, more mathematically, the “better than” relation is often partial.)

        4. 2

          Both sides’ phrases are important and meaningful; yes, people can overuse them, and people can also fail to understand that “changing this behavior to be ‘sensible’” is also a trade-off, as changing behaviour can break existing stuff.

          We can look at all sorts of things where the “trade off” being made is not obvious:

          • lack of safety in C/C++: yay performance! Downside: global performance cost due to myriad mitigations in software (aslr, hardened allocators, …) and hardware (pointer auth, mte, cheri, …) cost performance (power and speed) for everything

          • myriad weird bits of JS - mean lots of edge cases in the language, though in practice the more absurd cases aren’t hit and basic changes to style choices mitigate most of the remainder, so the cost of removing the behavior is unbounded and leaving it there has little practical cost

          • removing “print” statements from Python 3: made the language more “consistent” but imo was one of the largest contributors to just how long the 2->3 migration took, but was also entirely unnecessary from a practical point of view as a print statement is in practice distinguishable from a call

          At the end of the day you might disagree with my framing/opinion of the trade offs being made, but they’re still trade offs, because trade offs are a fundamental part of every design decision you can ever make.

          There’s nothing thought terminating about “everything is a trade off”; claiming that it is is itself thought terminating: it implies a belief that the decisions being made are not a trade off and that a decision is either right or wrong. That mentality leads to inflexibility, and arguably to incorrect choices, because it results in design choices that don’t consider the trade offs being made.

          1. 3

            “changing this behavior to be ‘sensible’” also is a trade off as changing behaviour can break existing stuff.

            But what about the time when the decisions were actually made? What technical, calculated trade-offs did JS make when implementing its numerous inconsistencies, that are collectively seen as design failures?

            claiming that it is is itself thought terminating: it implies a belief that the decisions being made are not a trade off and that a decision is either right or wrong

            I definitely think some decisions can be right or wrong.

            1. 7

              But what about the time when the decisions were actually made? What technical, calculated trade-offs did JS make when implementing its numerous inconsistencies, that are collectively seen as design failures?

              The key tradeoff made in the development of JavaScript was to spend only a week and a half on it, sacrificing coherent design and conceptual integrity in exchange for a time-to-market advantage.

              1. 4

                This isn’t really true. The initial prototype was made in 10 days but there were a lot of breaking changes up to Javascript 1.0 which was released a year later. Still a fairly short time frame for a new language but not exactly ten days.

              2. 3

                I often wonder what development would be like now if Brendan Eich had said “no, I can’t complete it in that time, just embed Python”.

                1. 1

                  I don’t think Python even had booleans at that point. IMHO the contemporary embedding language would have been Tcl, of all things!

                  1. 1

                    The starting point was Scheme, so that would probably have been the default choice if not implementing something custom.

                    1. 2

                      At least there the weird quirks would’ve (hopefully) gotten fixed as bugs, because it has an actual language spec. OTOH, it might also have begotten generations of Scheme-haters and parenthophobes, or Microsoft’s Visual Basicscript would’ve taken off and we’d all be using that instead. Not sure what’s worse…

                  2. 1

                    When was Microsoft adding VBScript to IE?

              3. 1

                I’m not sure there’s an actual trade-off there. Don’t you think it’s possible to come up with a more coherent design in that timeframe?

                1. 2

                  Unlikely given the specific constraints Eich was operating under at the time.

            2. 1

              But what about the time when the decisions were actually made? What technical, calculated trade-offs did JS make when implementing its numerous inconsistencies, that are collectively seen as design failures?

              Some behaviors are not the result of “decisions”, they’re just happenstance of someone writing code at the time without considering the trade offs because at the time they did not recognize that they were making a decision that had trade offs.

              You’re saying there are numerous inconsistencies that were implemented, but that assumes the inconsistencies were implemented deliberately, rather than arising as an unexpected interaction of reasonable behaviors. Without knowing the exact examples you’re thinking of, I can’t speak to anything.

              I definitely think some decisions can be right or wrong.

              With the benefit of hindsight, or with a different view of the trade offs. Do you have examples of things where the decision was objectively wrong and not just a result of the weight of trade offs changing over time, such that the trade offs made in the past would not be made now?

              1. 1

                A good example that happens all the time in the small is doing redundant work, mostly because you’re not aware it’s happening. Cloning data structures too often, verifying invariants multiple times, etc. I’ve seen a lot of cases where redundancies could be avoided with zero downside, if the author had paid more attention.

      2. 21

        This makes me think how beautiful it is that crypto developers have managed to make NFTs into not only something everyone hates but something nobody uses, at the same time.

      3. 21

        The quote you refer to is: “There are only two kinds of programming languages: those people always bitch about and those nobody uses.”

        Another good one is “For new features, people insist on LOUD explicit syntax. For established features, people want terse notation.”

        1. 6

          Another good one is “For new features, people insist on LOUD explicit syntax. For established features, people want terse notation.”

          I’ve never heard this one before! It’s really good.

          1. 3

            Here’s the primary source on that one: https://www.thefeedbackloop.xyz/stroustrups-rule-and-layering-over-time/

            Looks like there’s some sort of error though. I’m on my phone so I’m not getting great diagnostics hah.

      4. 9

        I consider it a good rule of thumb that, if you couldn’t sit down and write a few thousand words about all of the things you hate about a technology, you probably don’t understand it well enough to recommend it.

        As a core element of good critical thinking, one should hypothetically be able to write such a criticism about anything they are a fan of. In fact, I encourage everyone to try this out as often as possible and push through the discomfort.

        Notice I used the dreaded word “fan” there- which is the point of this comment: There should be a key distinction between someone who is a “fan” of a technology based on a critical evaluation of its pros and cons and someone who is a “fan” of a technology based on a relatively rosy assessment of its pros and a relatively blind assessment of its cons.

        I think the OP blogger is really complaining about the latter. And, all other things being equal, I believe a developer using a technology chosen via critical assessment by a fan will always lead to superior work relative to a technology chosen via critical assessment by a non-fan. The fan, for example, will be motivated to know and understand things like the niche micro-optimizations to use that don’t make the code less readable (I’m thinking of, for example, the “for” construct in Elixir), and will likely use designs that align closer to the semantics of that particular language’s design than to languages in general.

        One of the reasons I left Ruby and went to Elixir is that the “list of valid and impactful criticisms” I could come up with was simply shorter (and significantly so) with Elixir. (Perhaps I should blogpost a critical assessment of both.) And yes, I went from being a “fan” of Ruby to a “fan” of Elixir, but I can also rattle Elixir’s faults off the top of my head (slowish math, can’t compile to static binary, complex deployment, depends on BEAM VM/Erlang, still a bit “niche”, functional semantics more difficult to adopt for new developers, wonky language server in VSCode, no typing (although that’s about to change somewhat), not as easy to introspect language features as Ruby, etc.)

        The other point I’d like to make is that even though “everything is a compromise,” there are certainly locally-optimal maxima with the more correct level of abstraction and the more correct design decisions. Otherwise we should all just code in Brainfuck because of its simple instruction set or in assembly because of its speed.

        1. 6

          I think the distinction would be that I wouldn’t really call you a “fan” of Ruby or Elixir if you’re making these considered decisions, weighing the trade-offs, and considering whether they’re appropriate more-or-less dispassionately. You can certainly like languages, but I think if you call someone a “fan” of something, there’s an implication of a sort of blind loyalty. By analogy to sports fans, where a fan always supports their team, no matter who they’re up against, a fan of a particular technology is someone who always supports their particular technology, rails against those who criticize it, and hurls vitriol against “opponents” of their tool of choice.

          1. 4

            Alright. Interesting distinction/clarification that gave me an idea.

            So, in thinking of my Apple “fandom” that has been pretty consistent since I was 12 in 1984 when my family was simultaneously the last one in the neighborhood to get a family computer and the first ones to get a Mac (128k) and I was absolutely fucking enthralled in a way I cannot describe… which persisted through the near-death of 1997 and beyond into the iPod and iPhone era…

            I think it has to do with “love”, frankly. If you “love” something, you see it through thick and thin, you stick around through difficulties, and you often (in particular if you contribute directly to the good quality of the thing, or the quality of its use, or its “evangelism”, or its community) literally believe the thing into a better version of itself over time.

            The “likers”, in essence, value things based on the past and current objective value while the “lovers” (the fans) value things based on the perceived intrinsic and future value.

            And the latter is quite irrational and thus indefensible and yet is the fundamental instrument of value creation.

            But can also lead to failure. As we all know. The things the “likers” use are less risky.

            Does this distinction make sense?

            1. 7

              One factor is that many software development projects are much closer to bike sheds than to skyscrapers. If someone is a fan of, say, geodesic domes (as in, believes that their current and/or future value is underrated), there is no reason not to try that in constructing a bike shed — it’s unlikely that whatever the builder is a fan of will completely fail to work for the intended purpose. The best outcome is that the technology will be proved viable or they will find ways to improve it.

              If people set out to build a skyscraper from the start, then sure, they must carefully evaluate everything and reject unacceptable tradeoffs.

              When people build a social network for students of a single college using a bikeshed-level technology stack because that’s what allowed them to build it quickly on zero budget and then start scaling it to millions of users, it’s not the same problem as “started building a skyscraper from plywood”.

              1. 3

                OTOH, no sane architect or engineer would expand a bikeshed into a skyscraper by continuing with the same materials and techniques. They’d probably trash the bikeshed and start pouring a foundation, for starters…

                1. 2

                  Exactly. When managers or VCs demand that a bikeshed is to be expanded into a skyscraper using the same materials, it’s not an engineering problem. Well, when engineers choose to do that, it definitely is. But a lot of the time, that’s not what happens.

        2. 2

          Thank you! The “critical thinking” aspect is I think muddied by the article by setting up an ingroup/outgroup dichotomy with engineers on one side and fans on the other.

          It’s normal to be a fan of something while also being fully aware of its trade-offs. Plus, sometimes an organization’s inertia needs the extra energetic push of advocacy (e.g. from a fanatic) to transition from a “good enough / nobody got fired for buying IBM” mentality into a better local optimum.

          The mindset of “everything is a trade-off” is true but can also turn into a crutch and you end up avoiding thinking critically because oh well it’s just some trade-offs I can’t be bothered to fully understand.

          “Engineers” and “fans” don’t look at the same trade-offs with different-colored glasses, they actually see different sets of trade-offs.

      5. 2

        if you couldn’t sit down and write a few thousand words about all of the things you hate about a technology, you probably don’t understand it well enough to recommend it.

        I would add: you should also be able to defend the options you didn’t choose. If someone can give a big list of reasons why Go is better than Rust, yet they still recommend Rust for this project, I’m a lot more likely to trust them.

      6. 2

        There are two kinds of technologies, the ones everyone hates and the ones that no one uses.

        This is true. I remember people hating Java, C++ and XML. Today I more often meet people hating Python, Rust and YAML. Sometimes it is the same people. Older technologies are well established and the hatred has run out. People have got used to it and take it for what it is. Hyped technologies raise false hopes and unrealistic expectations, which then lead to disappointment and hate.