Threads for losvedir

    1. 32

      I really don’t want to be this grumpy old man, but… can we just stop comparing these two completely unrelated languages? Where are all the Zig vs Haskell posts? Those would make just as much sense. Go is simply not a low-level language, and it was never meant to occupy the same niches as Rust, which makes deliberate design tradeoffs to be able to target those use cases.

      With that said, I am all for managed languages (though I would be lying if I said that I like Go; it would probably be my last choice for almost anything), and thankfully they can be used for almost every conceivable purpose (even for an OS, see Microsoft’s attempts); the GC is no problem for the vast majority of tasks. But we should definitely be thankful for Rust, as it really is a unique language for the less often targeted non-GC niche.

      1. 33

        i would read a zig vs haskell post

      2. 23

        You should understand the context a little bit: Shuttle is a company focused on deploying backend applications in Rust. Backend applications (server programs) are precisely the main use case for Go. It absolutely makes sense for them to compare the two in this context.

        1. 10

          Java is an even more common choice for backend development, but fair enough.

          My gripe is actually the common, hand-in-hand usage of this “rust’n’go” term, which I’m afraid more often than not depends on a misconception around Go’s low-levelness, as it was sort of marketed as a systems language.

          1. 10

            It is a “systems language”… by the original definition where low-levelness was a side-effect of the times and the primary focus was long-term maintainability and suitability for building infrastructural components.

            (Which, in the era of network microservices, is exactly what Go was designed for. Java, Go, and Rust are three different expressions of the classical definition of a “systems language”.)

            That said, there has been a lot of misconception because it’s commonly repeated that Go was created “to replace C++ as Google uses it”… which basically means “to replace Python and Ruby in contexts where they can’t scale enough, so you used C++ instead”.

            1. 5

              Great link, thanks!

              BTW Rob Pike is cited in it where he regrets calling Go a systems language, but that’s probably because he used it in the original sense, and the meaning has drifted.

      3. 5

        Go is simply not a low-level language, and it was never meant to occupy the same niches as Rust, which makes deliberate design tradeoffs to be able to target those use cases.

        Go was originally designed as a successor to C. The fact that it’s been most successful as a Python replacement was largely accidental. It does (via the unsafe package) support low-level things but, like Rust, tries to make you opt in for the small bits of code that actually need them.

        1. 5

          Go was originally designed as a successor to C. The fact that it’s been most successful as a Python replacement was largely accidental. It does (via the unsafe package) support low-level things but, like Rust, tries to make you opt in for the small bits of code that actually need them.

          Go had three explicit, publicized design goals from the start; none of them were “be a successor to C”.

          They were written relative to the three languages in use at Google:

          • Expressive like C++/Python
          • Fast like Java/C++
          • Fast feedback cycles (no slow compile step) like Java/Python
          1. 11

            Go had three explicit, publicized design goals from the start; none of them were “be a successor to C”.

            Rob Pike was pretty explicit that it was intended as a C successor. He said this several times in various things that I heard and read while writing the Go Phrasebook. His goal was to move C programmers to a language with a slightly higher level of abstraction. He sold it to Google relative to their current language usage.

            1. 2

              In that case, I suppose I must defer to your closer experience!

          2. 2

            They really dropped the ball on that expressivity part. It’s as expressive as C, which is.. not a good thing.

      4. 1

        The article is about a DB-backed microservice. I’m curious whether you feel rust or go is inappropriate for this domain.

    2. 3

      C#/Go relative performance surprises me. I always thought that these are very similar languages in terms of runtime characteristics (value types, gc, AOTish compilation model, user-land concurrency), with C# being significantly more mature.

      C#’s throughput for the heavy case and its memory usage seem worse than Go’s, which is unexpected — I’d expect C# codegen and GC to be marginally better.

      For the light case, C# latency percentiles are better, which is very surprising as that seems like the case Go is specifically optimizing for.

      What am I missing from my Go/C# model?

      1. 9

        Glancing quickly at the code, the C# is very unidiomatic (not surprising, since the author admits they’re new to .NET). A lot of that’s stylistic, but the heavy use of mutable structs with getters, rather than readonly structs with direct access, is gonna result in some poor cache performance, and may account for a lot of the discrepancy. (I also suspect some alternative data structures would perform better, but I’d need to check first.)

      2. 3

        At first glance, I had the same thought. But looking closely, it seems that C# and Go had very similar actual performance and throughput results, with worse outliers for C# (GC related, I would guess).

        The weird outliers to me were Swift and Scala, which both seem to get panned quite regularly by people actually trying to use them for the first time. Yet what’s weird (logically irreconcilable) to me is how people who use them all of the time seem to have no major complaints at all.

      3. 2

        I always thought that these are very similar languages in terms of runtime characteristics

        That seems to be reflected by the data. I didn’t have a look at the code, but judging by the readme the OP doesn’t seem to have much experience, at least with .NET, so the result might reflect the level of experience rather than the achievable performance. Anyway, even between different versions of the CLR there are significant performance differences (see e.g. https://www.quora.com/Is-the-Mono-CLR-really-slower-than-CoreCLR/answer/Rochus-Keller), and if you add the framework there are even bigger differences.

      4. 2

        What am I missing from my Go/C# model?

        I think what’s missing is that the repo got traction at somewhat of an unfortunate time, with a few optimization tweaks to the Rust, Go, and Elixir implementations, and none yet to C# or Scala. I first posted it here after completing my naive, unidiomatic implementations in every language, but it didn’t take off.

        I’d say with that version of the code the dotnet and Go implementations were roughly comparable, as was their performance, with dotnet having a slight edge. That’s why I posted on /r/rust asking why dotnet was beating Rust.

        Following that discussion I made two main changes to the Rust which nearly doubled the performance and shot Rust to the top: 1) in the response handler, having the TripResponse and ScheduleResponse use a &str of the underlying Trip and StopTime data, rather than cloning strings, and 2) having the TripResponse Vec and nested ScheduleResponse Vec initialize with the correct capacity, which is known ahead of time, rather than starting empty and growing as I appended new items.

        (Oh, and enabling LTO was another boost.)

        Since I day-to-day program in Elixir and TypeScript, it didn’t really occur to me just how impactful initializing the Vec with the full capacity would be, since that’s not even a thing you can do in those languages. After slapping my forehead and seeing its effect on the Rust performance, I made the same change to Go, and it shot up some 30% in requests per second.
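
        For anyone curious what that looks like, here’s a rough sketch of both changes in Rust (the struct and field names are invented stand-ins, not the actual types from the repo):

        // Hypothetical, simplified stand-ins for the real Trip/StopTime response types.
        struct StopTime {
            stop_id: String,
            arrival_time: String,
        }

        // Borrow the underlying strings instead of cloning them into the response.
        struct ScheduleResponse<'a> {
            stop_id: &'a str,
            arrival_time: &'a str,
        }

        fn build_schedule<'a>(stop_times: &'a [StopTime]) -> Vec<ScheduleResponse<'a>> {
            // The final length is known up front, so pre-size the Vec instead of
            // starting empty and reallocating as it grows.
            let mut out = Vec::with_capacity(stop_times.len());
            for st in stop_times {
                out.push(ScheduleResponse {
                    stop_id: st.stop_id.as_str(),
                    arrival_time: st.arrival_time.as_str(),
                });
            }
            out
        }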

        That’s when someone else re-posted the repo to HN and it took off.

        I expect once I make that same change to C# and re-benchmark, the numbers will be roughly comparable once again. So I think what you’re seeing is the Go and C# implementations have that pretty significant difference right now.

        1. 1

          Aha, thanks, this indeed explains it!

          I’d say with that version of the code the dotnet and Go implementations were roughly comparable, as was their performance, with dotnet having a slight edge

          Is what I’d expect, and it looks like that’s exactly what happened here! That’s a very interesting data point for “time-to-performance”.

      5. 2

        My guess is that C#, being object oriented, likes to use references when not necessary, which has a significant effect on time and memory usage.

      6. 1

        Both Go and C# have value types, which are not address-taken and so don’t incur any penalty for GC but do incur some overhead from copying. Performance in both languages can vary significantly depending on the degree to which you make use of these.
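
        Rust isn’t one of the two languages being compared, but the same inline-versus-pointer trade-off can be sketched in it, since a plain struct in a slice is laid out inline while a boxed one sits behind its own heap allocation (the Point type below is made up for illustration):

        // A small value type: stored inline, copied on assignment, no separate allocation.
        #[derive(Clone, Copy)]
        struct Point {
            x: f64,
            y: f64,
        }

        fn sum_inline(points: &[Point]) -> f64 {
            // Elements are contiguous in memory; each iteration copies a small value.
            points.iter().map(|p| p.x + p.y).sum()
        }

        fn sum_boxed(points: &[Box<Point>]) -> f64 {
            // Each element is a pointer to its own heap allocation, so every access
            // chases a pointer (roughly the cost reference-heavy code pays in Go or C# too).
            points.iter().map(|p| p.x + p.y).sum()
        }

        fn main() {
            let inline: Vec<Point> = vec![Point { x: 1.0, y: 2.0 }; 1_000];
            let boxed: Vec<Box<Point>> = inline.iter().map(|p| Box::new(*p)).collect();
            println!("{} {}", sum_inline(&inline), sum_boxed(&boxed));
        }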

    3. 2

      Continuing my project of rebuilding the same simple transit data app in different programming languages to try to get a feel for them. I’ve finished Elixir, Deno, Rust, and Go, and am hoping to do C# and Java this week, at which point I’ll be done for a while, I think.

    4. 1

      This is very cool! I only just started reading the section 0 intro, but one topic that I didn’t see covered that’s interesting to me about Typescript is gradual typing. How does that fit into a type system? How do the typed sections interact with the untyped, and what kind of options are there for the semantics around that?

      1. 2

        TypeScript’s type system is best effort and is not actually sound. Anders is quite open about this - he wants to create a language that people want to use, and since this is his third successful programming language I’m willing to believe pretty much anything he says about language design. TypeScript’s big advantage here is that it compiles down to JavaScript, which does dynamic type checking, and so any unsoundness in TypeScript’s type system is caught at run time. This lets them focus on making the common cases easy to use, at the expense of making a few really uncommon cases fail. In particular, TypeScript’s idea of equality for recursive types in generics gives up after a certain depth and says ‘sure, these things are the same type’. You might get an exception at run time if they aren’t. The reference says this explicitly on soundness:

        TypeScript’s type system allows certain operations that can’t be known at compile-time to be safe. When a type system has this property, it is said to not be “sound”. The places where TypeScript allows unsound behavior were carefully considered, and throughout this document we’ll explain where these happen and the motivating scenarios behind them.

        Specifically in terms of gradual typing, I believe it just allows any untyped object to be cast to any type. This is fine for the same reason: if you got it wrong (and you can dynamically query whether an object matches a type, so it probably is your fault if you got it wrong) then it will be caught at run time. Here’s a simple example:

        function mk()
        {
            var a : any = {};
            a.foo = 12;
            return a;
        }
        
        function x(obj : {bar : number})
        {
            console.log(obj.bar);
        }
        
        x(mk());
        

        The mk function returns an object of type any, which is an unconstrained type. You pass it to x, where there are some constraints, and TypeScript accepts this, but at run time this will print undefined. If you change the definition of mk to this:

        function mk() : { foo : number }
        

        Now the static type is something that has a foo field that is a number. The type checker will now say:

        Argument of type '{ foo: number; }' is not assignable to parameter of type '{ bar: number; }'.
          Property 'bar' is missing in type '{ foo: number; }' but required in type '{ bar: number; }'.
        

        This is an explicit design choice because TypeScript has to fit in an ecosystem with JavaScript libraries. If you couldn’t use JavaScript from TypeScript before you’d added type information to every single JavaScript function then the language would be unusable. I don’t think that you’d end up here if you were designing a language from scratch. The dynamic checking that TypeScript requires incurs a performance (or, at best, a JIT-complexity) cost but TypeScript can get away with it because it’s running on a JavaScript implementation that is already paying this cost.

        Even with these limitations, with TypeScript Anders and friends have managed to get normal programmers to be enthusiastic about using a language with a structural and algebraic type system, which makes me incredibly happy.

    5. 3

      Love it, thanks! I browse lobsters exclusively on my OLED phone, so I totally dig the black background. Wouldn’t mind less bright text, but I can live with this, too.

    6. 14

      Ha, is the name a reference to Avatar the Last Airbender?

    7. 12

      Is there a summary or a text version?

    8. 5

      This seems fun, and maybe a good tool for building proofs of concept. But I hardly see it as being useful for large projects. Or have I become old and grumpy?

      1. 13

        As a stranger on the internet, I can be the one to tell you that you are old and grumpy.

        Ruby is definitely unusable without syntax highlighting… (Sadists excepted)
        Java is definitely unusable without code completion… (Sadists excepted)
        Whatever comes next will probably be unusable without this thing or something like it.

        1. 9

          I’m confused… Ruby has one of the best syntaxes to read without highlighting. Not as good as Forth, but definitely above average.

          1. 2

            I used to think this way. Then I learned Python and now I no longer do.

            When I learned Ruby I was coming from Perl, so the Perl syntactic sugar (Which the Ruby community now seems to be rightly fleeing from in abject terror) made the transition much easier for me.

            I guess this is my wind-baggy way of saying that relative programming language readability is a highly subjective thing, so I would caution anyone against making absolute statements on this topic.

            For instance, many programmers not used to the syntax find FORTH to be an unreadable morass of words and punctuation, whereas folks who love it inherently grok its stack based nature and find it eminently readable.

            1. 1

              Oh, sure, I wasn’t trying to make a statement about general readability, but about syntax highlighting.

              For example, forth is basically king of being the same with and without highlighting because it’s just a stream of words. What would you even highlight? That doesn’t mean the code is readable to you, only that adding colour does the least of any syntax possible, really.

              Ruby has sigils for everything important and very few commonly-used keywords, so it comes pretty close also here. Sure you can highlight the few words (class, def, do, end, if) that are in common use, you could highlight the kinds of vars but they already have sigils anyway. Everything else is a method call.

              Basically I’m saying that highlighting shines when there are a lot of different kinds of syntax, because it helps you visually tell them apart. A language with a lot of common keywords, or uncommon kinds of literal expressions, or many built-in operators (which are effectively keywords), that kind of thing.

              Which is not to say no one uses syntax highlighting in ruby of course, some people find that just highlighting comments and string literals makes highlighting worth it in any syntax family, I just felt it was a weird top example for “syntax highlighting helps here”.

              1. 3

                Thank you for the clarification, I understand more fully now.

                Unfortunately, while I can see where you’re coming from in the general case, I must respectfully disagree at least for myself. I’m partially blind, and syntax highlighting saves my bacon all the time no matter what programming language I’m using :)

                I do agree that Ruby perhaps has visual cues that other programming languages lack.

                1. 1

                  I’m partially blind, and syntax highlighting saves my bacon all the time no matter what programming language I’m using :)

                  If you don’t mind me asking - have you tried any Lisps, and if so, how was your experience with those? I’m curious as to whether the relative lack of syntax is an advantage or a disadvantage from an accessibility perspective.

                  1. 1

                    Don’t mind you asking at all.

                    So, first off I Am Not A LISP Hacker, so my response will be limited to the years I ran and hacked emacs (I was an inveterate elisp twiddler. I wasted WAY too much time on it which is why I migrated back to Vim and now Vim+VSCode :)

                    It was a disadvantage. Super smart parens matching helped, but having very clear visual disambiguation between blocks and other code flow altering constructs like loops and conditionals is incredibly helpful for me.

                    It’s also one of the reasons I favor Python over any other language where braces, rather than indentation, denote blocks.

                    In Python, I can literally draw a vertical line down from the construct and discern the boundaries of the code it affects. That’s a huge win for me.

                    Note that this won’t keep me from eventually learning Scheme, which I’d love to do. I’m super impressed by the Racket community :)

              2. 1

                For example, forth is basically king of being the same with and without highlighting because it’s just a stream of words. What would you even highlight? That doesn’t mean the code is readable to you, only that adding colour does the least of any syntax possible, really.

                You could use stack effect comments to highlight the arguments to a word.

                : squared ( n -- n*n )
                    dup * ;
                3 squared .
                

                For example, if squared is selected then the 3 should be highlighted. There’s also Chuck Moore’s ColorForth which uses color as part of the syntax.

          2. 3

            Well, this is the internet. Good luck trying to make sense of every take.

        2. 6

          Masochists (people who enjoy pain inflicted on themselves), not sadists (people who enjoy inflicting pain on others).

          1. 2

            Ah, thank you for the correction.

            I did once have a coworker who started programming ruby in hungarian notation so that they could code without any syntax highlighting, does that work?

            1. 4

              That counts as both ;)

        3. 2

          Go to source is probably the only reason I use IDEs. Syntax highlighting does nothing for me. I could code entirely in monochrome and it wouldn’t affect the outcome in the slightest.

          On the other hand, you’re right. Tools create languages that depend on those tools. Intellij is infamous for that.

      2. 6

        You’re old and grumpy :) But seriously, the fact that it’s restricted to Github Codespaces right now limits its usefulness for a bunch of us.

        However, I think this kind of guided assistance is going to be huge as the rough edges are polished away.

        Will the grizzled veterans coding exclusively with M-x butterflies and flipping magnetic cores with their teeth benefit? Probably not, but they don’t represent the masses of people laboring in the code mines every day either :)

        1. 4

          I don’t do those things, I use languages with rich type information along with an IDE that basically writes the code for me already. I just don’t understand who would use these kinds of snippets regularly other than people building example apps or PoCs. The vast majority of code I write on a daily basis calls into internal APIs that are part of the product I work on; those won’t be in the snippet catalog this thing uses.

          1. 4

            I don’t doubt it but I would also posit that there are vast groups of people churning out Java/.Net/PHP/Python code every day who would benefit enormously from an AI saying:

            Hey, I see you have 5 nested for loops here. Why don’t we re-write this as a nested list comprehension. See? MUCH more readable now!

            1. 4

              The vast majority of code I write on a daily basis calls into internal APIs that are part of the product I work on; those won’t be in the snippet catalog this thing uses.

              Well, not yet. Not until they come up with a way to ingest and train based on private, internal codebases. I can’t see any reason to think that won’t be coming.

            2. 2

              Oh sure, I agree that’s potentially (very) useful, even for me! I guess maybe the problem is that the examples I’ve seen (and admittedly I haven’t looked at it very hard) seem to be more like conventional “snippets”, whereas what you’re describing feels more like an AST-based lint that we have for certain languages and in certain IDEs already (though they could absolutely be smarter).

            3. 2

              Visual Studio (the full IDE) has something like this at the moment and it’s honestly terrible. It always suggests inverting if statements in ways that break the logic. Another one, which I haven’t taken the time to figure out how to disable, ‘highlights’ with a little grey line at the side of the IDE (where breakpoints would be) and suggests changes such as condensing the catch blocks of your try/catches onto one line instead of keeping them nice and readable.

              Could be great in the future if it could get to what you suggested!

          2. 3

            Given that GH already has an enterprise offering, I can’t see a reason why they can’t enable the copilot feature and perform some transfer learning on a private codebase.

          3. 1

            Is your code in GitHub? All my employer’s code that I work on is in our GitHub org, some repos public, some private. That seems like the use case here. Yeah, if your code isn’t in GitHub, this GitHub tool is probably not for you.

            I’d love to see what this looks like trained on a GitHub-wide MIT licensed corpus, then a tiny per-org transfer learning layer on top, with just our code.

            1. 1

              Yeah, although, to me, the more interesting use-case is a CI tool that attempts to detect duplicate code / effort across the organization. Not sure how often I’d need / want it to write a bunch of boilerplate for me.

      3. 1

        it feels like a niftier autocomplete/intellisense. kind of like how gmail provides suggestions for completing sentences. I don’t think it’s world-changing, but I can imagine it being useful when slogging through writing basic code structures. of course you could do the same thing with macros in your IDE but this doesn’t require any configuration.

    9. 5

      I’ve been using an M1 for a while now. Screen, battery, and performance are great (but performance is not spectacular; there is still a lot of lag, just a lot less than you’re used to). Would have liked a USB port. Didn’t like the software: there’s no proper package manager (wasn’t very pleased with brew), hotkeys are very weird, no ‘snap to left side of the screen’, safari misses many features (like print selection or changing html), I have to reinstall the printer drivers after each update, and I often get random error messages in the terminal.

      1. 4

        I have to reinstall the printer drivers after each update

        Something I unfortunately didn’t know until quite recently: most printer/scanner driver packages for Macs are worthless, because macOS already knows how to talk to most printers and scanners. Scanner drivers especially are not worth installing, because there are better scanning packages available that just use the OS’s own scanning framework.

        (I wish I had found this out years ago)

      2. 4

        I have to reinstall the printer drivers after each update

        Are you sure you need printer drivers? I’ve used Macs for 20+ years and various printers, and can’t remember the last time I had to do that (though I remember quite well being annoyed at having to do it on Windows). Are you not connected over USB or the network?

        1. 2

          Good suggestion, but I really need them to enable ‘manual duplex’ printing.

      3. 3

        safari misses many features (like print selection or changing html)

        What do you mean Safari can’t change HTML? The developer tools can do that and more.

        1. 1

          Ah, so it’s an extension, that makes sense!

      4. 1

        wasn’t very pleased with brew

        Have you given MacPorts a try?

      5. 1

        and I often get random error messages in the terminal.

        Curious about this one. What kind of error messages?

        1. 1

          I should have said ‘warning messages’! But what I get a lot is:

          objc[849]: Class AMSupportURLConnectionDelegate is implemented in both /usr/lib/libauthinstall.dylib (0x1fce89160) and /System/Library/PrivateFrameworks/MobileDevice.framework/Versions/A/MobileDevice (0x1166202b8). One of the two will be used. Which one is undefined.
          
      6. 1

        no ‘snap to left side of the screen’

        Check out https://rectangleapp.com/

    10. 5

      Rationals would be a great default number type. When fractions are available you have to go way out of your way: Fraction(1, 10). Why shouldn’t 0.1 mean exactly one tenth?

      Floats can have the ugly syntax: float(0.1). You’re rounding 1/10 to the nearest float.

      1. 3

        The problem is that only rationals whose denominator divides a power of 10 get such nice syntax. For example, 1/3 cannot be written down in decimal point notation (as it would be 0.333 followed by an infinite number of threes). So, it makes more sense to use the fractional syntax for rational numbers and the decimal point syntax for floating-point numbers.

        Of course, you can have your cake and eat it too: Lisps use exactly this syntax: 1/3 for rational numbers. It’s slightly ugly when you get larger numbers, because you can’t write 1 1/3. Instead, you write 4/3, which appears rather unnatural. I think 1+1/3 would’ve been nicer and would have been consistent with complex number syntax (i.e. 1+2i). But it does complicate the parser quite a bit. And in infix languages you can’t do this because of the ambiguity of whether you meant 1/3 or (/ 1 3). But one could conceive a prefix syntax like r1/3 or so.

        It’s unfortunate that the floating-point notation us humans prefer to use is base 10, while the representation in a computer is base 2, because these don’t divide cleanly, hence the weirdness of how 0.3 gets read into a float.
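
        That mismatch is easy to observe directly; a quick Rust illustration of how 0.1 gets rounded to the nearest base-2 float:

        fn main() {
            // 0.1 has no exact base-2 representation, so the literal is parsed to the
            // nearest representable double, not exactly one tenth.
            println!("{:.20}", 0.1_f64);
            // The two sides round differently, so this prints "false".
            println!("{}", 0.1 + 0.2 == 0.3);
        }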

        1. 5

          Instead, you write 4/3, which appears rather unnatural.

          Unnatural? Nah. Maybe a bit improper, though.

        2. 2

          4/3 appears rather unnatural

          Matter of opinion.

          notation us humans prefer to use is base 10, while the representation in a computer is base 2, because these don’t divide cleanly, hence the weirdness of how 0.3 gets read into a float

          Decimal formats are a thing. Supported by some databases and used for some financial work. Ultimately it doesn’t solve the problem of ‘I want to represent arbitrary fractions with perfect fidelity’. That being said you can go further in that direction by using a more composite number; neither 10 nor 2, but maybe 12.

          in infix languages you can’t do this […] prefix syntax like r1/3 or so

          Better solution: use a different separator. E.g. in j: 1r3.

          1. 1

            Decimal formats are a thing.

            True, but I don’t know of any popular programming language which uses them as the native representation of floating point numbers.

            It works well enough for raku.

            How does it distinguish between a division operation on two numbers (which may well result in a rational number) and a rational literal?

            1. 1

              As far as I know about Raku:

              When you write 0.3 in Raku, it is treated as a Rational and not a Float, which is why 0.2 + 0.1 == 0.3. The division operator also produces a Rational internally: (3/2).WHAT => Rat, or (1/3).WHAT => Rat. Use scientific notation to create a double directly (the type will be Num). For arbitrary-precision rational numbers, you use the FatRat type.

              Rational number in Raku from Andrew Shitov course

              Floating-point number in Raku from the same course

    11. 5

      Disappointed no new MBP announcement, guess I’ll just have to wait longer.

      The Xcode Cloud announcement is interesting. Given that… is there any reason you couldn’t develop iOS apps on Linux/Windows?

      1. 3

        Sounds like the management of the CI/CD workflow is handled entirely through Xcode, and it uses Xcode’s build system to compile, test, and build apps. I imagine it’s going to be difficult to do that anywhere except a Mac any time soon.

      2. 3

        Disappointed no new MBP announcement, guess I’ll just have to wait longer.

        Hardware will be tomorrow.

        1. 7

          When they have announced hardware at WWDC, it has always been during the keynote. As that is the only part of WWDC with much media attention, why would they announce new products on any other day?

          The “leakers” have also stated that there won’t be any new hardware: https://www.macrumors.com/2021/06/07/no-hardware-at-wwdc-suggests-leaker/

        2. 2

          Will it be streamed? What time? I can’t find any info about a hardware presentation online.

    12. 12

      I wonder if there are any plans regarding first-class support for LSP (or, more generally and perhaps more usefully, first-class support for extensions providing semantic knowledge about the code).

      I know that an LSP plugin exists, but, anecdotally, folks are having trouble with it. Which I think is understandable: https://lsp.sublimetext.io/features/ says Show Code Actions: UNBOUND, and this is the second most useful thing in LSP (the first is extend selection). It’s not that the plugin is wrong: it’s just that you can only do so much if the editor lacks first class UI/UX concepts for features required to expose LSP to the user.

      1. 3

        Yeah, first class LSP support is important to me, too. But from the beta discussion a month ago, it doesn’t sound like they want to: https://news.ycombinator.com/item?id=26647731

        I’m curious about your take on that response, given your expertise.

        1. 8

          Agree that, as a protocol, LSP is not great (it’s good enough, which means we are stuck with it now). That’s why it’s better not to support LSP per se, but to support extensions which are semantically aware.

          VS Code has the right architecture there. VS Code doesn’t implement the LSP protocol. Instead, it provides a structured extension API to, e.g., display code actions or completions. These extensions are directly reflected in the editor’s UI (the 💡) but are not directly tied to LSP. It’s up to the extension to bridge the editor API and the LSP server.

          That’s why I worry about the upcoming built-in LSP in Neovim: they seem to add LSP directly to the editor, which I don’t think is the best approach, given the systems effect of the open source community.

          So far, it seems that ST makes the opposite mistake: they delegate LSP support to the plugin, but they don’t provide a structured plugin API on the editor side. If you look at VS Code’s APIs, they have a lot of high-level things like registerCallHierarchyProvider: https://code.visualstudio.com/api/references/vscode-api. ST provides mostly low-level APIs: https://www.sublimetext.com/docs/3/api_reference.html#sublime_plugin.WindowCommand.

          1. 1

            I disagree with you again, which worries me because you’re clearly the expert of the two of us :-).

            Maybe LSP for big languages is a whole different game than for small languages (or even for bespoke stuff). If you write the LSP implementation for Rust, you expect to be able to put significant effort into it, and to have people write decent extensions for most editors. In this case a semantic extension wrapping the LSP protocol can make sense and probably yields higher quality results.

            On the other hand, if, like me, you’re more excited about LSP because it lowers the barrier to entry significantly for smaller projects (random example: Idris, but I can cite more obscure stuff), then being able to interface with it directly is very useful. The language extension might still exist to provide basic syntax coloring (easier/faster than in LSP imho), and you get basic completion, goto def, etc. This will not rival Intellij, but it’ll be orders of magnitude better than nothing. And if the editor directly supports LSP at least it’ll be reasonably fast and the UI will be ok. VSCode makes this a pain in the butt because you can’t just test your LSP server, you have to write typescript, publish the extension, etc. It removes (some of) the work-saving benefits LSP was supposed to bring.

            An interesting possibility for more custom LSP servers is to have custom methods ($/<method>, iirc?) and then each editor plugin can wrap that to the tune of the editor. I wish LSP was a bit better designed, but we’re probably stuck with it indeed.

            1. 3

              I don’t think we are disagreeing: it’s indeed true that, for smaller languages, having one thing built in is better. But I think the “small” here needs to be small indeed. Like, if we take Idris as an example, VS Code has a couple of non-trivial plugins for it:

              I completely agree also that the plugin development workflow is absolutely bonkers for developers. Getting an access token to actually publish the extension is a real quest.

              However, the experience for users is really nice: they get prompted to install a plugin, and then the plugin guides them through the necessary setup (or it just works).

      2. 2

        I know that an LSP plugin exists, but, anecdotally, folks are having trouble with it. Which I think is understandable: https://lsp.sublimetext.io/features/ says Show Code Actions: UNBOUND, and this is the second most useful thing in LSP (the first is extend selection). It’s not that the plugin is wrong: it’s just that you can only do so much if the editor lacks first class UI/UX concepts for features required to expose LSP to the user.

        Doesn’t UNBOUND here just mean “doesn’t have a default keybinding”? Several of the other “UNBOUND” actions have recommendations for a specific key you could bind it to.

        edit: yeah, I just gave this a keybinding with rust-analyzer and it works fine. I don’t really get why you think there’s missing “first class UI/UX concepts for features” here – it’s just the default preferences not setting this to anything in particular. There is a UI for presenting these actions to users.

        If this is just a complaint about a lack of out-of-the-box keybindings, that’s not really a sublime-specific problem, you see the same phenomenon in a lot of emacs and atom and vscode packages too – I think that’s just down to the difficulty of avoiding clobbering something no matter what you ship with, and not everybody sharing your assessment of the criticality of “code actions” (never use the thing, myself. Nor “extend selection”. I think what most people want out of LSP is type-aware completion, personally ¯_(ツ)_/¯ ).

        edit 2: and actually, it doesn’t even need a keybinding. Reverting to out-of-the-box settings (I had some other customizations running which turned this off), it defaults to showing code actions in-line with the code when your cursor is on the line – so there’s actually multiple UI/UX ways this is presented to the user and it is UX that’s surfaced out of the box. Really struggling to parse what your complaint is here, given that this is the default presentation in vscode as well

        1. 3

          My complaint is indeed about default UX. UX matters a lot. To give a Sublime-related example, everything you can do with multiple cursors, you can do with Emacs/Vim macros. The functionality is the same: applying edits in lock step to many places. The difference is in the UX, and it is enormous.

          And yes, poor UX for semantic features is a problem in every editor except IntelliJ. I try to complain about everything I notice :)

          Not having a shortcut assigned by default is a big UX problem. As a new user, I don’t know whether a shortcut is needed and which one is convenient. It’s the developer’s job to say: “hey, we have a thousand actions, but here are the ten most important ones. Note the default shortcuts we carefully chose for them, avoiding conflicts and making combinations easy to remember”.

          On a positive note, I’ve noticed that the code actions UX is massively improved between 3 and 4. 4 now shows an indicator if code actions are available, which is indeed the core idea of the lightbulb feature. That’s exactly the kind of first-class support for semantics-aware features I want to see more of.

          I think what most people want out of LSP is type-aware completion

          I agree here. And this is exactly the problem: people don’t know what tools are available; they can only want what they know about. So folks want code completion exactly because it just pops out there without any user interaction, so they can’t not use it.

          Authors of tools generally have a better idea about which features are important, because they spend a lot of time thinking about and working with them. Exposing this knowledge about effective workflows via a polished out-of-the-box UX is a requirement for making the features create value for end users.

          1. 1

            Not having a shortcut assigned by default is a big UX problem. As a new user, I don’t know whether a shortcut is needed and which one is convenient. It’s the developer’s job to say: “hey, we have a thousand actions, but here are the ten most important ones. Note the default shortcuts we carefully chose for them, avoiding conflicts and making combinations easy to remember”.

            Ok. Not sure I agree that code actions needed to ship with a keybinding given that it surfaces in the UX as a prompt but that’s fine.

            The thing is, though, this feels like it has nothing whatsoever to do with the original comment, which you spammed both here and at hackernews. In both places you wrote:

            it’s not that the plugin is wrong: it’s just that you can only do so much if the editor lacks first class UI/UX concepts for features required to expose LSP to the user.

            The developers who decided not to ship Code Actions with a keybinding are the plugin authors, who are not the authors of the editor. And yet you wrote that the plugin wasn’t at fault and chalked it up to some implied deep failure of Sublime Text itself to provide “first class UI/UX concepts” that the plugin would have needed to expose Code Actions to the user. Again, what “concepts” are missing? The plugin could provide keybindings for it, if it so chose. It does provide a “lightbulb”-style UI for surfacing the actions by default. Nothing whatsoever appears to be missing that prevents exposing Code Actions to the user.

            What’s missing? Because I get the distinct impression here you misunderstood the features list to mean the actions weren’t surfacable and then spammed an identical rant to a bunch of forums about it.

            1. 2

              Meta note: I don’t find the “spammed” wording helpful. On HN, I replied to a direct request for questions from an ST developer. If I had seen the HN thread first, I wouldn’t have made a Lobsters comment. I put time into condensing my relevant experience (like this bit of feedback) into a paragraph; it doesn’t feel great to see it dismissed as spam.

              Let me try to clarify. There are two things that I don’t know. As I don’t closely follow ST development, I enquire about them, also providing my complementary view as an LS developer.

              • On a strategic level, I wonder what ST’s position is on “doing the semantic stuff that VS Code does”. I was surprised to read nothing about it in the ST4 announcement. I would expect to hear either “we find the LSP ecosystem useful, so we’ll work on integrating with it better” or “LSP is clearly popular, but it simply can not provide the latency guarantees we need, so we are building our own thing, stay tuned” (the thing they could build is something a-la https://lobste.rs/s/ujr9mg/how_do_you_index_code_your_projects#c_buj3rg; they already have all the infra for it, and only need to replace approximate syntax definitions with precise parsers).
              • On a code architecture level, I wonder what the relation between ST and the LSP plugin is. My bit of feedback here is that it’d be best if things like Code Actions were concepts of the editor itself, with their implementation left to a plugin (where LSP is one, but not the only, possible implementation). See how they are documented in VS Code. It’s the editor that provides keybindings and UI, but it’s the extension that populates this specific UI. Code Actions here are just an example (which I picked because I’ve seen many people stumbling over them in ST). There are a number of other things which are first-class in VS Code, but which I don’t see mentioned in the ST LSP docs (outline, breadcrumbs, semantic highlighting, selection ranges, folding ranges).
              1. 0

                Meta note: I don’t find the “spammed” wording helpful. On HN, I replied to a direct request for questions from an ST developer. If I had seen the HN thread first, I wouldn’t have made a Lobsters comment. I put time into condensing my relevant experience (like this bit of feedback) into a paragraph; it doesn’t feel great to see it dismissed as spam.

                If you don’t want to have your posts called spam, don’t copy and paste them between multiple forums, particularly when they’re misinformed and end up dominating top-level discussions in both places with people who didn’t understand that the assumptions underlying your post didn’t actually apply to LSP in ST4.

                I don’t know what to tell you. It’s spammy behaviour.

                1. 1

                  I am not going to continue this conversation, but, for transparency, here’s a link to HN discussion in question: https://news.ycombinator.com/item?id=27230406.

    13. 5

      I appreciate this run through. My continually relevant tweet from 6 years ago is relevant once again, https://twitter.com/losvedir/status/636034419359289344.

      I will say that one area where the array language influence “stuck” was with CSS. For a while I preferred one-line class definitions, with no line breaks between related classes, e.g.:

      .foo{display: flex; border: 1px solid #ddd;}
      .foo-child{flex: 0 0 100; padding: 1rem;}
      

      But then that made me more receptive to tailwind style utility CSS, so that’s where I am now.

      But array languages are so cool, and I really wonder how much is syntactic (terseness as a virtue, all these wonderful little operators), and how much is semantic (working on arrays, lifting operators to work at many dimensions). What would a CoffeeScript-like transpiler from more traditional syntax to, say, kdb/q be like?

      1. 6

        IME, the real magic of APL, and what the numerous APL-influenced array languages have consistently lost in translation, is the set of concatenative, compositional, functional operators that give rise to idiomatic APL. They have taken the common use cases, but forgone the general ones. For example, numpy provides cumsum as a common function, but APL & J provide a more general prefix scan operator which can be used with any function, no matter whether primitive or user-defined, giving rise to idioms like “running maximum” and “odd parity” to name just a couple. Likewise, numpy has inner but it only computes the ordinary “sum product” algorithm, while APL & J have the matrix product operator that affords the programmer the ability to easily define all sorts of unusual matrix algorithms that follow the same inner pattern.
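
        To make that concrete outside of APL, here is a rough Rust sketch of a scan that takes an arbitrary binary function, so the same machinery yields a cumulative sum or a running maximum (scan_with is an invented helper, not a library function):

        // A generic prefix scan: like APL/J's scan, it works with any binary
        // function, not just addition.
        fn scan_with<T: Copy>(xs: &[T], f: impl Fn(T, T) -> T) -> Vec<T> {
            let mut out = Vec::with_capacity(xs.len());
            let mut acc: Option<T> = None;
            for &x in xs {
                let next = match acc {
                    None => x,
                    Some(prev) => f(prev, x),
                };
                out.push(next);
                acc = Some(next);
            }
            out
        }

        fn main() {
            let xs = [3, 1, 4, 1, 5, 9, 2, 6];
            println!("{:?}", scan_with(&xs, |a, b| a + b));    // cumulative sum
            println!("{:?}", scan_with(&xs, |a, b| a.max(b))); // running maximum
        }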

        This is not even to mention the fantastic sorts of other operators, like the recursive power of verb or the sort-of-monadic under that AFAICT have no near equivalent in numpy.

        1. 1

          Is there a simple way for other languages to replicate the success, or do the designers just need to be brilliant?

          1. 7

            I doubt brilliance has much to do with it. It’s likely more about exposure to the concepts coinciding with the motivation required to model them in a language or library. Especially in a way that’s accessible to people who don’t have previous exposure. Learning the concepts thoroughly enough to make it simple, and doing the work required to create an artifact people can use and understand is really difficult.

            You see similar compositional surprises when looking at some of the Category Theory and Abstract Algebra inspired Haskell concepts. I imagine the current wave of “mainstream” interest in Category Theory will result in these ideas seeping into more common usage, and exposed in ways that don’t require all the mathematical rigor.

            It’s important to realize that APL-isms are beautiful, but they are especially striking to people because it’s new to them. Set theory, the lambda calculus, and relational algebra are just some things that have similarly inspired in the past (and continue to do so!) that have spread into common programming to the extent that casual users don’t realize they came from formalized branches of mathematics. In my opinion this is a good thing!

            Another exciting thing happening right now is the re-discovery of Forth. It has similar compositional flexibility, but goes about things in a very different way that corresponds to Combinatory logic. I would expect some people are going to reject the Abstract Algebra/Category Theory things as “too far removed from the hardware”, but be jealous of the compositional elegance. This will result in some very excited experimentation with combinatory logic working directly on a stack. Not that this hasn’t been happening in the compiler world with stack machines for decades…but it’s when non-specialists get ahold of things that innovation happens and things get interesting.

    14. 27

      I’m keeping both eyes on the development and evolution of Zig. As I become older — I’m a couple years shy of 40 — what I value in a programming language, and in my tools in general, is shifting. As a young 20 year old, I valued expressivity above all else: languages like Common Lisp, Smalltalk, and Ruby were what I craved. In my thirties, I’ve valued safety and performance and languages like OCaml and Rust have been great tools to use.

      I recently became a step-father and a dog owner and I have a lot less free time to keep up to date with the newest developments in Rust-land. (I haven’t even tried doing an async program yet!) As a result, I find that I now value simplicity a lot more. I’m looking for tools that I can learn quickly, keep how they work in my head without too much trouble, and get a lot of bang for the buck. Zig fits that description and I’m finding myself more in agreement with their community’s core values than Rust’s. Not sure if I’ll ever be a Zig developer, but my eyes are open and so is my mind.

      1. 11

        I think one of the amazing things about Zig is that the language is tight and guessably consistent. If I don’t know how to do something I can usually guess, and if I ask, it’s usually “oh yeah, of course that’s how you would do it”. That’s very powerful, difficult to quantify, and more relevant than you would guess.

      2. 7

        I recently became a father and I’m getting up there in years too (still a few from 40 though), and I’m not having any problems “keeping up” with Rust personally. And because my time is so limited, I really appreciate how Rust is able to catch a lot of mistakes that I might otherwise make while I’m coding in a sleep deprived state. It gives me confidence and lets me move more quickly because the compiler helps relieve a lot of my mental burden.

        Zig might do that too. I don’t know, haven’t tried it in anger yet.

        (For me personally, I don’t think “age” really has much to do with anything here, but I’ve framed it this way because that’s what you did, and I think counter-experiences are valuable.)

        1. 2

          I recently became a father

          Congrats! Me, too, back in January. Hope you’re figuring out the whole work life balance thing. That’s been a real challenge for me and I’ve had to cut back on side projects (and trying out Zig as much as I’d like). But I’m hopeful once this creature is a little more self sufficient I’ll have some time. I also recently worked out a 4 day workweek, so might get some nap time to tool around, too.

          I know you have a ton of crates you maintain and post incredibly long and detailed and informative and useful comments and posts, but I’m hoping you let those fall by the wayside if need be!

          1. 2

            Aye thanks. :-) Yes, I have a lot less time than I used to. My little guy was born in October. But yeah, I am also hoping my time will free up a bit more once he gets older. Right now it’s pretty intense. I still find a little time for coding in the evenings. Everything takes a lot longer!

            And congrats as well! Good luck!

    15. 3

      I remember being confused and amazed the first time I came across Rspec years ago. After some years of using it as a black box I finally sat down to try to imagine how the nifty syntax even worked, and came up with this little gist.

      That said, now that I’ve moved onto Elixir, I actually much prefer the ExUnit syntax:

      describe "my_func/4" do
        test "it works as expected" do
          assert foo == blah
        end
      end
      

      It’s more minimal. When I use the expect(foo).toBe(...) syntax, I feel like I’m constantly having to look up the different matchers and what they mean. A simple matching assert is much clearer to me. It’s a bit of friction every time I have to work on our frontend jest tests.

      That said, I got my start with RSpec and it taught me to think in terms of “expectations”, and how it’s even possible to write tests before the code, which I still do from time to time. So it’s a testament to how much RSpec changed programming that what was revolutionary at the time and needed these syntactical guidelines is now taken for granted and just feels clunky to me.

      1. 4

        Matchers are not just syntactic sugar; there’s another reason they exist.

        Imagine writing a testing framework in Ruby where you can write assertions the way you suggest:

        assert foo == blah
        

        How do you get Ruby to print the values of foo and blah if the assertion fails?

        You can do it, by inspecting the AST or the bytecode of the calling method at runtime, but it’s really kludgy to do in Ruby, and the interpreter must withhold optimizations for it to work (e.g. foo and blah must be stored somewhere in the stack frame, not held in registers and then optimized away since they are no longer used). That’s why in Test::Unit (or minitest) we instead write:

        assert_equal blah, foo
        

        But now you have a new problem: any new type of comparison you might want to do needs a new assertion method, so we end up with assert_equal, assert_not_equal, assert_instance_of, assert_match, assert_same, and so on. And they all end up looking something like this:

        def assert_equal(expected, actual)
          msg = build_message(expected, actual) {
            "Expected #{expected} to equal #{actual}"
          }
          assert_block(msg) { expected == actual }
        end
        

        What matchers do is take away some of the boilerplate involved in writing a new assertion. So instead of the above, we could eliminate some of the duplication like this:

        def assertx(actual, matcher, *expected)
          msg = build_message(expected, actual) {
            "Expected #{actual} to #{matcher} #{expected}"
          }
          assert_block(msg) { @matchers[matcher].call(actual, *expected) }
        end
        
        def define_matcher(name, &block)
          @matchers[name] = block
        end
        
        define_matcher(:equal) { |actual, expected| actual == expected }
        

        then use it like this:

        assertx foo, :equal, blah
        

        Once you have that, it’s a few more steps to flip it around and make it read like English, which is how we end up with expect:

        expect(foo).to.equal(blah)
        
        1. 4

          Oooh, good point. I didn’t think of that. In Elixir assert is a macro, which I see now is how we get all the developer niceties of showing what was expected and given and all that. That plus asserting against a pattern match to select out map fields and the like makes it real slick and easy feeling. But I don’t think I had considered how much the underlying language contributes to what sort of testing is possible. Thanks, this was interesting!

    16. 4

      Love to see it. I use Elixir every day at work, and am a big fan of static types, so in principle I’m the target audience. However, I’m reluctant to give up the Elixir ecosystem I love so much. E.g. I’m really excited about LiveView, Livebook, Nx, etc.

      What are the benefits / necessity of a whole new language as opposed to an improved Dialyzer, or maybe a TypeScript-style Elixir superset with inline type annotations?

      1. 11

        One issue is that existing Elixir code will be hard to adapt to a sound type system, more specifically in how pattern matching is used. For example, consider this common idiom:

        {:ok, any} = my_function()
        

        (where my_function may return {:ok, any} or {:error, error} depending on whether the function succeeded)

        Implicitly, this means “crash, via a badmatch error, if we didn’t get the expected result”. However this is basically incompatible with a sound type system as the left-hand side of the assignment has the type {:ok, T} and the function has the return type {:ok, T} | {:error, Error}.

        Of course we could add some kind of annotation that says “I mismatched the types on purpose”, but then we’d have to sprinkle these all over existing code.

        This is also the reason why Dialyzer is based on success typing rather than more “traditional” type checking. A consequence of this is that Dialyzer, by design, doesn’t catch all potential type errors; as long as one code path can be shown to be successful Dialyzer is happy, which reflects how Erlang / Elixir code is written.
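
        For comparison, here is roughly how that same “crash if it didn’t match” intent has to be spelled out in a language with a sound type system such as Rust (my_function below is a made-up stand-in with the same ok/error shape):

        // A made-up function with the same Ok/Error shape as the Elixir example.
        fn my_function() -> Result<String, String> {
            Ok("any".to_string())
        }

        fn main() {
            // A refutable pattern in a plain `let` won't compile, so the intent to
            // crash on a mismatch must be written explicitly:
            let value = my_function().unwrap(); // panics on Err
            // or, keeping the pattern-match flavour, with let-else:
            let Ok(value2) = my_function() else {
                panic!("expected an Ok result");
            };
            println!("{value} {value2}");
        }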

        1. 5

          Of course we could add some kind of annotation that says “I mismatched the types on purpose”, but then we’d have to sprinkle these all over existing code.

          This is what Gleam does. Pattern matching is to be total unless the assert keyword is used instead of let.

          assert Ok(result) = do_something()
          

          It’s considered best practice to use assert only in tests and in prototypes

          1. 1

            What do you do when the use case is “no, really, I don’t care, have the supervisor retry because I can’t be bothered to handle the error and selectively reconcile all of this state I’ve built up, I’d rather just refetch it”?

            1. 1

              Maybe we need another keyword:

              assume Ok(result) = do_something()
              
              1. 1
            2. 1

              That’s what assert is. If the pattern doesn’t match then it crashes the process.

              1. 1

                So why not use it in production?

                1. 2

                  You can for sure. I was a bit too simple there. I should have said “It is best to only use assert with expected non-exceptional errors in prototypes”. There’s a place for Erlang-style supervision in Gleam in production.

        2. 2

          This is an interesting example. I’m still not sure I understand how it’s “basically incompatible”, though. Is it not possible to annotate the function with the possibility that it raises the MatchError? It feels kind of like Java’s unchecked exceptions a bit. Java doesn’t have the greatest type system, but it has a type system. I would think you could kind of have a type system here that works with Elixir’s semantics by bubbling certain kinds of errors.

          Are you assuming Hindley-Milner type inference or something? Like, what if the system were rust-style and required type specifications at the function level? This is how Elixir developers tend to operate already, anyway, with dialyzer.

        3. 1

          I don’t see how that’s a problem offhand. I’m not sure how gleam does it, but you can show that the pattern accommodates a subtype of the union and fail when it doesn’t match the :ok.

          1. 4

            The problem is the distinction between failing (type checking) and crashing (at runtime). The Erlang pattern described here is designed to crash if it encounters an error, which would require that type checking passes. But type checking would never pass, since my_function() has other return cases and the pattern match is (intentionally) not exhaustive.

            1. 1

              Ah, exhaustive pattern matching makes more sense. But also feels a little odd in Erlang. I’ll have to play with Gleam some and get an understanding of how it works out.

      2. 4

        One thing is that TypeScript is currently bursting at the seams as developers aspirationally use it as a pure functional statically-typed dependently-typed language. The TypeScript developers are bound by their promise not to change JavaScript semantics, even in seemingly minor ways (and I understand why this is so), but it really holds back TS from becoming what many users hope for it to be. There’s clearly demand for something more, and eventually a language like PureScript / Grain / etc will carve out a sizable niche.

        So, I think starting over from scratch with a new language can be advantageous, as long as you have sufficient interoperability with the existing ecosystem.

      3. 2

        I won’t go too much into Dialyzer as I’ve never found it reliable or fast enough to be useful in development, so I don’t think I’m in a great place to make comparisons. For me a type system is a writing assistant tool first and foremost, so developer UX is the name of the game.

        I think the TypeScript question is a really good one! There’s a few aspects to this.

        Gradual typing (TypeScript style) offers different guarantees from the HM typing of Gleam. Gleam’s type system is sound by default, while with gradual typing you opt in to safety by providing annotations which the checker can then verify. In practice this ends up being quite a different developer experience: the gradual typer requires more programmer input and the will to resist the temptation to leave sections of the codebase untyped. The benefit is that it is easier to apply gradual types to an already existing codebase, but that’s not any advantage to me; I want the fresh developer experience that is more to my tastes and easier for me to work with.

        Another aspect is just that it’s incredibly hard to do gradual typing well. TypeScript is a marvel, but I can think of many similar projects that have failed. In the BEAM world alone I can think of 4 attempts to add a type checker to the existing Elixir or Erlang languages, and all have failed. Two of these projects were from Facebook and from the Elixir core team, so it’s not like they were short on expertise either.

        Lastly, a new language is an opportunity to try to improve on Elixir and Erlang. There are lots of little things in Gleam that I personally am very fond of which are not possible in them.

        One silly small example is that we don’t need a special .() to call an anonymous function like Elixir does.

        let f = fn() { 1 }
        f()
        

        And we can pipe into any position

        1
        |> first_position(2)
        |> second_position(1, _)
        |> curried_last_position
        

        And we have type safe labelled arguments, without any runtime cost. No keyword lists here

        replace(each: ",", with: " ", in: "A,B,C")
        

        Thanks for the questions

        edit: Oh! And RE the existing ecosystem, you can use Gleam and Elixir or Erlang together! That’s certainly something Gleam has been built around.

        1. 1

          Two of these projects were from Facebook and from the Elixir core team, so it’s not like they were short on expertise either.

          Oh, wow, I don’t think I’ve heard of these! Do you have any more info? And why was Facebook writing a typechecker for Elixir? Are you talking about Flow?

          1. 1

            Facebook were writing a type checker for Erlang for use in WhatsApp. It’s not Flow, but it is inspired by it. I’m afraid I don’t think much info is public about it.

    17. 5

      Interesting post. I kind of disagree that

      This component should be stateless.

      is worse than their proposed fix of

      Since this component doesn’t have any lifecycle methods or state, it could be made a stateless functional component. This will improve performance and readability. Here is some documentation.

      This just rubs me the wrong way. The revised comment doesn’t actually ask the person to do anything. And if you’re not going to ask them to do anything, then why leave a comment at all? I think it’s important to actually make it apparent that this is something you’re flagging and would like to see fixed in order to approve their PR. Not to mention, doesn’t this just make “This will improve performance and readability” the part that’s passing off an opinion as fact?

      “Could” is such a meaningless, wishy-washy word here. Of course it could - are you telling me you think it’s better? Are you just throwing out a random option with some documentation you want me to read, while not being totally invested anyway, and you want me to defend what I’ve done?

      It feels a bit condescending if anything, like “oh, I guess I just wasn’t informed enough” or else I would have known about this obviously better approach.

      But anyway, if the author here takes issue with the idea of stating opinions as facts, then perhaps a better way is a simple:

      Please make this component stateless.

      At other times, it’s appropriate to not make a direct request, perhaps when you’re somewhat junior or you’re not positive yourself and just brainstorming a suggestion. But then, contrary to the article’s version, you should make it explicit that it’s a suggestion and you’re soliciting feedback:

      What do you think about making this component stateless? Often times that improves performance and readability, though I’m not positive that’s the case here.

      I like to think of it in terms of “emotional labor”. Basically, make it easy on the author to respond to your PR, and don’t waste their time. If you feel strongly that your way is better, make the explicit request, and the person can simply implement that change without thinking about it too much, even if they’re personally on the fence about whether it’s actually better. If you’re really not sure, leave it as a question or give them an easy out, so the person can quickly choose not to make your change. But don’t leave it ambiguous how invested you are in the comment. “This could have been done this way” is not super useful, less so with a bunch of documentation to read. Now the author is in the position of trying to infer how invested you are, whether you’re actually asking for a change or just asking a question about their approach versus yours, and whether they should try to defend theirs or what. There’s rarely an unequivocally “best” approach, so an informational, request-less comment invites the person to re-litigate for themselves the pros and cons of two different approaches they likely already considered.

    18. 21

      This is a neat backstory, glad to see more “behind the scenes” rust development.

      To me, the person I feel doesn’t get enough credit (though he does get a lot) is Niko Matsakis. As I understand the progression of rust, it started as a higher level, green threaded ML variant of sorts, and ended as this low level systems programming language we know today. But the key thing that defines rust, I think, is the borrow checker ownership model, which I think is thanks mostly to Niko. So while Graydon gets the credit for creating rust, I almost feel that was a different language, and the true “father” of rust as we know it is Niko.

      And then I get to wondering what it would have been like had the language been designed around the borrow checker from the start, or if that had been bolted onto a different language. I wonder if a “C with borrowck” is possible and what that looks like. I personally love rust’s ML heritage and traits and iterators and RAII but I think it maybe turns off some hardcore low level and embedded developers, and they more than anyone are who we need to give memory safety to.

      1. 19

        What is the key thing that defines Rust?

        Borrow checker is one candidate, but that’s an implementation. I think the key thing that defines Rust is its value. Rust’s value is Graydon’s contribution. Yes, Rust had an extremely different implementation, but it always had the same value. At least from the first public release to 1.0.

        The current website says “Rust is a language empowering everyone to build reliable and efficient software”, but that’s a post-1.0 change. (I actually consider this the most significant post-1.0 change. I think it was almost a coup.)

        The previous website says “Rust is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety”. That’s it. That’s Graydon’s contribution. It implies: Rust is not a simple language. Rust is not a language that is easy to learn. Rust is not a language that is fast to compile. To achieve “fast, memory safe, thread safe”, Graydon was ready to trade off everything else.

        To see that the value is what defines Rust, consider a counterfactual: what is a simple language that is easy to learn and fast to compile? It is Go. The value is what differentiates Rust and Go, not particular implementation choices.

        1. 6

          A useful historic link about language values would be the first slide deck on Rust: http://venge.net/graydon/talks/intro-talk-2.pdf

          1. 1

            According to these slides, initial Rust was a compiled and statically typed Erlang with C-style syntax and OCaml-style semantics :) I was really excited by that approach, but if I understood correctly, it was incompatible with fast calls into C (because of the GC and the growable stacks required for lightweight threads). Then seamless integration with C was prioritized, and as a consequence the GC and the lightweight threads had to be removed; without a GC the language needed another mechanism for automatic memory management, which led to the borrow checker. Today’s Rust is very different from what was originally envisioned.

            1. 2

              This is… not the whole story, because Rust’s borrow checker preceded the removal of both the GC and green threads. In fact, one of the hardest problems faced by the design of the borrow checker was that it had to work with a GC. This is why the borrow checker is “extensible”, for example working fine with reference-counted pointers implemented in a library.

              1. 1

                Thanks for following up on this. I didn’t know, and that’s very interesting. What was the purpose of the borrow checker when there is a GC? For non-memory resources like file handles, etc.?

                1. 2

                  Thread safety

      2. 7

        On traits and iterators: a hypothetical memory-safe C would insert bounds checks like everyone else, including Rust. The primary motivation behind Rust’s iterators is bounds-check elision, not syntax sugar. The primary motivation behind Rust’s traits is to support Rust’s iterators. Memory-safe C without traits and iterators would be, say, 10% slower than Rust, or have lots of unsafe indexing.
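
        A rough sketch of the difference (the function names are mine, just for illustration). In a loop this simple the optimizer can often prove the index is in range anyway, but the iterator makes the elision structural rather than something you hope the compiler figures out:

        // Indexed access: each `xs[i]` carries a bounds check unless the
        // compiler can prove `i < xs.len()`.
        fn sum_indexed(xs: &[u64]) -> u64 {
            let mut total = 0;
            for i in 0..xs.len() {
                total += xs[i];
            }
            total
        }

        // Iterator version: no index, so there is nothing left to check.
        fn sum_iter(xs: &[u64]) -> u64 {
            xs.iter().sum()
        }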

        I agree about RAII. Zig-style defer would work too. (The difference is that defer is not bound to type.)

      3. 2

        I almost feel that was a different language, and the true “father” of rust as we know it is Niko.

        would like to read the blog post version of this.

        1. 8

          I am aware it is almost unintelligible today without context, but Niko’s two posts in 2012 are the “at the moment” record of this defining point in Rust history.

          Imagine never hearing the phrase aliasable, mutable again (November 2012) is about the semantics of borrowing, and Lifetime notation (December 2012) is about the syntax of borrowing. Note: none of the eight (!) options discussed in the syntax post is the current syntax, although option 6 is close.

      4. 2

        I personally love rust’s ML heritage and traits and iterators and RAII but I think it maybe turns off some hardcore low level and embedded developers, and they more than anyone are who we need to give memory safety to.

        Tbh I’m kind of glad that it remains, and I’d be less enthusiastic about Rust if it wasn’t! I also think it’s really nice to bring these ideas to more systems programmers, who may have never been exposed to ML-style languages. It also makes it easier for languages that come after Rust to bring even more influences from ML into the mainstream (say, module systems for example).

      5. 2

        I wonder if a “C with borrowck” is possible and what that looks like.

        Cyclone was a research “safe C” language that might be of interest. Its region analysis has been cited as a predecessor/influence on the borrow checker, from my understanding.

    19. 15

      There should be formal semantics for the borrow checker.

      Rust’s module system seems overly complex for the benefit it provides.

      Stop releasing every six weeks. Feels like a treadmill.

      The operator overload for assignment requires generating a mutable reference, which makes some useful assignment scenarios difficult or impossible…not that I have a better suggestion.

      A lot of things should be in the standard library and not separate crates.

      Some of the “standard” crates are more difficult to use than they should be. I still can’t figure out how to embed an implements-the-RNG-trait in a struct.

      Async is a giant tar pit. The immense complexity it adds doesn’t seem to be worth it, IMHO.

      Add varargs and default argument values.

      1. 12

        I genuinely do not understand what people find complex about the module system. It’s literally just “we have a tree of namespaces”.

        1. 8

          “We have a tree of namespaces. Depending on how you declare it, the namespace names a file or it doesn’t. Namespaces nest, but you need to be explicit about importing from outer namespaces. Also, there are crates, which are another level of namespacing, along with workspaces.”

          Versus something like Python: There is one namespace per file.

          (Python does let you write custom importers and such but that’s truly deep magic that is extremely rarely used.)

          I’m not saying there aren’t benefits to the way Rust does it. I’m saying I don’t feel like the juice is worth the squeeze.

          EDIT: @kornel said it better: https://lobste.rs/s/j7zv69/if_you_could_re_design_rust_from_scratch#c_3hsii6

          1. 5

            I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of rust to me

            Depending on how you declare it the namespace names a file or it doesn’t.

            New file means a new namespace (module), new namespace (module) doesn’t mean a new file.

            1. 4

              I mean … it feels like this takes 30 minutes to understand. Maybe the problem is the docs? Or maybe it just already mapped to my pre-existing mental model of namespaces, but this was one of the least surprising parts of rust to me

              It was the opposite for me, for whatever reason; it feels like there’s active friction between my mental model of namespaces and the way Rust does it. It’s weird.

              You know, I kinda got the same mental friction feeling with namespaces in Tcl. I couldn’t tell you why. Maybe I just hate nested namespaces…

            2. 2

              I’ve over and over and over again heard from beginners that the docs do a notably bad job communicating how it works, in particular the ones that are easiest to get your hands on as a beginner (the Rust book and Rust by Example). They deal almost exclusively with submodules within a file (i.e. mod {}), since multiple interrelated files are hard to show in the alternating text / playground-example idiom they decided to use.

              When they briefly do try to explain how the external file / directory thing works, they say something like “you used to need a file named mod.rs in another directory, but now in Rust 2018 you can just make a file named (the name of the module).rs”, which is a really poor explanation of how that works and is also literally incorrect. Like, you can go without mod.rs, but if you want to arrange your code into a directory structure you still need mod.rs. There have been issues on the GitHub repo for the Rust book about making the explanation coherent (or, more trivially, making it actually true), but the writers couldn’t comprehend that it isn’t immediately intuitive to beginners and have refused to make very basic changes, like having it just say something along the lines of “when you write mod foo, the compiler looks in the current directory for either foo.rs or foo/mod.rs”. A lot of the problem here is the mod.rs -> modname.rs addition: it’s an intuitive QOL improvement for people already familiar with the module system, but starting from no understanding of the module system it makes it infinitely more difficult for newbies to understand.
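
              For what it’s worth, the whole rule fits in a tiny sketch (module name made up for illustration):

              // src/main.rs
              mod foo;    // compiler looks for src/foo.rs or src/foo/mod.rs
              fn main() { foo::hello(); }

              // src/foo.rs (or src/foo/mod.rs)
              pub fn hello() { println!("hi from foo"); }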

          2. 5

            Hmm, I feel like the following set of statements covers the way the module system works:

            • We have a tree of namespaces, which is called a crate
            • Declaring a module…
              • …with just a name refers to a file in a defined location relative to the one containing the declaration
              • …with a set of curly braces refers to the content of those curly braces
            • You have to explicitly import anything from outside the current module (file or mod {} block)

            In practice, modules are almost always declared in separate files except for test modules, so it ends up being “there is one namespace per file” most of the time anyway.

            I don’t really see what about that is all that complicated.

        2. 6

          As someone who just dabbles with rust, it still confuses me. I know I’d get it if I used it more consistently, but for whatever reason it just isn’t intuitive to me.

          For me, I think the largest problem is that it’s kind of the worst of both worlds: it’s neither a purely syntactic construct nor purely filesystem-based. Rather, it requires both annotating files in certain ways and places, and also putting them in certain places in the filesystem.

          By contrast, Python and Javascript lean more heavily on the filesystem. You put code here and you just import it by specifying the relative file path there.

          On the other end of the spectrum you have Elixir, where it doesn’t matter where you put your files. You configure your project to look in “lib”, and it will recursively load up any file ending in .ex, read the names of the modules defined in there, and determine the dependency graph among them. As a developer I pop open a new text file anywhere in my project, type defmodule Foo, and know that any other module anywhere can simply, e.g., import Foo. For my money, Elixir has the most intuitive system out there.

          Bringing it back to rust, it’s like, if I have to put these files specifically right here, why do I need any further annotation in my code to use those modules? I know they’re there, the compiler knows they’re there, shouldn’t that be enough? Or conversely, if I’m naming this module, then why do I have to put it anywhere in particular? Shouldn’t the compiler know it by name, and then shouldn’t I be able to use it anywhere?

          I’m also not too familiar with C or C++ which is what it seems to be based on. I get that there’s this ambient sense of compilation units, and using a module is almost like a fancy macro that text substitutes this other file into this one, but that’s not really my mental model of how compilation has to work.

          1. 1

            Hey, thanks, this is some interesting food for thought!


            I’m also not too familiar with C or C++ which is what it seems to be based on.

            I think they’re actually based on ML modules. They’re not really similar to C/C++… I’d actually describe it as more similar to python than C/C++ (but somewhere in the middle between them).

            and using a module is almost like a fancy macro that text substitutes this other file into this one,

            I think the mod module_name; syntax is actually exactly a fancy macro that does the equivalent of text substitution (up to error messages and line numbers). Of course it substitutes into the mod module_name { module_src } form, so module_src is still wrapped in a module.
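
            A quick sketch of that equivalence (made-up module name):

            // Writing this in src/lib.rs...
            mod greet {
                pub fn hi() -> &'static str { "hello" }
            }

            // ...is equivalent to writing `mod greet;` in src/lib.rs and putting
            // the body (just the `pub fn hi` line) in src/greet.rs.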

        3. 8

          Rust’s module model conceptually is very simple. The problem is that it’s different from what other languages do, and the difference is subtle, so it just surprises new users that it doesn’t work the way they imagine it would.

          Being different, but not significantly better, makes it hard to justify learning yet another solution.

        4. 2

          Do I need to declare my new mod in main.rs or in lib.rs? What about tests? Why am I being warned about unused code here, when I use it? Why can I import this thing here but not elsewhere?

          I think the way all the explicit declaration stuff works is really unnerving coming from Python’s “if there’s a file there you can import it” strategy. Though I’m more comfortable with it now, I still wouldn’t be confident about answering questions about its rules.

      2. 9

        What benefit is there to releasing less often?

        1. 11

          Another user on here (forgive me, I can’t remember who) said it well: if I cut my pizza into 12 slices or 36 slices, it’s the same amount of pizza but one takes more effort to eat.

          Every six weeks I have to read release notes, decide if what’s changed matters to me, if what counts as “idiomatic” is different now, etc. 90% of the changes will be inconsequential, but I still gotta check.

          Bigger, less frequent releases give me the changes in a more digestible form.

          Note that this is purely a matter of opinion: obviously a lot of people like the more frequent releases, but the frequent release schedule is a common complaint from more than just me.

          1. 3

            This would be purely aesthetic, but would bundling release notes together and publishing those every 2 or 3 releases help?

            1. 8

              Rust tried to do it with the “Edition Guide” for 2018 which — confusingly — was not actually describing features exclusive to the new 2018 parsing mode, but was a summary of the previous couple of years of small Rust releases.

              The big edition guide freaked some people out, because it gave the impression that Rust had suddenly changed a lot of things and that there were two different Rusts now. I think Rust is damned here no matter what it does.

      3. 2

        Not sure what issue you’ve hit with embedding something that implements the Rng trait in a struct. Here’s an example that does just that without issue.
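
        Something along these lines works, for example (a minimal sketch assuming the rand 0.8-style API; Dice and roll are made-up names). Making the struct generic over R: Rng also sidesteps the object-safety question entirely:

        use rand::rngs::StdRng;
        use rand::{Rng, SeedableRng};

        // Generic over any Rng implementor, so no trait object (and no
        // object-safety concern) is involved.
        struct Dice<R: Rng> {
            rng: R,
        }

        impl<R: Rng> Dice<R> {
            fn roll(&mut self) -> u32 {
                self.rng.gen_range(1..=6)
            }
        }

        fn main() {
            let mut dice = Dice { rng: StdRng::seed_from_u64(42) };
            println!("rolled {}", dice.roll());
        }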

        1. 1

          Replying again just for future reference.

          I don’t remember exactly what I was doing but I ended up running into this:

          for a trait to be "object safe" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit <https://doc.rust-lang.org/reference/items/traits.html#object-safety>
          

          Point is, I got to that point trying to have an Rng in a struct and gave up. :)

          My solution was to put it in a Box, but that didn’t work for the one Rng trait I actually wanted (whichever one includes the seed functions).

          Either way, I obviously need to do more research. Thanks.

        2. 1

          Thank you, I appreciate that. My problem boils down to not knowing when to use Box and when to use Cell, apparently.

          1. 3

            Box is an owned pointer; despite being featured so prominently, it doesn’t have many uses. It’s basically good for:

            • Making unsized things (typically trait objects) sized
            • Making recursive structs (otherwise they’d be infinitely sized)
            • Efficiency (moving big values off of the stack)
            • C ffi
            • (Probably a few things I forgot, but the above should be the common cases)

            RefCell is a single-threaded rw-lock, except that it panics where a lock would block, because blocking on a single-threaded lock would always be a deadlock. Its purpose in life is to move the borrow checker’s uniqueness checks from compile time to runtime.

            In this case, you don’t really need either. We can just modify the example so that make takes a mutable reference, and get rid of the RefCell. See here: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=6f64a7192a1680181200bf577c285b9d
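
            For a sense of the trade-off, a sketch of the two shapes (assuming the rand 0.8-style API; the struct names, field, and method bodies here are made up, only the make signatures matter):

            use std::cell::RefCell;
            use rand::Rng;

            // RefCell version: make() takes &self; uniqueness of the mutable
            // borrow of the RNG is checked at runtime (panics if violated).
            struct CharacterMakerCell<R: Rng> {
                rng: RefCell<R>,
            }

            impl<R: Rng> CharacterMakerCell<R> {
                fn make(&self) -> u8 {
                    self.rng.borrow_mut().gen_range(3..=18)
                }
            }

            // &mut self version: no RefCell; the borrow checker enforces
            // uniqueness at compile time, but callers need a mutable binding.
            struct CharacterMakerMut<R: Rng> {
                rng: R,
            }

            impl<R: Rng> CharacterMakerMut<R> {
                fn make(&mut self) -> u8 {
                    self.rng.gen_range(3..=18)
                }
            }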

            1. 2

              Yup, I used RefCell here because I don’t think the changing internal state of the random number generator is relevant to the users of the CharacterMaker, so I preferred make to be callable without a mutable reference, but that’s an API design choice.

    20. 5

      The caption on the photo with your fist was excellent. Never use this in public!

      1. 2

        Yeah, I laughed out loud when I saw that. The whole post was quite amusing, though. Loved the writing style.