Threads for ondreian

    1. 1

      I flagged this as spam; it’s just marketing fluff.

    2. 6

      It used to be npm hell, and yarn was a balm on that horrid experience, but yarn 3 was abysmal; outside of work I have stopped using any of these tools in favor of Bun. None of the other package managers in the Node.js ecosystem come close to behaving sanely, and with Bun I don’t have to spend hours finding what webpack/whatever skeleton config I need to make stuff that should Just Work actually work. Maybe Yarn 4 will walk back some of the design mistakes, but this time I’m going to wait and see.

      1. 3

        So I’m just a driveby-JS-coder at best, but this comment confused me. I’m using npm/yarn/whatever to get packages with the right version and put them in the right folder. What does “behaving sanely” here mean? What does yarn have to do with webpack config? (if you mean specifying the right import paths, doesn’t that apply to all js package managers the same?)

        1. 1

          I’m using npm/yarn/whatever to get packages with the right version and put them in the right folder.

          Are you sure they are the right version? What happens if you check out a contributor’s work and do npm start or yarn start? Does either verify the contents of your node_modules directory against the new lockfile if it changed? I currently solve this by requiring a yarn install in a git hook on pulls (sketched below), but this feels like basic functionality for a package manager.
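
          Roughly what that hook amounts to, as a minimal sketch (a Node script wired up from .git/hooks/post-merge; the paths and the plain yarn install invocation are illustrative, not prescriptive):

            // post-merge hook sketch: re-install dependencies whenever the lockfile
            // changed in the pull/merge that just completed.
            import { execSync } from "node:child_process"

            const changed = execSync(
              "git diff-tree -r --name-only --no-commit-id ORIG_HEAD HEAD",
              { encoding: "utf8" }
            ).split("\n")

            if (changed.includes("yarn.lock")) {
              console.log("yarn.lock changed, running yarn install...")
              execSync("yarn install", { stdio: "inherit" })
            }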

          The --frozen-lockfile functionality of Yarn 3 is fundamentally wrong behavior: it still tries to modify the lockfile, and now just exits the process with a non-zero exit code instead of constructing the node_modules directory exactly as the lockfile lays it out.

          There are dozens of footguns here; these are just two that I have dealt with in the past year and feel like writing about off the top of my head.

          When your package manager has exotic, unreliable behavior, it makes it quite a lot harder to write reliable software on top, but your mileage may vary.

          My diatribe about webpack config is more of a comment on the state of Node.js land: almost nothing works out of the box for long if you are updating your dependencies as you should, whether that is your build configuration (webpack) or your package manager (yarn).

          1. 2

            What language package manager does an install/verify before starting a script? Perl/Ruby/Python/JS package managers will just invoke the script as you say. I think Go and Gradle always ensure up-to-date dependencies before a run - except in both cases it’s much easier to end up running stale/outdated artifacts by accident; compiled languages seem to trade “stale dependency” for “stale user code”.

            1. 2

              I think cargo would be one example.

    3. 2

      Nice to see you still out in the wild writing code and working to solve problems in our ecosystem @tipiirai

      1. 2

        Thanks! This warms my heart

    4. 7

      I really wish the class and private stuff was never added to JavaScript. Good to see the simple closure is the smallest.

      1. 1

        More bolted on sigils that have no real value, what’s not to love!?

    5. 4

      i also verified this one myself by checking the bucket (kick-files-prod) contents using the aws cli, and have started archiving as much of the bucket as i can (at the time of writing that is around 50+gb of mostly user generated content). a quick check verifies that at the very least the bucket does not allow for public write or delete access; publicly allowing read access is still pretty bad nevertheless.

      …What? That’s just as sketchy as kick.com letting you do that, not to mention that you have no actual idea what you are downloading and could end up with something really bad on your computer.

      1. 8

        they literally don’t care about that - scrolling down / wikipedia can verify this

      2. 3

        Do you mean bad cybersecurity wise?

        I’d expect maia to do their pentesting work from a laptop that doesn’t have her private secrets on it. Just reimage if you’re feeling sus.

        1. 6

          It’s more about things that someone uploaded there, which may be illegal on any device.

    6. 9

      When I was getting started with TypeScript (and node.js) a year ago, I found it a pretty delightful developer experience except for modules. Especially since I needed my code to work on both front and back end. To this day I don’t like to touch those magic declarations in tsconfig.json files — I got them to work somehow, but I don’t know exactly what they do, and looking at them makes my stomach hurt.

      1. 3

        This really is the worst part of the JS ecosystem right now, hands down.

      2. 2

        yeah, I think if, instead of the config hell that is so common in Node.js land, we had sane defaults, things would be so much nicer.

        Between babel, webpack, and tsconfig it’s a maddeningly awful DevEx.

    7. 15

      There are many criticisms I would level at GHA, but discarding the entire build environment between runs is not one of them. Preserving random bits of the build user’s home directory between runs (“dependency caching”) may speed up some CI jobs, but it will also mean you don’t catch certain errors and your job output necessarily depends on what jobs have run before in some way. It’s not a safe default everybody can just switch on without understanding what it means.

      It’s also a feat of mental gymnastics to suggest that no timeout is a poor default for new jobs, and then suggest that the way to come up with a good timeout is to use the average runtime of past runs of the job you’re creating (a job which, being new, has no past runs)!

      1. 2

        I think the right way of doing dependency caching is with a container. This guarantees a base environment that has only the dependencies that you expect. For GitHub, you can use the same container as the base layer of a dev container (build tools in the CI container, any additional user-facing tooling in the dev container), so your contributors can have precisely that environment for Codespaces.

      2. 2

        Most languages have become good at not giving access to dependencies that you didn’t explicitly install.

        You are right that dependency caching can lead to bugs and if that’s a bigger concern than developer velocity it is best to not enable it.

        No default timeouts is indeed wrong. A few badly running jobs consumed all my minutes! And that’s how I learned about it.

      3. 1

        i agree wholeheartedly about dependency caching; every time I’ve added it to a CI service that wasn’t Elixir+hex I’ve regretted it.

    8. 2

      More proof that npm is trashware we shouldn’t be using. It is incredibly unfortunate that it still has this stranglehold on the JavaScript ecosystem.

    9. 13

      The list of issues present in this article boils down to “I don’t like their markdown spec” and “they’re closed source”, which I think are incredibly weak arguments.

      GitHub is a proprietary, trade-secret system that is not Free and Open Source Software (FOSS).

      (from the Software Freedom Conservancy post that was linked:)

      In the central irony, GitHub succeeded where SourceForge failed: they have convinced us to promote and even aid in the creation of a proprietary system that exploits FOSS.

      What is “exploiting FOSS”? How is GitHub exploiting FOSS by being closed source? None of the Freedoms granted by free software are prevented when using GitHub’s platform. I don’t understand why every commercial usage of free software is demonized to hell and back. Free software would be far less useful and prominent if it wasn’t for its ability to be used by businesses. Encouraging businesses to use (and lock-in to) FOSS is a better choice than completely segmenting the ecosystems. The proliferation of FOSS depends on commercial usage.

      1. 10

        i would imagine they are talking about GitHub Copilot and how they trained it on FOSS without respect for licenses, which absolutely is exploitative

        1. 4

          All of my open source work is released under a BSD license, by my choice. How is it “exploitative” for GitHub to do something I’ve explicitly granted them permission to do? Even before I’d agreed to their terms of service, even before GitHub existed, I was perfectly content to offer code to the world under a permissive license and was already doing so.

          1. 5

            For BSD-licensed code, it’s fine. But Copilot is also trained on GPL-licensed code, which forbids proprietary software being derived from it.

            1. 4

              First of all, it’s relevant to point out that not all of us use copyleft licenses, and many people instead use permissive licenses which explicitly say we’re OK with this. Too many Copilot discussions, and “corporate exploitation of open source” discussions in general, are framed as if the entire community is using copyleft licenses when it is not at all the case.

              And for Copilot versus the GPL, I expect nobody will ever bother trying to enforce, and if someone does it will simply be ruled in court that GitHub’s terms of service make it work. To lay it out as simply as possible, when putting code on GitHub you are making two license grants. One of those license grants is to the world at large via whatever you put in your LICENSE or license-indicator metadata, and the other license grant is specifically to GitHub, and is defined in GitHub’s terms of service. GitHub will almost certainly argue that the second license grant permits Copilot to do its thing, and if their legal team has done their jobs, the license grant in the terms of service will permit Copilot to do its thing.

              This leaves only the possibility of someone uploading code for which they do not have sufficient rights to make the GitHub-terms-of-service license grant, but the terms also require that person to indemnify GitHub, so the person who ends up in a lot of legal pain is still not GitHub – it’s the person who uploaded the code without being able to grant the associated license to GitHub.

        2. 3

          I think I would encourage using NNs trained on Free software. It’s creatively transforming the work to provide creativity and automation to others. That is the same reason I would encourage others to transform and spread Free software. I don’t think that’s exploitative to the authors of the original software.

          As for other-licensed software, I don’t particularly care about their licenses, because I would prefer all software to be Free, so I don’t see GitHub “exploiting” other authors’ work. It was clear in my test run of Copilot that it wasn’t copying some other author’s work verbatim, it was genuinely learning the patterns in my codebase. I don’t think patterns of code can or should be copyrighted anyway.

          1. 6

            For me, it’s not so much that GitHub’s efforts around Copilot and Codespaces are exploitative as they are one-way gates. They extract business value from open source software and lock it inside their ML models, managed cloud infrastructure, and proprietary APIs.

            Yes, there’s almost certainly more open source code in the world because of GitHub’s efforts, but a whole lot of it is useless without proprietary services you can only rent, not buy, and certainly not fork and hack on yourself.

            It’s a kinder, more self-aware Microsoft, but it is still Microsoft. Their playbook has always included being the foremost vendor of platforms and APIs that are “open” in the sense of being documented and accessible to developers, but not in the sense of having any choice of provider, terms, or implementation.

            1. 2

              They extract business value from open source software and lock it inside

              But that’s what businesses do. They put in work to add value to their inputs, and then sell the outputs at a premium. How is this fundamentally different from going to a field to pick blueberries, cooking them into jam, and then locking the jam away inside jars to sell at the market? (Except that in this case GitHub didn’t even take away the blueberries.)

              1. 3

                The correct analogy would be the owner of the field telling you that you can pick blueberries, but everything you make from them must also be freely accessible.

                1. 3

                  No, that would be the model of the original GPL. “I will give you this blueberry jam, but in exchange you must share the recipe whereby they were created.”

                  GitHub says, “cool, thanks, but no. I’m gonna take this jam and feed it into my jam-copier, then let people choose it as a flavor to mix in the jam I sell them at my farm stand next door to the community garden. The copier is my proprietary secret invention, though, so you can’t see inside and know how I extract and mix the flavors, and I’m not going to credit any of the cooks who contributed.”

                2. 1

                  Well, here we run into the usual issue where intellectual property is trivially copyable while blueberries aren’t.

                  The more interesting point specific to Copilot is whether the neural net is a derivative work of the GPL’d code it was trained on. I don’t think it is — the amount of transformation that’s gone on is too high. If I’m allowed to read some GPL code to study how it works and then write my own code that does something similar (i.e. there’s no “clean-room” requirement the way there is with reverse engineering), I think that also allows for what Copilot is doing. Not because Copilot is intelligent, but because both involve similar degrees of transformation.

        3. 1

          Too lazy to find the source, but there was a lawyer who stated that, like generative art, it would likely be better for freedom if all generated code like this were required to be licensed as 0BSD (the CC0 of software).

      2. 2

        “I don’t like their markdown spec” and “they’re closed source”, which I think are incredibly weak arguments.

        I’ll grant you the markdown spec bit, but the closed source argument is the single strongest argument possible in this space.

        How on earth did so much of the open source community come to depend on a proprietary, closed-source system run by an organisation that has historically been one of the most antagonistic towards open source software?

        1. 1

          Github offered a superior product for no to low cost.

          Their dominant position in Open Source development was built before they were acquired by Microsoft.

          Not everyone involved in developing open source software cares about the philosophical underpinnings of Free Software, or is even aware of the rift between FS and Open Source/permissive licensing. Anyone choosing an MIT license, for example, shouldn’t have any problem hosting it on a platform that doesn’t publish the code they use.

          1. 2

            Github offered a superior product for no to low cost.

            … up-front.

            Don’t get me wrong, I was an enthusiastic user and promoter of Github for years. It was so much better to use than Sourceforge in so many ways. My “how on earth” was more of a rhetorical hand-wringing than a serious question; I know exactly how, because I was there.

            In hindsight, though, I believe it was a mistake.

        2. 1

          one of the most antagonistic towards open source software

          I don’t see this being applicable to GitHub when they have enabled the development of so much open source software. I would love for GH to be open source, and I’m glad that free alternatives exist, but I don’t view the open source-ness of the platform I’m using to be a deciding factor. Especially when it’s relatively simple to jump off of GH. Git is still decentralized here - nothing is tying you to GH aside from logs of issues / PRs (which are also able to be migrated). Your core software and audience isn’t directly tied to the platform.

          I empathize with the sentiment, but I’m not going to jump ship solely because they’re closed source. Otherwise, if I wanted to stay consistent, I wouldn’t use any software that was closed source. That would be a cool world to live in, but for now closed source software, in a lot of scenarios, is the best we have.

          1. 1

            but for now closed source software, in a lot of scenarios, is the best we have.

            Yes, that’s true, and I use a fair bit of it myself - from binary blobs on ARM systems to my Chromecasts attached to my TVs.

            But those scenarios do not include software forges :)

    10. 50

      I assume some people don’t like Facebook, so I reformatted the text and included it here:

      This is written by Jon “maddog” Hall

      This is the long-promised Christmas present to all those good little girls and
      boys who love GNU/Linux.
      
      It was November of 1993 when I received my first CD of what was advertised as "A
      complete Unix system with source code for 99 USD".   While I was dubious about
      this claim (since the USL vs BSDi lawsuit was in full swing) I said "What the
      heck" and sent away my 99 dollars, just to receive a thin booklet and a CD-ROM
      in the mail.   Since I did not have an Intel "PC" to run it on, all I could do
      was mount the CD on my MIPS/Ultrix workstation and read the man(1)ual pages.
      
      I was interested, but I put it away in my filing cabinet.
      
      About February of 1994 Kurt Reisler, Chair of the UNISIG of DECUS started
      sending emails (and copying me for some reason) about wanting to bring this
      person I had never heard about from FINLAND (of all places) to talk about a
      project that did not even run on Ultrix OR DEC/OSF1 to DECUS in New Orleans in
      May of 1994.
      
      After many emails and no luck in raising money for this trip I took mercy on
      Kurt and asked my management to fund the trip.   There is much more to this
      story, requiring me to also fund a stinking, weak, miserable Intel PC to run
      this project on, but that has been described elsewhere.
      
      Now I was at DECUS.  I had found Kurt trying to install this "project" on this
      stinking, weak, miserable Intel PC and not having much luck, when this nice
      young man with sandy brown hair, wire-rim glasses, wool socks and sandals came
      along.  In a lilting European accent, speaking perfect English he said "May I
      help you?" and ten minutes later GNU/Linux was running on that stinking, weak,
      miserable Intel PC.
      
      I sat down to use it, and was amazed. It was good. It was very, very good.
      
      I found out that later that day Linus (for of course it was Linus Torvalds) was
      going to give two talks that day.  One was "An Introduction to Linux" and the
      other was "Implementation Issues in Linux".
      
      Linus was very nervous about giving these talks.   This was the first time that
      he was giving a talk at a major conference (19,000 people attended that DECUS)
      to an English-speaking audience in English.   He kept feeling as if he was going
      to vomit.   I told him that he would be fine.
      
      He gave the talks.  Only forty people showed up to each one, but there was great
      applause.
      
      The rest of the story about steam driven river boats, strong alcoholic drinks
      named "Hurricanes", massive amounts of equipment and funding as well as
      engineering resources based only on good will and handshakes have been told
      before and in other places.
      
      Unfortunately the talks that Linus gave were lost.
      
      Until now.
      
      As I was cleaning my office I found some audio tapes made of Linus' talk, and
      which I purchased with my own money.  Now, to make your present, I had to buy a
      good audio tape playback machine and capture the audio in Audacity, then produce
      a digital copy of those tapes, which are listed here.  Unfortunately I do not
      have a copy of the slides, but I am not sure how many slides Linus had.  I do
      not think you will need them.
      
      Here is your Christmas present, from close to three decades ago.   Happy
      Linuxing" to all, no matter what your religion or creed.
      
      And if you can not hear the talks, you are probably using the wrong browser:
      

      Introduction to Linux:

      https://drive.google.com/file/d/1H64KSduYIqLAqnzT7Q4oNux4aB2-89VE/view?usp=sharing

      Implementation Issues with Linux:

      https://drive.google.com/file/d/1Y3EgT3bmUyfaeA_hKkv4KDwIBCjFo0DS/view?usp=sharing

      1. 28

        Thanks!

        Also I mirrored this on archive.org so people can find this after google no doubt caps the downloads.

        https://archive.org/details/199405-decusnew-orleans

      2. 13

        Thanks! I really appreciate you posting the text.

        It’s not so much that I don’t like Facebook, as that I literally cannot read things that are posted there, because it requires login and I don’t have an account. In my professional opinion as a privacy expert, neither should anyone else, but I realize that most people feel there isn’t really a choice.

        1. 3

          I don’t have a Facebook account either (and agree that neither should anyone else), but this post is actually publicly available so you should be able to read it without one. (I did, as I got to the post via the RSS feed, rather than the site so didn’t see the post.)

          1. 1

            That’s very interesting and good to know. I wonder whether it checks referrer or something? I do definitely get a hard login wall when I click it here.

            (Sorry for the delayed reply!)

      3. 11

        Someone also linked the slides in the archive.org link :)

        http://blu.org/meetings/1994/08/

      4. 3

        Does anyone have links to the referenced anecdotes “described elsewhere”?

      5. 3

        This format on Lobsters is really bad on mobile with the x-overflow, weird.

        1. 5

          The parent put the quote in a code block instead of in a blockquote.

        2. 2

          The link that @neozeed posted to archive.org has the same text and is much easier to read on a mobile device.

      6. 2

        Thumbs up @Foxboron. I usually go out of my way to isolate facebook into a separate browser. I do have to say that this content was worth the facebook tax.

    11. 1

      Rewriting Redis in a language other than C might be a worthwhile endeavor, but why Ruby? I looked for an answer to the question on the landing page or hints at it from chapter titles. Finding none, I bounced out…

      1. 3

        “Who is this for?” is in big bold letters above the fold, even on my phone.

        taken from the author’s opening statements:

        Anyone who worked with a web application, regardless of the language, should have enough experience to read this book.

        Considering Ruby is one of the most widely comprehended web languages, with a very strong stdlib for TCP, threads, and a lot of the other things he would need to do this, it seems ideal for illustrating the concepts.

        I appreciate the author taking the time to step through technology that powers a lot of the underlying systems web developers use, with a language they are probably more comfortable grokking; surely if this were more common the web would be a better place.

      2. 1

        For educational purposes?

    12. 1

      This is also something I’ve been using Ecto for (and validating a JSON body). It works pretty well and it’s nice to have One Less Thing to learn sometimes.

        1. 1

          The technique is so simple that I doubt it needs a separate reusable library on top of Ecto. Just implement a module with the schema and a validator function for each type you want to validate. E.g. here’s a code extract from one of my experiments a few years ago. It looks remarkably similar to OP:

          defmodule RedemptionApiWeb.CaptureRequest do
            use Ecto.Schema
            import Ecto.Changeset
            alias RedemptionApiWeb.Request
          
            @required [:redemption_id, :member_id, :program_id, :miles]
          
            schema "CaptureRequest" do
              field(:redemption_id, :string)
              field(:member_id, :string)
              field(:program_id, :string)
              field(:miles, :integer)
            end
          
            @spec validate(%{
                    redemption_id: String.t(),
                    member_id: String.t(),
                    program_id: String.t(),
                    miles: String.t()
                  }) :: Request.t()
            def validate(params),
              do:
                %__MODULE__{}
                |> cast(params, @required)
                |> validate_required(@required)
                |> validate_length(:redemption_id, min: 1)
                |> Request.validate_common()
          end
          
    13. 23

      I’ve been tinkering with copilot in some side projects and for anything beyond simple stuff it has proven not just bad, but dangerous. As a concrete example, it seems to have been reinforced to use innerHTML basically for everything, even displaying simple text. Everything it generates needs to be incredibly thoroughly reviewed.
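
      A trimmed-down illustration of the pattern I mean (not Copilot’s literal output, just the shape of the suggestions):

        // Suggested-style code tends to reach for innerHTML even for plain text.
        const status = document.createElement("div")
        const userName = '<img src=x onerror="alert(1)">' // imagine this came from user input

        status.innerHTML = `Welcome back, ${userName}`   // parsed as HTML: attacker-controlled markup runs
        status.textContent = `Welcome back, ${userName}` // plain text: what "displaying simple text" should use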

      To me, the whole benefit of this tool was to help provide a sort of feedback loop to junior developers to help them think about different ways they could solve a problem. Instead you need a deep understanding of what suggestions are or are not appropriate, basically the exact opposite outcome.

      I foresee a lot of auto-generated security issues if people start using this sort of software as a way to reduce the cost of development.

    14. 2

      I don’t think the author does a good job of showcasing this, and this is likely to be an unpopular opinion here, but having quite a bit of experience developing stuff in JavaScript at this point, there is wisdom in having a falsy version of primitives. It both eliminates needless bits and, when used properly, can eliminate a whole tree of Maybe types.

      Let’s say you are given this contrived (TypeScript) code, just to illustrate what I mean:

      type Person = 
       { name: string
       ; title    : string
       } 
      const titleIsAbbreviated = (person : Person) : boolean => person.title.includes(".")
      const hasTitle = (person : Person) : boolean => !!person.title
      // from some form or whatever
      const sarah = {name: "Sarah", title: ""}
      hasTitle(sarah) // => false
      titleIsAbbreviated(sarah) // => false
      

      Many Google JavaScript SDKs are very good about being cognizant of this, even if I have other issues with their quality. By removing undefined | null | string and always having the type be string you can vastly reduce the number of problems. This is in fact very common in core browser APIs for precisely this reason, which unfortunately most people are very unfamiliar with these days.

      For example:

      const audio = document.createElement("audio")
      audio.currentSrc // => ""
      audio.currentSrc.startsWith("blob:") // this is still safe because it's a string!
      

      tldr; coercion is a powerful tool and with great power comes great responsibility

    15. 4

      Thanks for sharing, was a nice read, even if it felt like the writer and I share a lot of positions on stuff and echo chambers can be dangerous for growth :slight_smile:.

      The only claim I would say is completely wrong is:

      Svelte and Typescript are very useful, but the cost of building is always more than you think, especially once the codebase grows.

      The difference in the cost of building additional features in TypeScript vs JavaScript is exactly zero; the costs are still there either way. The question is: do you want your tooling to tell you about all the costs of making a change, or do you have to discover, remember, and track them yourself? I know which one I would choose.
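
      A contrived example of what I mean (illustrative only): rename a field, and the cost of the change exists either way; TypeScript just surfaces it before you ship.

        type Person =
         { fullName: string // the field used to be called `name`
         }

        const greet = (person : Person) : string =>
          // `Hello, ${person.name}` still "works" in plain JavaScript and prints "Hello, undefined" in production;
          // TypeScript refuses to compile it: Property 'name' does not exist on type 'Person'.
          `Hello, ${person.fullName}`

        greet({ fullName: "Sarah" }) // => "Hello, Sarah"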

      1. 1

        I think he was referring to the time it takes to compile TypeScript to JavaScript (build time, not time to build/add new features). As opposed to files on your computer that your browser already understands and somehow reloads straight away when edited. He’s saying there’s a trade-off between having a tight feedback loop and whatever benefits you get from your compiler/webpack/whatever.

        1. 1

          in my experience it still takes more time to verify non-trivial behaviors by hand than to have your compiler tell you, “hey, you just messed this up”, and there is a significantly greater chance a bug makes it from your localhost to production. that is not free in any sense.

          that tight loop might be cool for your first couple of features, but as they get more complex you’re going to end up chasing ghosts. i will never start a new project in anything but typescript going forward, because anything interesting invariably gets rewritten in typescript anyway.

    16. 1

      Pretty cool article from Google, thanks for sharing. It was a great walk-through of developing something over time and of how to think about getting buy-in from your colleagues.

      Changing cached lookup by removing the cache and always recalculating yields functionally equivalent code, undetectable by conventional testing.

      I don’t know what is considered conventional testing at Google, but this is trivially detectable in a property-based test by benchmarking the first couple of runs and comparing their runtimes, since your first run should always be the slowest. I find it kind of surprising they don’t do stuff like this; it doesn’t need some big property-based framework or anything.
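
      Something along these lines, as a rough sketch (not whatever Google’s harness actually looks like; the lookup, the iteration count, and the 10x threshold are all made up):

        // If someone silently removes the cache, the "warm" run stops being meaningfully
        // faster than the "cold" run and this check fails.
        import { performance } from "node:perf_hooks"

        const cache = new Map<number, number>()
        const expensiveLookup = (n: number): number => {
          const hit = cache.get(n)
          if (hit !== undefined) return hit
          let acc = 0
          for (let i = 0; i < 5_000_000; i++) acc = (acc + n * i) % 1_000_003 // stand-in for the real work
          cache.set(n, acc)
          return acc
        }

        const time = (fn: () => void): number => {
          const start = performance.now()
          fn()
          return performance.now() - start
        }

        const cold = time(() => expensiveLookup(42)) // first run pays the full cost
        const warm = time(() => expensiveLookup(42)) // should hit the cache

        if (!(warm < cold / 10)) { // threshold is arbitrary; tune per workload
          throw new Error(`cache appears to be gone: cold=${cold.toFixed(1)}ms warm=${warm.toFixed(1)}ms`)
        }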

    17. 9

      The whole blog is dedicated to bashing Proctorio, how weird

      1. 17

        Eh. It claims to be about “exam spyware analysis” and bashes the similarly named ProctorTrack too. I don’t find it too weird; if I were forced to use such software, I would probably inspect it and might go so far as to write a blog complaining about it, if reporting the problems I saw in other ways brought no joy.

      2. 9

        I am forced to use Moodle when I teach and I have definitely considered launching a novelty Twitter account just to bash the software. Not so much because I think it would matter, but because doing so might be cathartic… :-)

        1. 4

          And Moodle, clumsy as it is, is one of the less bad of the bunch, in my limited experience.

        1. 4

          He (or she) created a blog just for bashing a single company. Even the domain is “proctor.ninja”. Maybe they are an employee, maybe they think Proctorio is the worst evil, and maybe it is, but they definitely have a grudge.

          1. 10

            Proctorio has a history of suing experts who critique their shady practices. I would probably attempt to remain incognito if I were making these claims as well.

      3. 1

        There was an entire blog dedicated to how xkcd sucks. It seems like there are … several now. The original (with the hyphen) was hilarious.

        There’s also an entire mastodon instance dedicated to SalesForce fandom .. or at least it seems at first. It’s difficult to tell if they’re really fans, making fun of it ironically or a little of both.

      4. 1

        Wait until you find out about https://twitter.com/memenetes. They even sell merch about bashing Kubernetes! XD

    18. 8

      I love Crystal: it’s got a reasonably strong stdlib, and once it compiles I’m reasonably confident it will run for a very, very long time. Most importantly, the compiler is helpful and not pedantic. I have several cron jobs/data processors written in Crystal that I rarely touch and that have been running for > 2 years now without a single issue, and whenever someone asks me to add a feature, it takes me all of 0.5s to figure out where I just borked something after adding it, despite not having touched the code for 6+ months.

      1. 6

        …once it compiles I’m reasonable confident it will run for a very, very long time.

        This is usually seen as a negative. People want their programs to be fast.

        /s

      2. 1

        Are those jobs for work or for pleasure? If work, do you use Crystal at work frequently?

        1. 1

          A bit of both, but in this case I was specifically talking about work. I wouldn’t say frequently, but if it’s something I’m going to be the sole maintainer of for the foreseeable future, it’s in the mix.

          1. 1

            That’s pretty cool! How did you get a pre-1.0 language approved?

            1. 7

              How did you get a pre-1.0 language approved?

              That’s just not really how our company culture works; it’s less “what can I get approved?” and more “am I comfortable asking this of my teammates?”.

              I also think it’s a bit unfair to say pre-1.0 Crystal wasn’t production-ready. Maybe I’m jaded by having been in Browserland so long now, but there are many popular, production-ready JavaScript libraries out there that I trust less than pre-1.0 Crystal to work properly between updates.