This is great progress, but only in JavaScript world would six months be considered “long-term support”!
Software years are like dog years, but JavaScript years are the dog years of software.
I didn’t even notice how short that is, wild. Node.js LTS versions are supported for 3 years, which is still short compared to Java but at least reasonable.
It’s really staggering. I also wonder how smooth upgrades will be from one LTS to the next or, $DEITY forbid, next+1. Moving from one LTS to a new one that’s seen major changes can be a challenge, and with the definition of “long-term” here it may be less disruptive to just keep pace with the weekly releases.
I think this has always been true, but then it creates enough work to hire a new person primarily dedicated to the upgrades. Depending on how you feel about supporting workers, this is a boon or just an extra cost. We can see how corporations see it considering widespread use of Java 8 still.
Oh boy, this is the classic OS/2 paradox. Or maybe you can just group it in with the Osborne effect. Now nobody has to worry about Deno because they can just write Node software and be content that it’ll work on both.
I think one of the biggest boons to the Go ecosystem is that C interop is just slow enough to tempt people to rewrite it in Go, but not so slow to be impractical for most purposes.
So it’s not just me? I’m always saddened and surprised every time a project that was “interesting because it’s different” makes this pivot. Usually it’s in the form of “now with POSIX” but that’s just due to my taste for wacky operating systems.
Reminded of WSL and WSL2.
I don’t think so? The explicit goal of WSL has always been to be able to run unmodified Linux software. They first tried a translation layer with WSL 1, but then switched that out for running the Linux kernel proper in WSL 2. The result is better Linux software compatibility (allegedly; I don’t use Windows so I can’t confirm), which is an unambiguous success. The value of WSL has never been that it’s an interesting new kind of OS API; it has always been to implement Linux as faithfully as possible.
It was “interesting because it’s different” to me, but it is no longer that. Not really a matter of opinion, here.
What did the API differences between Linux proper and WSL do that was interesting, other than just breaking some software?
I thought they were referring to the architecture of WSL1 as being “interesting” as opposed to the now “no longer interesting” VM approach of WSL2?
WSL1 was an actual Windows “subsystem” and had a much more interesting architecture than WSL2 (which is just a VM + 9P). (edit: Didn’t see you already mentioned this)

MS thought they could do it in reverse. I wonder if it eventually bites them in the rear?
You have it backwards. I don’t have to worry about Node and all its bullshit ecosystems anymore because I can just run deno [whatever] and do anything I need to do. I’m so sick of installing the same pkgs over and over for every project.

To clarify, my comparison to OS/2 went deeper than that; everyone can agree OS/2 was the superior product, but NT beat it because IBM double whammy’d itself by making OS/2 a less appealing product while also only being able to sell itself as being able to do what Windows does (in some ways better, in some ways worse). I feel like Deno is following down the same path: it’s technically superior, but doesn’t do enough to compete with its predecessor (and this is a common view; I’m sure you’ve seen similar comments in other Deno news articles).
I know this is a meme, but it’s just not accurate. I might grant that OS/2 was superior to Windows 3, which is probably where that story comes from, but NT? NT was a multiuser system with all the security benefits that entails, could run multiple DOS boxes at once (OS/2 could only do one), had a vastly more pleasant API, had true concurrency in its event queue (while OS/2 was preemptively multitasked, the event queue was cooperative, so an errant program could lock the GUI), had better Unicode support, had a better file system…it was honestly superior in almost every way.
I’m saying all that not to be pedantic, but rather because I think OS/2 is brought out too often as “proof” that having a compatible API is a death knell, whereas I think it failed for a whole pile of other reasons. If you want a better analogy, you might look at how Proton and WINE have largely killed native Linux gaming, or how that plus the Game Porting Toolkit is arguably killing Vulkan.
I think you have the DOS thing backwards, it was NT that could only run one DOS application at one time and OS/2 that could run multiple. It was one of the biggest bragging points about OS/2.
And, yes, I do agree that OS/2’s failure was multi-faceted, which I alluded to in my post. It’s certainly not that OS/2’s failure can be attributed only to its Windows compatibility; that wasn’t even really a major reason why it failed. But it was a significant straw that broke the camel’s back. That was my point: relying on compatibility with a competing project isn’t the only reason a project can fail, but it’s definitely a bad omen.
Didn’t know that. However, NT’s architecture was fundamentally different from DOS, so if I’m not mistaken it couldn’t actually run DOS applications “natively”. Instead, it would virtualize DOS…and in that sense, I believe it was capable of “running multiple DOS applications at one time”, but it wasn’t actually doing any kind of DOS-level multi-tasking, so I assume this was less efficient. Probably didn’t matter for Windows users at the time (I certainly never noticed, but I really only used the DOS mode to play old video games).
It’s kinda funny to me how IBM marketed this as a “bragging point”, because it unintentionally suggests that OS/2 is simply DOS with a new coat of paint, whereas NT was a completely new greenfield OS that solved a lot of the problems people had with DOS/UNIX/etc. If I were a developer around that time, I’d probably be a lot more interested in NT as well.
[Comment removed by author]
No? NT could always run multiple NTVDMs. OS/2 was limited to a single DOS session (“the coffin”) until OS/2 2.0 when it became a 386 thing.
OS/2 wasn’t a superior product (the reliability guarantees are far worse than NT’s, e.g. a Presentation Manager that’s easy to wedge), but it made sense for the systems and compromises of the time (systems with less than 16 MB of RAM, more DOS/Win16 apps than apps designed for a 32-bit world). As those compromises became less relevant, there was less reason to run OS/2 instead of something like 95 or NT.
I actually haven’t heard this. My experience is the opposite. Deno’s tooling, additional standard library, and security features make it well worth it on its own from my perspective!
Deno looks good, I pulled it down and ran a node app with it. All worked just fine.
But I dunno, I suffer from JavaScript fatigue. I don’t want to learn the quirks of a new technology that does the same thing as node.
It may be better, it may not; it’s certainly not as mature. Will it be around in 10 years? I don’t know. Will Node be around in 10 years? For certain it will.
I’m rooting for Deno because it lets you just avoid all the stuff that makes Node so fatiguing.
You can just write TS and run it. No NPM, no builders, no CommonJS nonsense.
It has a batteries-included stdlib inspired by Go. Built-in tooling for formatting, linting, testing. And a better security posture: Deno code has to be explicitly authorized to use ENV, fs, and network.
It’s far closer to the browser model which I always liked better than node.
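To make the permission model and built-in tooling concrete, here’s a minimal sketch (the file name and port are arbitrary choices; Deno.serve and the flags shown are real Deno APIs):

```ts
// hello.ts — a tiny HTTP server using only the built-in runtime API.
Deno.serve({ port: 8000 }, () => new Response("hello"));

// Run it with network access explicitly granted (and nothing else —
// no env, no filesystem access):
//   deno run --allow-net hello.ts
//
// Built-in tooling, no extra dev dependencies:
//   deno fmt hello.ts
//   deno lint hello.ts
//   deno test
```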
That’s a double-edged sword for me. It has all the cool stuff that Go has. Why not just use Go then?
Then again, if you also have frontend, or a bunch of legacy stuff, then maybe not.
Ecosystem, language and/or syntax preferences, security features, browser compatibility, and a host of other reasons!
That’s not to say there aren’t reasons to use Go, of course, but that naturally cuts both ways.
Because it has cool things that Go does not have, like sum types.
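A quick sketch of what that buys you: TypeScript’s discriminated unions act as sum types, with exhaustiveness checking that a Go type switch can’t give you (the Shape type here is made up for illustration):

```ts
// A sum type as a discriminated union: the compiler knows every variant
// and checks that the switch over `kind` covers all of them.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; w: number; h: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.w * s.h;
  }
}
```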
I hope our TypeScript project can make use of it. The tsc integration with vanilla Node feels really cumbersome and like a sidecar, if you can call it integration at all.
Node 22 has a flag to run TypeScript. It strips the types rather than interpreting the TypeScript code. I wonder, is that what Deno does too?
That’s how all TS implementations work, the types are for (optionally) checking your code during development and CI, but they’re just thrown away before being sent to the JS engine of choice.
Yes, Deno turns it into JavaScript and then passes the JavaScript to V8.
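Roughly what that stripping step does, as a sketch (in Node 22 the equivalent sits behind the --experimental-strip-types flag):

```ts
// What you write (TypeScript):
function greet(name: string): string {
  return `hello ${name}`;
}

// What the engine actually executes — annotations erased, nothing else
// changes, and no type checking happens at runtime:
//   function greet(name) {
//     return `hello ${name}`;
//   }
```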
So… who’s building the Deno fork with no Node compat or JSR?
Why would anyone?
If you don’t want to use these features, just don’t use them. They don’t get in your way if you don’t want them.
People are probably confused because in the Deno announcement talk by Ryan Dahl he really emphasized “no Node compat” as a selling point.
Very disappointed, if not surprised, by this turn. I really want npm to lose.
Haven’t touched Node in a long while… Is this better than Bun?
I would say so, yeah. Definitely subjective tho.
I can’t imagine any JavaScript runtime being better than what Sumner and team are doing with Bun. Here’s a thread disputing Deno’s benchmark results.
> Backwards compatibility with Node.js and npm, allowing you to run existing Node applications seamlessly

I am not a TS/JS/Deno/Node dev, but I don’t feel like this is conducive to the original “goal” of Deno in the first place? It strikes me as a very weird decision, though I have not been following Deno much since the early days.
Looks like incremental and seamless transition of existing code as well as integrating the NPM ecosystem turned out to be a higher priority.
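Concretely, the npm integration means you can import packages directly with the npm: specifier — a minimal sketch (chalk is just an arbitrary example package):

```ts
// No package.json, no npm install step: Deno resolves, downloads,
// and caches the package itself on first run.
import chalk from "npm:chalk@5";

console.log(chalk.green("an npm package, running under Deno"));
```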
Rust without C FFI would be equally dead in the water. Even with all the C-isms you need to handle in unsafe.
[Comment removed by moderator pushcx: Don't pick fights if you don't like a question.]
I guess my overall point is that I thought Deno NOT being Node was the whole point of it in the first place. Someone mentions further up that “no Node compat” was pitched as a selling point in the announcement.
But yeah, I probably should go back and read years worth of blog posts before having an opinion.
> But yeah, I probably should go back and read years worth of blog posts before having an opinion.

Can you stop doing this? I now retroactively have the feeling I wasted my time replying to you, as the whole question wasn’t in good faith anyway. Until that part I would have tried to give you a response, but I don’t think you even want that.

Edit: something glitched out - I saw this as a reply to my comment?!
Yeah, not sure what’s happened. I think there have been a few issues with comments being reparented to different threads or something recently. I definitely didn’t reply to you with that lol :)
Edit: The comment I originally replied to has been removed by mods and the user has also apparently been banned. Could have something to do with it?
Hey, thanks for understanding. I am able to reproduce this 100% with the threads view. Reported it to pushcx.
Weird yeah I can see what you mean from the threads view. From there it really does look like I replied to you.
I get the “eat what you kill” impulse but the main aspect I liked about Deno was a clean break from the painful cruft buildup in Node. And yes, I can just not use the Node stuff but that’s kind of like saying I can use C++ by just ignoring everything that’s not C. Now every Deno library and project I run across may have a bunch of Node cruft.
I must say that, being away from JS pretty much once jQuery stopped being a thing, it’s really refreshing to see how Deno is handling dependencies. I also liked the approach of Fresh quite a lot, just focusing on routes + islands is so much simpler than the whole boilerplate + configs you would need to start something in Angular/React/Vue.
I’m working on a personal project with Go as a backend, I was considering using HTMX and maybe Templ for the frontend (so I wouldn’t have to deal with Node’s many nonsense layers), but now I want to give Deno + Fresh a go.
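For anyone curious, a rough sketch of what the Fresh routes + islands model looks like (the file paths follow Fresh’s conventions; the components themselves are made up):

```tsx
// routes/index.tsx — the file path is the URL. Rendered on the server;
// by default no JavaScript is shipped to the browser for this page.
export default function Home() {
  return <h1>Hello from Fresh</h1>;
}

// islands/Counter.tsx — only components under islands/ get hydrated
// client-side, so interactivity is opt-in per component.
import { useState } from "preact/hooks";

export default function Counter() {
  const [n, setN] = useState(0);
  return <button onClick={() => setN(n + 1)}>clicked {n} times</button>;
}
```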
It would be cool if one of these JavaScript thingies would support CoffeeScript out of the box. Probably my favorite language, syntax-wise.
Wow, I thought CoffeeScript went out of style about when ES6 happened, because ES6 incorporated many of the nice CoffeeScript features, and by then many people had become fed up with CoffeeScript’s footguns.
Indeed. It was a huge hype that died out quickly because of that first reason. Not sure about the second one. I never really used it on a daily continuous basis, just sporadically. What were the footguns?
Truth be told, the need for a compilation step will disqualify it as a choice in many projects. For example, I still don’t like to use a JavaScript build system and still use script tags. CoffeeScript adds unnecessary steps in this case.
Regardless, the syntax has a few key elements that make it a joy to write and read: very few weird characters, indentation-based structure, implicit returns, clear function call syntax, and list comprehensions.
I always got the feeling that Coffeescript was quite popular for the crowd who’d rather be programming Ruby or some other language (wasn’t it part of Ruby on Rails even?). Coming from that direction, people might have different expectations.
The JS(-adjacent) mainstream developed in a different direction, though. First of all you’ve got the type hype, where anything without at least an optional/gradual type system can’t be considered serious enough, things moving more into a C#/Java direction than Crockford’s “JS-as-a-Scheme” idea or, well, CoffeeScript. Then there’s a definite trend toward being closer to the target languages and markups. For a while you had all kinds of more abstract HTML generation systems (S-expressions, Phlex, HAML etc.), but at least since JSX became popular, anything without matching angle brackets is sometimes viewed like I used to look at UML code generation. JavaScript catching up a bit means that generated code, even without source maps, isn’t too far off its TypeScript origins.
TypeScript being that popular, and even current JS being “good enough”, led to the decline of many transpiled languages, including Elm, PureScript etc.
My link log has
http://ceronman.com/2012/09/17/coffeescript-less-typing-bad-readability/
https://raganwald.com/2013/07/27/Ive-always-been-mad.html
heh, I miss the times when I could pretend I was writing LISP by replacing ( ) with spaces xd

AWS Lambda would be my use case, and everything looks neat except the 10s warmup in the Dockerfile. Doesn’t deno cache do the same without waiting too long or too short?
https://docs.deno.com/runtime/tutorials/aws_lambda/
cache is deprecated. It never really worked for me anyways. The issue is it’s very common for apps to have dynamic imports (Fresh has quite a few) which can’t be resolved without actually running the thing. 10s seems a bit arbitrary since it heavily depends on your internet speed, but I vendor dependencies beforehand which speeds it up a lot.
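To illustrate why dynamic imports defeat ahead-of-time caching, a contrived sketch (the env var and route paths here are made up):

```ts
// The specifier is computed at runtime, so static analysis — and hence
// any ahead-of-time caching step — can’t know which module to prefetch.
// It gets resolved and downloaded on first execution, e.g. during a
// Lambda cold start.
const route = Deno.env.get("ROUTE") ?? "home";
const handler = await import(`./routes/${route}.ts`);
```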
I have little to no experience with Lambda, but you should check out Deno Deploy depending on your use case. I’ve found it very slick.
Otherwise, I’m guessing you could tweak the dockerfile? Not too familiar with Lambda and caching.
I wouldn’t. Deno isn’t on there but both node and bun are atrocious here: https://maxday.github.io/lambda-perf/
I’d write it in Rust and use ECS if perf were critical, but a few hundred ms cold start is fine for me.
Ah, sure if you’re not after interactive applications then frankly who cares.