This is kind of unfair as it’s a pretty good technical post and only a sideways comment, but:
Javascript has its fair share of ‘wat’ moments. Even though most of them have a logical explanation once you dig in, they can still be surprising.
Everything has a reason; even the silliest behaviour does. Programming languages don’t get their behaviour from magic or divine intervention. I suppose you could redefine “logical” to mean “it has a reason”, but that’s not “logical” as in “it makes any kind of logical sense for [1, 2] + 3 to result in the string 1,23 rather than, say, [1, 2, 3]”.
Making fun of JavaScript for the standard floating point behaviour is silly and ignorant, but I suppose if there are 100 ridiculous things to genuinely make fun of (that are not infrequently handwaved away with “it has a logical explanation”) then it’s fairly easy to fall into the trap of “well, this must be JavaScript stupidity number 101!” without any examination. You see the same with e.g. PHP, where there are quite a few “wat” things, but people get carried away and fling all sorts of nonsense its way too.
By the way, here is some more fun stuff with floating points I happened to come across the other day.
Oh, I don’t know, I feel the author has done a pretty good job of explaining that this is not a JavaScript problem, but a problem of the standard. They even brought in a couple of other languages and examples.
I agree that JavaScript gets a lot of bad reputation because of things that are not under its control, but I didn’t think the article made that point.
Oh, I don’t know, I feel the author has done a pretty good job of explaining that this is not a JavaScript problem, but a problem of the standard.
When it comes to floats: sure. But that’s not what I mean. I mean all the other “wat”s that are fairly unique to JavaScript and are certainly not part of any standard (other than the JavaScript/ECMAScript standard that is).
Well said. You know how sometimes people say that having a stupid person on your side of an argument is worse than having nobody on your side? That’s how I feel when someone criticizes JavaScript or PHP for something that isn’t actually wrong or bad! >D
Nice post, but the comparison is a bit inefficient. To check if a/b >= c/d, you simply check if ad >= cb (given b and d are positive by convention, the comparator does not flip) instead of first bringing the ratios to the same denominator, which might lead to really large denominators.
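That cross-multiplication trick is easy to sketch in Python (the function name is mine, not from the post). With b and d positive, multiplying both sides of a/b >= c/d by b*d preserves the direction of the comparison:

```python
from fractions import Fraction

def ratio_ge(a: int, b: int, c: int, d: int) -> bool:
    """Check a/b >= c/d without constructing common denominators.

    Assumes b > 0 and d > 0, so multiplying both sides by b*d
    does not flip the comparison.
    """
    return a * d >= c * b

# Sanity check against exact rational arithmetic:
assert ratio_ge(1, 3, 1, 4) == (Fraction(1, 3) >= Fraction(1, 4))
assert ratio_ge(1, 4, 1, 3) == (Fraction(1, 4) >= Fraction(1, 3))
assert ratio_ge(2, 6, 1, 3) == (Fraction(2, 6) >= Fraction(1, 3))
```

It also trades the potentially large common denominator for two plain multiplications, which is the efficiency point being made.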
Rationals would be a great default number type. When fractions are available you have to go way out of your way: Fraction(1, 10). Why shouldn’t 0.1 mean exactly one tenth?
Floats could be the ones with the ugly syntax: float(0.1), which makes it explicit that you’re rounding 1/10 to the nearest float.
The problem is that only rationals whose denominator divides a power of ten get such nice syntax. For example, 1/3 cannot be written down in decimal point notation (it would be 0.333 followed by an infinite number of threes). So, it makes more sense to use the fractional syntax for rational numbers and the decimal point syntax for floating-point numbers.
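Python’s fractions module (presumably where the Fraction(1, 10) spelling above comes from) shows the asymmetry nicely: the pretty decimal literal has already been rounded to a binary float by the time Fraction sees it, while the verbose constructor, and the decimally unwritable 1/3, are both exact:

```python
from fractions import Fraction

# The literal 0.1 is rounded to the nearest binary float before
# Fraction ever sees it, so this is NOT one tenth:
assert Fraction(0.1) != Fraction(1, 10)

# The out-of-your-way spelling is exactly one tenth:
assert Fraction(1, 10) * 10 == 1

# And 1/3, which has no finite decimal spelling at all, is a
# perfectly well-behaved rational:
assert Fraction(1, 3) * 3 == 1
```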
Of course, you can have your cake and eat it too: Lisps use exactly this syntax: 1/3 for rational numbers. It’s slightly ugly when you get larger numbers, because you can’t write 1 1/3. Instead, you write 4/3, which appears rather unnatural. I think 1+1/3 would’ve been nicer and consistent with complex number syntax (i.e. 1+2i), but it does complicate the parser quite a bit. And in infix languages you can’t do this because of the ambiguity of whether you meant 1/3 or (/ 1 3). But one could conceive of a prefix syntax like r1/3 or so.
It’s unfortunate that the floating-point notation we humans prefer to use is base 10, while the representation in a computer is base 2. Ten has a prime factor, 5, that two lacks, so most decimal fractions have no exact binary form, hence the weirdness of how 0.3 gets read into a float.
notation we humans prefer to use is base 10, while the representation in a computer is base 2 […] hence the weirdness of how 0.3 gets read into a float
Decimal formats are a thing, supported by some databases and used for some financial work. Ultimately they don’t solve the problem of “I want to represent arbitrary fractions with perfect fidelity”. That being said, you can go further in that direction by using a more composite base: neither 10 nor 2, but maybe 12.
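Python’s standard decimal module is one such base-10 format (my example, not one the comment names): decimal literals become exact, but arbitrary fractions like 1/3 still get rounded to the context precision:

```python
from decimal import Decimal

# Tenths are exact in base 10, so the classic float surprise vanishes:
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")

# But 1/3 has no finite base-10 form either; it is rounded to the
# context precision (28 significant digits by default), so the
# round-trip error reappears:
assert Decimal(1) / Decimal(3) * 3 != Decimal(1)
```

The same trade-off would hold in base 12: thirds become exact, but fifths stop being exact.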
in infix languages you can’t do this […] prefix syntax like r1/3 or so
Better solution: use a different separator. E.g. in j: 1r3.
When you write 0.3 in Raku, it is considered a Rational, not a Float; that is why 0.2 + 0.1 == 0.3. The division operator also produces a Rational internally: (3/2).WHAT => Rat and (1/3).WHAT => Rat. Use scientific notation to create a double directly (the type will be Num). For arbitrary-precision rational numbers, use the FatRat type.
Well, they’re not entirely truthful either – Clojure for instance has solved this issue:
(+ 1/10 2/10) ;; => 3/10
(+ 0.1M 0.2M) ;; => 0.3M
I get the point of the post, but it seems a tad awkward to point out the failings of languages when some have already solved this and don’t need a custom implementation of ratios…
And Clojure solves it because it tries to follow in the tradition of older schemes/Lisps. I’ve ranted more than once to my colleagues that numbers in mainstream “app-level” (anything that’s not C/C++/Zig/Rust/etc) programming languages are utterly insane.
<soap-box>
Look, yeah: if you’re writing a system binary in C, or developing a physics simulation, or running some scientific number crunching, then you probably want to know how many bytes of memory your numbers will take up. And you should know if/when to use floats and the foot-guns they come with. (Even then, though: why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.)
But for just about everything else, the programmer just wants the numbers to do actual number things. I shouldn’t have to guess that the number of files in a directory will never go above some arbitrary number that happens to fit in 4 bytes. I shouldn’t have to remember that you can’t compare floats because I had the audacity to try to compute the average of something.
We’ve had this mantra for the last decade or so that “performance doesn’t matter”, “memory is cheap”, “storage is cheap”, “computers are fast”, etc., yet our programming languages still ask us to commit to a number variable taking up an exact number of bytes? Meanwhile, the runtime is running a garbage collector thread, heap-allocating everything, fragmenting memory, etc. Does anyone else think this is insane? You’re gonna heap-allocate and pointer-chase all day, but you can’t grow my variable’s memory footprint when it gets too large for 2, 4, or 8 bytes? You’re gonna lose precision on my third-grade arithmetic operations because you really need that extra performance? I don’t know about that…
</soap-box>
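For what it’s worth, Python (a mainstream, garbage-collected, app-level language) already does the integer half of this: ints grow as needed instead of wrapping, while its floats still carry the averaging foot-gun:

```python
# Integers just do number things: no arbitrary 4-byte ceiling,
# no silent wraparound.
file_count = 2**31 - 1   # the classic 32-bit limit...
file_count += 1          # ...sails right past it
assert file_count == 2**31

# Floats, though, still punish third-grade arithmetic:
avg = (0.1 + 0.2) / 2
assert avg != 0.15               # equality comparison betrays you
assert abs(avg - 0.15) < 1e-15   # compare with a tolerance instead
```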
Even then, though: why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.
Oh man you just dredged up some bad memories. I was working on modifying another grad student’s C++ simulation code, and the performance we were seeing was shocking. Too good, way too good.
Turns out that they’d made some very specific assumptions that weren’t met by the changes I made so some numbers overflowed and triggered the stopping condition far too early.
(Even then, though: why the HELL do most languages silently overflow on arithmetic instead of exploding?! I don’t want my simulation data to be silently corrupted.)
In an alternate universe:
(Even then, though: why the HELL do most languages insert all these bounds checks on arithmetic that slow everything down?! I know my simulation isn’t going to get anywhere near the limits of floating point.)
Sure. But isn’t the obvious solution for this to be a compiler flag?
Less obvious is what the default should be, but I’d still advocate for safety as the default. Sure, you’re not likely to wrap around a Double or whatever, but I’m thinking more about integer-like types like Shorts (“Well, when I wrote it, there was no way to have that many widgets at a time!”).
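A toy version of that safe-by-default checked arithmetic, sketched in Python (the function name and the 16-bit width are made up for illustration):

```python
def checked_add_i16(a: int, b: int) -> int:
    """Add two signed 16-bit values, exploding instead of wrapping.

    Safety as the default: the wrapping behaviour would be something
    you opt into (e.g. via a compiler flag), not the silent norm.
    """
    result = a + b
    if not (-2**15 <= result < 2**15):
        raise OverflowError(f"{a} + {b} does not fit in 16 bits")
    return result

assert checked_add_i16(1000, 2000) == 3000

# "There was no way to have that many widgets at a time!"
try:
    checked_add_i16(30000, 10000)
except OverflowError:
    pass  # caught loudly, instead of silently becoming a negative count
```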
and i’m pretty sure that’s the case with Haskell too
it might be a fair criticism to question why the simplest and most obvious syntax (i.e., no suffix) doesn’t default to arbitrary-precision rationals, as is the case with integers in languages like Ruby, Haskell, etc.
Okay. I didn’t feel like these were in focus, I thought most of the stuff was focused on the library.
Unnatural? Nah. Maybe a bit improper, though.
Matter of opinion.
True, but I don’t know of any popular programming language which uses decimal formats as the native representation of floating-point numbers.
How does it distinguish between a division operation on two numbers (which may well result in a rational number) and a rational literal?
REXX?
Raku does, as far as I know.
Rational number in Raku from Andrew Shitov course
Floating-point number in Raku from the same course
Holy heck they didn’t half-ass this
same thing with Ruby