My bank’s password requirements for their “internet password” are laughable; 8 characters max., only lowercase alphanumeric characters. It gave me so many headaches to figure out which characters they actually accepted, because they didn’t give any hints about what the actual requirements are…
It’s always the financial institutions*. An 8 character max is truly incredible. I don’t know much about storing passwords, but that kind of maximum makes me rather suspicious that they might be storing plaintext passwords…
That reminds me that I’ve had to enter my password for Fidelity over the phone using T9 codes (i.e. the chars “abcABC” all map to 2) and mapping all special characters to *. I know that they could be hashing the “simplified form” and storing it alongside my regular password when they receive it, but it certainly made me suspicious that they, too, were storing my password in plaintext. Of course they also have some archaic 16 character limit.
* which ironically are the ones for which I want the strongest password!
Hi. Troy Hunt has a good take on this and why it doesn’t matter.
https://www.troyhunt.com/banks-arbitrary-password-restrictions-and-why-they-dont-matter/
Thank you for putting my mind at ease over their strange password policies. I can feel justified in being upset and still reasonably confident in the security of my financial accounts :)
I feel like, given how common “identity theft” is (people successfully tricking banks into thinking they’re you so the bank will authorize sending them money, which through linguistic judo turns a theft from the bank into a theft from you personally), this blog post isn’t really that strong an argument. Especially the part about how the bank will pay for any “unauthorized transfers”. Good luck with that. I doubt the bank has ever paid out, and not because their security is that good.
Most identity fraud does not involve guessing banking credentials; it involves either compromising the endpoint or going through a weaker reset path. Strong passwords are intended to protect against offline attacks. If an attacker can get a server’s password database, then weak passwords may require only a few hundred thousand guesses to crack, which may be only a few seconds of compute time. For something like a bank, if an attacker is in a position to steal the password database, they are in a position to do a lot more (financial) damage.
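To put rough numbers on the “seconds of compute time” claim, here is my own back-of-envelope sketch, assuming (hypothetically) an unsalted fast hash and modern GPU hardware; real parameters vary a lot:

```scala
// Back-of-envelope cost of an offline brute force against an 8-character
// lowercase-alphanumeric password, assuming (hypothetically) an unsalted
// fast hash and ~1e10 guesses/second of attacker hardware.
val keyspace = math.pow(36, 8)       // 36 characters, 8 positions ≈ 2.8e12
val guessesPerSecond = 1e10          // assumed attacker throughput
println(keyspace / guessesPerSecond) // ≈ 282 seconds for the entire space
// A dictionary of a few hundred thousand common passwords falls instantly.
```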
it’s horrible! the other bank i use (which is the one my salary gets deposited into) luckily has (slightly) better limits.. only 10 characters, but it requires at least 2 special characters and allows uppercase.. still leaves something to be desired
BBVA’s maximum is 6. Whenever you need to do something on the phone with them, they’ll ask for a couple of characters in random positions, but because an operator slipped up once, I know they can see the whole password the whole time.
If anyone is interested in what the Collatz conjecture is, I can recommend the following Numberphile videos:
UNCRACKABLE? The Collatz Conjecture https://www.youtube.com/watch?v=5mFpVDpKX70
Collatz Conjecture in Color https://www.youtube.com/watch?v=LqKpkdRRLZw
Random and parallel by default!
Parallel, yes; random, not so certain.
For me, the issue is reproducibility. One thing that has been the bane of my life is flaky tests (for whatever reason). So something that deliberately adds to this flakiness doesn’t help.
Whenever I do tests which include randomness (for instance property-based tests, https://scalacheck.org/), I always end up specifying the seed so I can reproduce any issues.
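A minimal sketch of pinning the seed in ScalaCheck, assuming a recent version (1.14+) where Test.Parameters.withInitialSeed is available:

```scala
import org.scalacheck.{Prop, Test}
import org.scalacheck.rng.Seed

// Any property will do; this one checks that reversing twice is identity.
val prop = Prop.forAll { (xs: List[Int]) => xs.reverse.reverse == xs }

// Pin the generator seed so any failure can be replayed exactly.
val params = Test.Parameters.default.withInitialSeed(Seed(42L))
println(Test.check(params, prop).status)
```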
RSpec (and I suspect most test frameworks) lets you specify a seed for a run. When not specified, it will pick one at random. So by default you have your tests run in random order, but when you have a flaky test you can specify the seed and run tests in that particular order to debug it. Best of both worlds.
If you’re able to reproduce test failures (by failures returning the seed and runs able to take the seed as a parameter), and if you treat failures as actual bugs to fix (whether that be a bug in the application or a bug in the tests), then I don’t have a problem with randomised tests that pass sometimes and fail other times.
After all, the point of a testsuite is to find bugs, not to be deterministic. So if randomness is an effective way to find bugs, then I’m all for it!
The issue I would bring up in the case of random test order is that while it does find bugs, the bugs it finds are often not in the application code under test – instead it tends to turn up bugs in the tests. And from there it gets into the debate about cost/benefit tradeoffs. If putting in the time and effort (which aren’t free) to do testing “the right way” offers only small marginal gains in actual application code quality over doing a bare-minimum testing setup – and often, the further you go into “the right way”, the more you encounter diminishing returns on code quality – then should you be putting in the time and effort to do it “the right way”? Or do you get a better overall return from a simpler testing setup and spending that time/effort elsewhere?
Tests are code. Code can have bugs. Therefore, tests can have bugs.
But as with any bug, it’s hard to say in the general case what the consequences are. Maybe it’s just a flaky test. Maybe it’s a bug that masks a bug in the application code.
You’re also right that software quality is on a spectrum. A one-off script in bash probably doesn’t need any tests. And a formal correctness proof is very expensive in time and money. Yet another blog engine probably doesn’t need that level of correctness.
Of all the things one can do to improve software quality (including test code quality), running tests in random order is not that expensive.
Admittedly a truism, but: If the “test” is flaky, then it is not a test.
Exactly. This is very good practice.
I have taken it a step further: When I build randomized tests, I have the failing test print out the source code of the new test that reproduces the problem.
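A minimal sketch of that trick (all names here are my own illustration, not the poster’s actual code): on failure, print a ready-to-paste deterministic test containing the concrete failing input.

```scala
import scala.util.Random

// Hypothetical function under test, with a planted bug (drops duplicates).
def mySort(xs: List[Int]): List[Int] = xs.distinct.sorted

// Randomized check that, on failure, prints the source of a regression
// test that reproduces the problem deterministically.
def checkOnce(rng: Random): Unit = {
  val input = List.fill(rng.nextInt(6))(rng.nextInt(10))
  val got = mySort(input)
  if (got != input.sorted) {
    println(
      s"// Paste into the suite to lock in this failure:\n" +
      s"assert(mySort(List(${input.mkString(", ")})) == List(${input.sorted.mkString(", ")}))")
    sys.error(s"mySort failed on $input: got $got")
  }
}

(1 to 100).foreach(_ => checkOnce(new Random))
```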
Catch2 always prints its seed, so you can let it generate a random seed for your normal test runs, but if a test fails in CI or something, you can always look back at the log to see the seed and use that for debugging.
If you can’t reproduce a failing test because of randomness, that’s a failure of the test runner to give you the information you need.
Have you tried rr? Especially with its chaos mode enabled it’s really helpful for fixing nondeterministic faults (aka bugs).
Do you mean this url?
I do! (oops)
If you’re interested in visualisations of pi, see Martin Krzywinski: http://mkweb.bcgsc.ca/pi/ He recently did some music based on pi with Gregory Coles; Numberphile2 did a podcast on it: https://www.youtube.com/watch?v=JXyO8GB_mkw (The first and last digits of pi)
Really fantastic video for someone like me who has very limited formal math knowledge but quite a bit of programming experience - it’s so good because it goes into detail about the concepts and the way the approaches are being constructed with just the right amount of background and context. Really enjoyed it, thanks for posting.
3Blue1Brown is always good.
I once had a very confusing conversation with a recruiter who wanted 15 years of Python experience. That’s technically possible of course, but not really for $50/hr part-time contracts. I actually asked her if she found anyone, and she said yes, she found the guy who literally wrote the book on Python! Because this was in a smaller city, I pretty quickly googled him… he had written a self-published book on Python… and technically had done some Python work 15 years before…
If someone gets really particular about years of experience, it’s best to just run the other way!
Some people have 20 years of experience in a technology. Others have the same year twenty times.
One pattern that has worked well for us is what we call the “instrumentation” pattern (bad name, can’t think of a better one). For instance, we use it for logins, where we want to 1) log something, 2) write an audit trail and 3) update a JMX metric. So for the class Session (session management) we have a SessionInstrumentation class containing (for instance) a failedLogin(String username) method. This does everything necessary, and keeps the Session class itself clean. The business logic stays in Session; the logging/auditing/metrics are in the SessionInstrumentation class. A sketch of the shape is below.
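A minimal sketch of that shape (in Scala for consistency with the rest of the thread; everything beyond Session, SessionInstrumentation and failedLogin is my own illustration):

```scala
trait Logger     { def warn(msg: String): Unit }
trait AuditTrail { def record(event: String, who: String): Unit }
trait Metrics    { def increment(name: String): Unit }

// All cross-cutting concerns for session events live in one place.
class SessionInstrumentation(log: Logger, audit: AuditTrail, metrics: Metrics) {
  def failedLogin(username: String): Unit = {
    log.warn(s"failed login for $username")  // 1) log something
    audit.record("login.failed", username)   // 2) write an audit trail
    metrics.increment("logins.failed")       // 3) update a (JMX) metric
  }
}

// Business logic stays clean; it just reports events to the instrumentation.
class Session(instr: SessionInstrumentation) {
  def login(username: String, password: String): Boolean = {
    val ok = authenticate(username, password)
    if (!ok) instr.failedLogin(username)
    ok
  }
  private def authenticate(u: String, p: String): Boolean = false // stub
}
```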
Patching things because of a log4j CVE…..
Not exactly “Web” presence, but as (some) old Usenet posts are accessible on the WWW, mine dates from October 1990.
Google used to have archives of a small fraction of my Usenet and ARPAnet posts from the mid-80s … OK, here’s a post to SF-LOVERS about the recent film “Real Genius”.
Oh, “outside of archives” … well, the unofficial FAQ of the record label 4AD was originally put together by me in the late ‘80s and is still online after going through a number of other owners.
The image on the (disused) home page of my personal website is a picture of my then-house, taken in 1994 or so.
Hail Bob!
I was able to find some Usenet postings of mine from 1995, but I couldn’t find any prior, which is a shame.
Same here. Apr 1990. https://groups.google.com/g/alt.mud/c/dHrSKkpSzYI/m/cvNeJ6jzv1wJ
Just for info, I have no connection with this: https://www.manning.com/books/math-for-programmers
In order of importance…
Executable documentation on how to use this API. Always up to date, with worked examples. Conversely, if you find them hard to read, understand and use as such… that’s telling you something important about your unit test design.
Compelling cases. They’re a list of use cases / required behaviours that compel the design and implementation. As a trivial example, testing !even(1) and even(2) is compelling. Any more test cases are superfluous to compel the design.
Compelling API design. If an API is hard to test, it is hard to use and reuse. Fix your API design.
Excluding test cases. e.g. there is an infinite supply of integer sequences (http://oeis.org/) which do not have 1 and do have 2. Adding test cases like these proves our implementation excludes large swathes of likely wrong implementations, e.g. !even(1999), even(4000).
Corner cases. even(0), !even(UINT_MAX).
Property tests. For 100 random i: even(i) xor even(i+1). (See the sketch after this list.)
Defect localization. With a superbly designed API and unit test suite, tell me which test failed, and I will immediately tell you the file and line number of the bug.
Capturing and locking required behaviour to prevent regressions.
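A sketch of the property-test bullet in ScalaCheck, with the even implementation as my own stand-in:

```scala
import org.scalacheck.{Prop, Test}

def even(i: Int): Boolean = i % 2 == 0

// Compelling, excluding and corner cases from the list above.
assert(!even(1) && even(2) && !even(1999) && even(4000) && even(0))

// Property: exactly one of two consecutive integers is even.
val alternates = Prop.forAll { (i: Int) => even(i) ^ even(i + 1) }
println(Test.check(Test.Parameters.default, alternates).passed)
```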
This creates some cognitive dissonance with the V-model. Are unit tests part of testing or part of the documentation?
I guess the logician’s answer is fitting: Yes.
I would add:
Testing exceptional conditions, which is usually very hard to do with integration tests.
Speed of execution, to speed up the feedback loop. Integration tests may require a server, which needs to start up, and they need an environment. Unit tests give you the opportunity to test your code more quickly.
So much this. The human brain can stay so much more engaged and learn so much faster when the feedback loop is tight.
Unfortunately lobste.rs no longer lets me edit my post, otherwise I would add yours and /u/thirdtruck and /u/jonahx ’s points as well.
Good list.
I’d add, perhaps at the top: to force you to make all your dependencies explicit. This is ofc related to the bullet you called “Compelling API Design.”
Another important point not mentioned in the article: To have a fast test harness. As you’re refactoring, or just developing, you can have near instant feedback when you’ve broken something with a good UT suite. This isn’t feasible with integration or e2e tests. So I’d also add “enables developer productivity.”
Squirrel. Because I learned during the era of Embedded Squirrel, in Ingres.
Messy by Tim Harford. http://timharford.com/books/messy/ Tim is the presenter of the BBC More or Less programme (http://www.bbc.co.uk/programmes/p02nrss1/episodes/downloads), which I love, and this describes how sometimes the fact that the world is messy can promote creativity, and we shouldn’t be too hung up on trying to make everything tidy.
Hey, if anyone wants to have job security in the future, how about being an IoT security expert?
Trust me when I say this: Nobody wants to pay for IoT security.
Oh, they’re going to pay, one way or the other.
They don’t pay until there are regulations or court-level liability for them. They also don’t care until that’s true. Both the embedded market in general and IoT are terrible places to try to sell people on security. Also remember that the whole reason 8- and 16-bitters still sell to the tune of billions of dollars is that management on the buying end wanted to shave every dollar or penny they could per unit. It’s all extra profit for them. That’s all they care about.
There are occasionally companies using secure hardware or software as a differentiator. I’ve seen switches using INTEGRITY-178B, embedded devices using Java processors, a Bitcoin wallet with a tamper-resistant MCU, a workstation hypervisor in SPARK Ada, GENU building a product line on OpenBSD, and so on. Outside defense contractors, most of them go out of business eventually or barely scrape by. People buy the insecure stuff instead.
No - they won’t.
Their customers will, but they won’t.
When people are conditioned to buy knockoffs on Amazon because “cheap electronics”, they don’t even know who the reseller is, let alone the manufacturer of the device.
I know this is a really stupid question, but can you get round this by printing on yellow paper?
I bet someone with the original could get the paper and ink to respond differently under different lighting. But on a scanned copy, you’d have to have a quite good, sensitive color scan to see it.
As these are in most printers, your question sounds like the setup for a great afternoon experiment. I hope to upvote your blog post about it. :)
There are a number of interesting things about this story. First is the phrase “Human Error”. For me, this is very similar to the recent https://www.reddit.com/r/cscareerquestions/comments/6ez8ag/accidentally_destroyed_production_database_on/ story. There could be a staged recovery (you’re not allowed to start all the machines at once, for instance).
Secondly, why wasn’t this tested? You need to test disaster recovery scenarios. Especially for something as important as this.
Thirdly, I don’t like the leaking: “A leaked BA email last week had pointed the finger at a single contractor.” Let’s blame someone else. I can see someone being fired for this.
The contractors also deny that what BA claims is the cause actually is the cause: https://www.theguardian.com/business/2017/jun/02/ba-shutdown-caused-by-contractor-who-switched-off-power-reports-claim
But who knows.
I particularly like QualifierAnnotationAutowireCandidateResolver and SimpleBeanFactoryAwareAspectInstanceFactory as well
Also, it seems very similar to HOCON.
Have you considered support for remote+dynamic config? I’ve co-developed such a thing for Java (https://github.com/irenical/jindy) but we never found a Node equivalent. In the Java ecosystem a lot of the pieces were already there, so this is basically an API and glue. If one wanted to centralise configuration using Consul, etcd or what have you, what would you suggest? What about if you need to react to runtime config changes? Do you see these features as something that would make sense in HConfig? I’m really trying to avoid having to write a “jindy.js” from scratch.
Hmm, I don’t know if that would make sense in HConfig. It’s supposed to just be a parser for a language. However, it would totally be possible to build something on top of it which uses HConfig as the configuration language. Reacting to runtime config changes would probably also be up to the application or another library to implement; just parse the file again whenever you want to reload it, and do whatever application logic is necessary to incorporate the updated configuration.
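A sketch of that “build it on top” idea, using the JDK’s file-watching service (parseHConfig here is a hypothetical stand-in, not HConfig’s actual API):

```scala
import java.nio.file.{FileSystems, Path, Paths, StandardWatchEventKinds}

// Hypothetical stand-in; a real integration would invoke the HConfig parser.
def parseHConfig(path: Path): Map[String, String] = Map.empty

// Re-parse the config file whenever it changes and hand the result to the app.
def watchAndReload(path: Path)(onChange: Map[String, String] => Unit): Unit = {
  val watcher = FileSystems.getDefault.newWatchService()
  path.getParent.register(watcher, StandardWatchEventKinds.ENTRY_MODIFY)
  while (true) {
    val key = watcher.take()     // blocks until something in the dir changes
    key.pollEvents()             // drain the events for this key
    onChange(parseHConfig(path)) // naive: any change in the dir triggers reload
    key.reset()
  }
}

// Usage: watchAndReload(Paths.get("app.hconf"))(cfg => println(cfg))
```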
We use HOCON, which, used with the Typesafe Config jars, has wonderful support for overriding values. The idea is that each jar can have its own reference.conf, and then you can add a new config file with a fallback to the default. So you can read config from Consul or etcd and then use it to override the default values. We use Typesafe Config a lot and it makes our lives a lot easier. We use a single object which contains all of the config values we care about. This is immutable and typed. If we need to re-read (because of changes in Consul), then we re-read and we have a new immutable object.
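A minimal sketch of that override pattern (the remote values are faked with a literal string where a Consul/etcd fetch would go):

```scala
import com.typesafe.config.{Config, ConfigFactory}

// HOCON text fetched from Consul/etcd (assumed already retrieved).
val remote: Config = ConfigFactory.parseString("db.pool-size = 20")

// Remote values override the reference.conf defaults on the classpath.
val config: Config = remote.withFallback(ConfigFactory.load()).resolve()

// One immutable, typed snapshot of everything the app cares about.
final case class AppConfig(poolSize: Int)
val snapshot = AppConfig(config.getInt("db.pool-size"))
// On a change in Consul: re-read, rebuild, and swap in a new AppConfig.
```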
Over the years, there have been a number of these “warts of Scala, problems with Scala, look at how bad Scala is” posts. I think it is really healthy to have these debates, and it is a sign of a language that people care about. Just to be clear, I am *not* attacking Haoyi in any way.
Does this happen to the same extent in other programming communities/languages?
If not, what makes Scala special in this regard? Is it where the language comes from (EPFL/Martin Odersky), what Scala is trying to do (OO / functional hybrid), or just the people that are in the community?
We occasionally have threads like this on the Haskell Reddit:
https://www.reddit.com/r/haskell/comments/4f47ou/why_does_haskell_in_your_opinion_suck/
As someone who has worked with Scala for the past 5 or 6 years, I’ll say that Scala is just poorly designed.
(What’s wrong with withFilter?) What’s wrong with implicits, or CanBuildFrom, assuming you’re not referring to the silly signature it gives to functions like map? (edit: these are not naïve questions, I’m well aware of their design reasons and know they can be problematic, but I don’t consider them as warts)
Not OP, but the issue with CanBuildFrom (and similar “magic cookies” like ExecutionContext) is that they not only force implementations of the collections API into one specific approach, but also conflate the data you put in, the transformations you intend to run and how these operations are executed.
That’s why Scala’s collections will never be remotely efficient – CanBuildFrom prescribes that all implementations are supposed to do completely pointless work.
That design approach is also the main reason for the bizarre proliferation of almost identical, but incompatible APIs in Scala.
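For context, this is the shape being complained about: in Scala 2.12 and earlier, every collection transformation threaded an implicit builder factory through its signature. A sketch mirroring scala.collection.TraversableLike (compiles against 2.12; CanBuildFrom was removed in the 2.13 collections redesign):

```scala
import scala.collection.generic.CanBuildFrom

// The 2.12-era signature of map: the implicit CanBuildFrom decides how,
// and into what collection type, the result is built.
trait MyTraversable[+A, +Repr] {
  def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That
}
```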
Yeah, it’s much more obviously painful when you’ve tried other ML languages, even F#, which honestly drinks from a sicker pond.
Popularity? Scala is probably the most popular[1] advanced language. Haskell has these debates too, but it’s not as popular, so they’re not as visible outside the community.
Such discourse is absolutely necessary for the advancement of the language, but to those not familiar with the language or the discourse, it gives the impression that everything sucks and things are hopeless. But in reality, for both Scala and Haskell, things are looking up.
[1] speculation based on stackoverflow careers job postings + TIOBE + github