I appreciate this run-through. My tweet from six years ago is relevant once again: https://twitter.com/losvedir/status/636034419359289344.
I will say that one area where the array-language influence “stuck” was CSS. For a while I preferred one-line class definitions, with no line breaks between related classes, e.g.:
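Something along these lines (a hypothetical sketch, since the original snippet is missing; the class names are my own invention):

```css
/* One declaration block per line, related classes kept together with no blank lines */
.btn       { display: inline-block; padding: 4px 8px; border-radius: 3px; }
.btn-small { padding: 2px 4px; font-size: 12px; }
.btn-large { padding: 8px 16px; font-size: 16px; }
```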
But then that made me more receptive to Tailwind-style utility CSS, so that’s where I am now.
But array languages are so cool, and I really wonder how much is syntactic (terseness as a virtue, all these wonderful little operators) and how much is semantic (working on arrays, lifting operators to work at many dimensions). What would a CoffeeScript-like transpiler from more traditional syntax to, say, kdb/q be like?
IME, the real magic of APL, and what the numerous APL-influenced array languages have consistently lost in translation, is the concatenative, compositional, functional operators that give rise to idiomatic APL. They have taken the common use cases but forgone the general ones. For example, NumPy provides cumsum as a ready-made function, but APL & J provide a more general prefix-scan operator that can be used with any function, primitive or user-defined, giving rise to idioms like “running maximum” and “odd parity”, to name just a couple. Likewise, NumPy has inner, but it only computes the ordinary “sum product”, while APL & J have a generalized inner-product operator that lets the programmer easily define all sorts of unusual matrix algorithms that follow the same pattern.
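A rough Python sketch of the difference, using itertools.accumulate and functools.reduce as stand-ins for APL’s scan and generalized inner product (the min-plus product at the end is my own illustrative choice):

```python
from functools import reduce
from itertools import accumulate
from operator import add, xor

# APL's scan (\) takes *any* dyadic function, not just addition.
data = [3, 1, 4, 1, 5, 9, 2, 6]
running_sum = list(accumulate(data, add))          # what numpy's cumsum hardcodes
running_max = list(accumulate(data, max))          # the "running maximum" idiom
parity      = list(accumulate([1, 0, 1, 1], xor))  # the "odd parity" idiom

# APL/J's generalized inner product f.g: reduce with f over pairwise g.
def inner(f, g, A, B):
    cols = list(zip(*B))
    return [[reduce(f, map(g, row, col)) for col in cols] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
sum_product = inner(add, lambda x, y: x * y, A, B)  # ordinary matrix multiply
min_plus    = inner(min, add, A, B)                 # shortest-path style product
```

The point is that `accumulate` and `inner` are parameterized by the combining functions, just as APL’s operators are; NumPy instead ships the `add`/`multiply` instantiations as fixed primitives.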
This is not even to mention the other fantastic operators, like the recursive power-of-verb or the sort-of-monadic under, which AFAICT have no near equivalent in NumPy.
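Both are easy to sketch as higher-order functions in Python (the function names here are my own; these correspond to APL/J’s power and under operators, not to any NumPy API):

```python
import math
from operator import add

def power(f, n):
    """Power-of-verb: compose f with itself n times."""
    def run(x):
        for _ in range(n):
            x = f(x)
        return x
    return run

def under(f, g, g_inverse):
    """Under: transform the arguments with g, apply f, transform back."""
    return lambda *xs: g_inverse(f(*(g(x) for x in xs)))

collatz_step = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
power(collatz_step, 3)(7)   # 7 -> 22 -> 11 -> 34

# Multiplication "under log": exp(log(a) + log(b)) == a * b
mul = under(add, math.log, math.exp)
mul(4.0, 8.0)               # ~32.0
```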
Is there a simple way for other languages to replicate the success, or do the designers just need to be brilliant?
I doubt brilliance has much to do with it. It’s likely more about exposure to the concepts coinciding with the motivation required to model them in a language or library. Especially in a way that’s accessible to people who don’t have previous exposure. Learning the concepts thoroughly enough to make it simple, and doing the work required to create an artifact people can use and understand is really difficult.
You see similar compositional surprises when looking at some of the Category Theory and Abstract Algebra inspired Haskell concepts. I imagine the current wave of “mainstream” interest in Category Theory will result in these ideas seeping into more common usage and being exposed in ways that don’t require all the mathematical rigor.
It’s important to realize that APL-isms are beautiful, but they are especially striking to people because they are new to them. Set theory, the lambda calculus, and relational algebra are just a few things that have similarly inspired people in the past (and continue to do so!) and have spread into common programming to the extent that casual users don’t realize they came from formalized branches of mathematics. In my opinion this is a good thing!
Another exciting thing happening right now is the rediscovery of Forth. It has similar compositional flexibility but goes about things in a very different way, one that corresponds to combinatory logic. I would expect some people to reject the Abstract Algebra/Category Theory ideas as “too far removed from the hardware” while envying their compositional elegance. This will result in some very excited experimentation with combinatory logic working directly on a stack. Not that this hasn’t been happening in the compiler world with stack machines for decades… but it’s when non-specialists get hold of things that innovation happens and things get interesting.
I resonate with so much of this thread:
In the last two years my sense of (3) has grown more and more. I don’t know what to do with it quite yet.
I get the sense that there might be some languages or paradigms that might work well (best?) in an embedded context. Perhaps embedding APL into another language as a DSL or a DLL might provide a way to use the language for what it’s good for and avoid the difficult parts.
I haven’t tried using it this way, but GNU APL can be built as a library and linked into another project as an embedded scripting language DSL.
Posts 19-21, about the folly of Unicode for shortening programs, echo strongly for me. In e.g. Haskell and Agda, there are options for allowing Unicode to intermix with an otherwise-ASCII-only syntax, creating a rich experience. A Lojbanist took this too far recently, creating unreadable emoji soup.
You don’t have to take it too far, though.
∀(x ∈ S: x² ≤ 10)
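For comparison, the same predicate spelled in plain ASCII (here in Python), which conveys the idea without any special characters:

```python
S = {1, 2, 3}
all(x**2 <= 10 for x in S)   # True: every square in S is at most 10
```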
It doesn’t generalize very well. There are Unicode superscripts for each of the digits, but to find a superscript “x”, e.g. 2ˣ, you have to look in a completely unrelated block of characters, which still doesn’t contain the complete superscript alphabet.
Because that was the wrong generalization. Function application should still be explicit rather than devoting a whole Unicode block to a single power function. Superscripts are better used for denoting special entities like C¹⁴.