@cadey wrote it but didn’t post it. I’m assuming @calvin shared it because he thought it was genuinely interesting content, not to bully V.
It’s not just that V is still shit in spite of bold claims, it’s that V is still shit in spite of bold claims and incredibly toxic behavior by its community. The interplay between all three (toxicity, grandiosity, vaporware) makes this analysis especially interesting to me.
I use Dvorak and this is why I love the layout-agnostic keybindings of Emacs. I could never get used to evil-mode or anything. I have ersatz Emacsen (like mg) wherever I can’t have GNU Emacs just so that I can continue to use the familiar keybindings.
By the way, there’s a plugin for Dvorak keybindings in vim.
You should know that you can use these skins with Audacious.
Does it actually improve performance in Scheme? I would guess loop unrolling is easy to implement. Does quasisyntax survive R7RS?
Only if the compiler has a strategy to precompute arithmetic operations at compile-time. Even then, there are much better optimisations you can do without sacrificing run-time flexibility (check out the link to math/number-theory at the bottom).
And no, R7RS Small only mandates syntax-rules. Implementations independently advocate for syntax-case, explicit/implicit renaming and syntactic closures. But yes, it’s pretty much trivial to unroll loops with a macro system that allows you to break hygiene.
I like these scope-coloured braces. How are they achieved here?
I don’t know how the author formatted them, but vim has rainbow parentheses, and emacs has rainbow delimiters. (The vim version also works on arbitrary delimiters.)
It’s a quick and dirty script I put together here.
It’s a feature preview. Sounds like you’re in the B group. I opted into it. I believe you can disable it or send feedback about it.
Edit: Seems like they rolled it out for everyone today. The options are gone from the feature preview menu.
I literally just found the menu last night and enabled it… 😹
Worth reading to the end just for the totally evil code snippet.
It was kind of foreshadowed to be evil when the author named it “skynet.c” I guess.
Reminds me of the Java code we used to see around 2000.
With a RuntimeException try-catch at the top that just printed the exception and continued like nothing happened.
How many bad bugs, and how much data corruption and weirdness, did that practice cause?
How is that any different from kubernetes and “just restart it”? It’s mostly the same practice ultimately, though with a bit more cleanup between failures.
I guess it depends on whether you keep any app state in memory. If you’re just funnelling data to a database maybe not much difference.
Now I’m starting to wonder what the correct code should look like (as opposed to jumping 10 bytes ahead).
Read DWARF to figure out next instruction?
Embed a decompiler to decode the faulty opcode length?
Increment the instruction pointer until you end up at a valid instruction (i.e., you don’t get SIGILL), of course ;)
I have code that does this by catching SIGILL too and bumping the instruction pointer along in response to that. https://github.com/RichardBarrell/snippets/blob/master/no_crash_kthxbai.c
Brilliant. I’m simultaneously horrified and amused.
That’d be a pretty great nerdcore MC name.
If you want to skip the offending instruction, à la Visual Basic’s “On Error Resume Next”, you determine the instruction length by looking at the code and then increment by that.
Figuring out the length requires understanding all the valid instruction formats for your CPU architecture. For some it’s almost trivial: AVR, say, has 16-bit instructions with very few exceptions for things like absolute call. For others, like x86, you need a fair bit of logic.
I am aware that the “just increment by 1” suggestions below are intended as a joke. However, I still think it’s instructive to point out that incrementing blindly might lead you to start decoding in the middle of an instruction. That might still decode as a valid instruction, especially in dense instruction set encodings. In fact, jumping into the middle of operands was sometimes used on early microcomputers to achieve compact code.
Here’s a more correct approach: https://git.saucisseroyale.cc/emersion/c-safe
Just don’t compile it with -pg :)
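To make the AVR remark a few comments up concrete, here is a minimal sketch (in Haskell, purely for illustration; the opcode masks are my reading of the AVR instruction set manual and should be treated as an assumption to verify, not an exhaustive decoder) of how little logic the length decision takes on that architecture. Only LDS, STS, JMP and CALL occupy two 16-bit words; everything else is one.

import Data.Bits ((.&.))
import Data.Word (Word16)

-- Length in bytes of an AVR instruction, given its first 16-bit word.
avrInsnLength :: Word16 -> Int
avrInsnLength w
  | w .&. 0xFE0F == 0x9000 = 4  -- LDS Rd, k
  | w .&. 0xFE0F == 0x9200 = 4  -- STS k, Rr
  | w .&. 0xFE0E == 0x940C = 4  -- JMP k
  | w .&. 0xFE0E == 0x940E = 4  -- CALL k
  | otherwise              = 2  -- everything else is a single 16-bit word

The x86 equivalent of avrInsnLength is where the “fair bit of logic” comes in: prefixes, ModRM, SIB and immediate sizes all change the length.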
I wonder why he went straight for the lex/parse strategy when S-expressions are so simple that all you need is one character lookahead.
I am not sure what the question is. This implementation in fact does not read more than one character ahead since, as you pointed out, it is unnecessary. In particular, it does not read the entire file into memory at once.
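As a sketch of the one-character-lookahead point (written in Haskell here rather than the implementation under discussion, and ignoring strings, quoting and comments), the reader below only ever inspects the next character to decide what to do:

data SExpr = Atom String | List [SExpr] deriving Show

-- Reads one expression and returns the leftover input; a single character of
-- lookahead is enough to pick a branch.
readSExpr :: String -> Maybe (SExpr, String)
readSExpr input = case dropWhile isSpace' input of
    ""         -> Nothing
    (')' : _)  -> Nothing                 -- unbalanced close paren
    ('(' : cs) -> readTail cs []
    cs         -> let (tok, rest) = break (\c -> isSpace' c || c `elem` "()") cs
                  in Just (Atom tok, rest)
  where
    isSpace' c = c `elem` " \t\n\r"
    readTail cs acc = case dropWhile isSpace' cs of
      (')' : rest) -> Just (List (reverse acc), rest)
      rest -> do
        (e, rest') <- readSExpr rest
        readTail rest' (e : acc)

-- readSExpr "(+ 1 (* 2 3))" == Just (List [Atom "+", Atom "1", List [Atom "*", Atom "2", Atom "3"]], "")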
This is starting to look like bullying. I think the post is fine, but posting it here to point and gawk isn’t. :(
If someone makes bold claims about a project, then there’s nothing wrong with pointing out when those claims aren’t accurate.
If we had a post for every software project that was still shit in spite of bold claims, the frontpage would be worthless.
It’s fair to say we don’t need a post for every piece of software that isn’t living up to its claims, but that doesn’t make this bullying.
I think there are two differences here:
What toxicity are you talking about?
If V is vaporware then how are so many of us using it for our projects? https://github.com/vlang/v
Here’s a cool one: a first-person shooter game bot https://github.com/EasyHax/Vack
You, in this thread, right now.
This is the only time I’m going to reply to you. And I only replied because nobody else has explicitly called out your hostility.
Defending V from misleading attacks – so misleading that the author has just retracted her original post – is not exactly a defensible definition of “toxic”.
I don’t like seeing slander against important projects I care about. Surely you can understand!
All I can see here is that the V community lives up to its infamy by bullying someone into taking down a critical piece on it.
I retracted the post because of people like you. It wasn’t just you, but I just wanted an end to it.
If your post had been factual, then I wouldn’t have criticized it…
I hope you are in a better headspace for your future posts. I’m sure many of us would love to read more articles about WasmCloud, which is honestly one of the coolest projects I’ve ever heard of; would solve many important problems at once, and do so without reinventing the wheel.
(Dear readers: FWIW I didn’t ask for the post to be taken down. I was arguing against its content.)
Did I miss some drama about this project?
Yes.
Which claims aren’t accurate, specifically?
As far as I can tell, compiling 1.2 million lines of code in a second (second bullet point). I would also like to see some citations backing up the safety guarantees with respect to C to V translation. C has way too many gotchas for a bold claim like that. Also, the safety guarantee with the C and Javascript backends.
You can download the project, generate 1 million lines of code using tools/gen_1mil.v and build it in 1 second with v -x64 x.v
“Safety guarantee with the C backend” makes no sense, because you write the code in V, not in C. V won’t allow you globals, shadowing, etc. C can be thought of as an assembly language.
If only this benchmark actually worked. First I compiled the generator using:
./v -prod cmd/tools/gen1m.v
This already takes 1.3 seconds. I then ran the resulting binary as follows:
./cmd/tools/gen1m > big.v
The size of the resulting file is 7.5 megabytes. I then tried to compile it as follows:
./v -prod big.v
This takes 2.29 seconds, produces 7 errors, and no output binary. This is the same when using the -64 option, and also when leaving out the -prod option.
Even a simple hello world takes longer than a second in production mode:
v $ cat hello.v
fn main() {
println("Hello world")
}
v $ time ./v -prod hello.v
________________________________________________________
Executed in 1,44 secs fish external
usr time 1372,26 millis 350,00 micros 1371,91 millis
sys time 64,93 millis 28,00 micros 64,90 millis
In debug mode it already takes 600 milliseconds:
v $ time ./v hello.v
________________________________________________________
Executed in 666,51 millis fish external
usr time 613,46 millis 307,00 micros 613,15 millis
sys time 52,61 millis 26,00 micros 52,59 millis
With -x64 a debug build takes about 2 milliseconds.
Based on the above I find it hard to believe V would really be able to compile over a million lines of code in less than one second, even without optimisations enabled. I hope I am wrong here.
The hardware used is as follows:
maybe, but as icefox said I also feel like christine is giving V too much PR with it
I agree that it’s unnecessary, though I can’t decide if it’s really bullying.
I’ve heard about V two times since the last post hit lobste.rs. One time I posted Christine’s observations, one time someone else did. I think the message is out there, and at this point, it’s really only noteworthy if something changes.
What message is out there? That the misleading attacks on V continue?
Yeah there is some interesting technical content in the post, but the tone is offputting.
I was amused to see it tagged “performance”, wonder if the pun was intentional on the submitter’s part.
Abuse that gets a lot of “positive engagement” is deemed entertainment.
I received a dead tree copy of SICP for my birthday, half of which I had already read as an e-book, so I’ll be reading that for a while. Looking for a job during the COVID recession is making things tough, but I’m trying to consider it an opportunity to go back to working on my silly side-projects. My newest side-project is rewriting the infamous Snoopy calendar Fortran program in Scheme, but with a twist: to give it a feeling of nostalgia, it calculates a year in the past century with a calendar layout identical to the current year’s and displays that. (E.g. it displays the calendar of 1964 in 2020.)
I hear CVS is hiring
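The core of the Snoopy calendar trick mentioned above fits in a few lines. This is a sketch in Haskell (using the time package) rather than the Scheme the project is written in, and the rule for choosing which matching year to display (the author mentions 1964 for 2020) is left open:

import Data.Time.Calendar             (fromGregorian)
import Data.Time.Calendar.OrdinalDate (isLeapYear)
import Data.Time.Calendar.WeekDate    (toWeekDate)

-- Two years share a calendar layout when January 1st falls on the same
-- weekday and both agree on being leap years.
sameLayout :: Integer -> Integer -> Bool
sameLayout a b = jan1 a == jan1 b && isLeapYear a == isLeapYear b
  where jan1 y = let (_, _, weekday) = toWeekDate (fromGregorian y 1 1) in weekday

-- All candidate years from the 1900s, newest first.
twinYears :: Integer -> [Integer]
twinYears current = [y | y <- [1999, 1998 .. 1900], sameLayout y current]

-- twinYears 2020 begins with 1992, 1964, 1936, ...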
Isn’t there a difference between functional code and side-effect-free code? I feel like, by trying to set up all of the definitions just right, this article actually misses the point somewhat. I am not even sure which language the author is thinking of; Scheme doesn’t have any of the three mentioned properties of immutability, referential transparency, or static type systems, and neither do Python nor Haskell qualify. Scouring the author’s websites, I found some fragments of Java; neither Java nor Clojure have all three properties. Ironically, Java comes closest, since Java is statically typed in a useful practical way which has implications for soundness.
These sorts of attempts to define “functional programming” or “functional code” always fall flat because they are trying to reverse-engineer a particular reverence for some specific language, usually an ML or a Lisp, onto some sort of universal principles for high-quality code. The idea is that, surely, nobody can write bad code in such a great language. Of course, though, bad code is possible in every language. Indeed, almost all programs are bad, for almost any definition of badness which follows Sturgeon’s Law.
There is an important idea lurking here, though. Readability is connected to the ability to audit code and determine what it cannot do. We might desire a sort of honesty in our code, where the code cannot easily hide effects but must declare them explicitly. Since one cannot have a decidable, sound, and complete type system for Turing-complete languages, one cannot actually put every interesting property into the type system. (This is yet another version of Rice’s theorem.) Putting these two ideas together, we might conclude that while types are helpful to readability, they cannot be the entire answer of how to determine which effects a particular segment of code might have.
Edit: Inserted the single word “qualify” to the first paragraph. On rereading, it was unacceptably ambiguous before, and led to at least two comments in clarification.
Just confirming what you said: Did you say that Haskell doesn’t have immutability, referential transparency, or a static type system?
I will clarify the point, since it might not be obvious to folks who don’t know Haskell well. The original author claims that two of the three properties of immutability, referential transparency, and “typing” are required to experience the “good stuff” of functional programming. On that third property, the author hints that they are thinking of inferred static type systems equipped with some sort of proof of soundness and correctness.
Haskell is referentially transparent, but has mutable values and an unsound type system. That is only one of three, and so Haskell is disqualified.
Mutable values are provided in not just IO, but also in ST and STM. On one hand, I will readily admit that the Haskell Report does not mandate Data.IORef.IORef, and that only GHC has ST and STM; but on the other hand, there are only GHC, JHC, and UHC, with UHC reusing some of GHC’s code. Even if one were restricted to the Report, one could use basic filesystem tools to create a mutable reference store using the filesystem’s innate mutability. In either case, we will get true in-place mutation of values.
Similarly, Haskell is well-known to be unsound. The Report itself has a section describing how to do this. To demonstrate two of my favorite examples:
GHCi, version 8.6.3: http://www.haskell.org/ghc/ :? for help
Prelude> let safeCoerce = undefined :: a -> b
Prelude> :t safeCoerce
safeCoerce :: a -> b
Prelude> data Void
Prelude> let safeVoid = undefined :: Void
Prelude> :t safeVoid
safeVoid :: Void
Even if undefined were not in the Report, we can still build a witness:
Prelude> let saferCoerce x = saferCoerce x
Prelude> :t saferCoerce
saferCoerce :: t1 -> t2
I believe that this interpretation of the author’s point is in line with your cousin comment about type signatures describing the behavior of functions.
I don’t really like Haskell, but it is abusive to compare the ability to write a non-terminating function with the ability to reinterpret an existing object as if it had a completely different type. A general-purpose programming language is not a logic, and the ability to express general recursion is not a downside.
A “mutable value” would mean that a referenced value would change. That’s not the case for a value in IO. While names can be shadowed, if some other part of the code has a reference to the previous name, that value does not change.
Consider the following snippet:
GHCi, version 8.6.3: http://www.haskell.org/ghc/ :? for help
Prelude> :m + Data.IORef
Prelude Data.IORef> do { r <- newIORef "test"; t1 <- readIORef r; writeIORef r "another string"; t2 <- readIORef r; return (t1, t2) }
("test","another string")
The fragment readIORef r evaluates to two different actions within this scope. Either this fragment is not referentially transparent, or r is genuinely mutable. My interpretation is that the fragment is referentially transparent, and that r refers to a single mutable storage location; the same readIORef action applied to the same r results in the same IO action on the same location, but the value can be mutated.
The value has been replaced with another. It is not quite the same thing as mutating the value itself.
From your link:
That means that soundness is preserved–a program can’t continue running if its runtime types are different from its compile-time types.
If we have to run the program in order to discover the property, then we run afoul of Rice’s theorem. There will be cases when GHC does not print out <loop> when it enters an infinite loop.
Rice’s theorem is basically a fancier way of saying ‘Halting problem’, right?
In any case, it still doesn’t apply. You don’t need to run a program which contains undefined to have a guarantee that it will forbid unsoundness. It’s a static guarantee.
Thank you for bringing up this point. Unfortunately, “functional programming” is almost always conflated, today, with lack of side-effects, immutability, and/or strong, static typing. None of those are intrinsic to FP. Scheme, as you mentioned, is functional, and has none of those. In fact, the ONLY language seeing any actual use today that has all three (enforced) is Haskell. Not even OCaml does anything to prevent side-effects.
And you absolutely can write Haskell-ish OOP in, e.g., Scala, where your object methods return ReaderT-style types. It has nothing at all to do with functional vs. OOP. As long as you do inversion of control and return “monads” or closures from class methods, you can do all three of immutable data, lack of side-effects, and strong types in an OOP language. It’s kind of ugly, but I can do that in Kotlin, Swift, Rust, probably even C++.
Why is Scheme functional? It’s clearly not made of functions:
Lisp Is Not Functional
I would say Haskell and Clojure are functional, or at least closer to it, but Scheme isn’t. This isn’t a small distinction…
That’s a good point and I actually do agree completely. The issue, I think, is that most programmers today will have a hard time telling you the difference between a procedure and a function when it comes to programming. And it’s totally fair: almost every mainstream programming language calls them both “function”.
So, Scheme is “functional” in that it’s made up of things-that-almost-everyone-calls-functions. But you’re right. Most languages are made of functions and procedures, and some also have objects.
But with that definition, I don’t think Clojure counts as functional either. It’s been a couple of years, but am I not allowed to write a “function” in Clojure that takes data as input and inside the function spawns an HTTP client and orders a pizza, while returning nothing?
It would appear that only Haskell is actually a functional language if we use the more proper definition of “function”
Hey, the type for main in Haskell is usually IO (), or “a placeholder inside the IO monad”; using the placeholder type there isn’t mandatory, but the IO monad is. Useful programs alter the state of the world, and so do things which can’t be represented in the type system or reasoned about using types. Haskell isn’t Metamath, after all. It’s general-purpose.
The advantage of Haskell isn’t that it’s all functions. It’s that functions are possible, and the language knows when you have written a function, and can take advantage of that knowledge. Functions are possible in Scheme and Python and C, but compilers for those languages fundamentally don’t know the difference between a function and a procedure, or a subroutine, if you’re old enough. (Optimizers for those languages might, but dancing with optimizers is harder to reason about.)
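A tiny illustration of that last point, with both definitions invented for this example: the first type promises a function in the strict sense, the second admits to being a procedure, and GHC can exploit the purity promised by the first (inlining, sharing, reordering) in ways it cannot for the second.

-- A function: the type says the result depends only on the argument.
double :: Int -> Int
double x = x * 2

-- A procedure: the IO in the type records that running it may touch the world.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)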
That article is about Common Lisp, not Scheme. Scheme was explicitly intended to be a computational representation of lambda calculus since day 1. It’s not purely functional, yes, but still functional.
If anything that underscores the point, because lambda calculus doesn’t have side effects, while Scheme does. The argument applies to Scheme just as much as Common Lisp AFAICT.
Scheme doesn’t do anything to control side effects in the way mentioned in the original article. So actually certain styles of code in OO languages are more functional than Scheme code, because they allow you to express the presence of state and I/O in type signatures, like you would in Haskell.
That’s probably the most concise statement of the point I’ve been making in the thread …
I take it we’re going by the definition of ‘purely functional programming’ then. In that case, I don’t understand why Clojure, a similarly impure language, gets a pass. Side-effects are plentiful in Clojure.
Well I said “at least closer to it”… I would have thought Haskell is very close to pure but it appears there is some argument about that too elsewhere in the thread.
But I think those are details that distract from the main point. The main point isn’t about a specific language. It’s more about how to reason about code, regardless of language. And my response was that you can reap those same benefits of reasoning in code written in “conventional” OO languages as well as in functional languages.
That’s fair. It’s not that I disagree with the approach (I’m a big fan of referential transparency!) but I feel like this is further muddying the (already increasingly divergent) terminology surrounding ‘functional programming’. Hence why I was especially confused by the OO remarks. It doesn’t help that the article itself also begs the question of static typing.
It depends on who you ask. :)
You may be interested in Van Roy’s famous organization of programming paradigms: https://www.info.ucl.ac.be/~pvr/VanRoyChapter.pdf. Original graphical summary: https://continuousdevelopment.files.wordpress.com/2010/02/paradigms.jpg, revised summary: https://upload.wikimedia.org/wikipedia/commons/f/f7/Programming_paradigms.svg.
(reposted from https://lobste.rs/s/aw4fem/unreasonable_effectiveness#c_ir0mnq)
I like your point about the amount of information in type signatures.
I agree that the type can’t contain everything interesting to know about the function.
I do think you can choose to put important information in the type. In Haskell it’s normal to produce a more limited effect system, maybe one for database effects only, and another for network effects only, and then connect those at the very top level.
So, you can put more in the type signature if you wish, and it can be directly useful to prevent mixing effects.
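A minimal sketch of that idea, with every name in it (MonadDB, MonadNetwork, App, latestUsers) invented for the example rather than taken from any particular library:

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.IO.Class (MonadIO, liftIO)

class Monad m => MonadDB m where
  runQuery :: String -> m [String]

class Monad m => MonadNetwork m where
  httpGet :: String -> m String

-- This signature promises database access only: no network, no arbitrary IO.
latestUsers :: MonadDB m => m [String]
latestUsers = runQuery "SELECT name FROM users ORDER BY created DESC"

-- Only the top level wires the capabilities to real IO.
newtype App a = App { runApp :: IO a }
  deriving (Functor, Applicative, Monad, MonadIO)

instance MonadDB App where
  runQuery q = liftIO (putStrLn ("query: " ++ q)) >> pure []

instance MonadNetwork App where
  httpGet url = liftIO (putStrLn ("GET " ++ url)) >> pure ""

main :: IO ()
main = runApp latestUsers >>= print

The point is that latestUsers can call runQuery but not httpGet or putStrLn; mixing effects becomes a type error rather than a code-review problem.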
Shoutout to @dmbaturin, the author of the series.
I’d like this rule to be expanded to writing e-mails. If the 80-column line limit doesn’t make sense in source code (I partly agree; an 80-column limit is small, but 120 seems appropriate), then why on earth do people think that e-mails should be hard wrapped to 80 characters? Nobody else uses hard wraps when communicating, only e-mail uses them.
I find a consistent column of text easier to read than sentences that span the entirety of my monitor.
That’s something that can easily be fixed in your email client by enforcing a maximum width on the reading pane, e.g. max-width: 50em in CSS, or in terminals by inserting newlines every 50 characters.
The problem with hard wrapping is that it will look bad for everyone with a smaller screen, which is actually pretty common, and that’s not something that can easily be fixed, since you can’t tell the difference between a “soft wrap” and an “intended hard wrap”. I actually wrote a thing about this a few weeks ago: https://www.arp242.net/email-wrapping.html
Unfortunately quite a few email clients don’t display this very well, but that’s an almost trivial fix. Certainly easier than telling millions of people that “you’re using email wrong”.
So set up soft wrapping in your email client then. You will then have what you like, and others will have what they like.
I don’t understand this either. I recently joined a mailing list where the predominant formatting is 80-character hard wraps. As a newcomer I emulated them (when in Rome etc.) but it would make far more sense if everybody wrapped their own lines to what is comfortable. It’s prose text after all.
When somebody complains about an unstyled HTML page being too wide on their monitor, others dutifully point them to reader mode. But email is different for some reason. (Why user agents don’t take more creative licence when given a bare HTML document baffles me but is verging off-topic…)
No, only some communities demand that their members use email this way. If not 72 columns… :S
But…. does it have staying power? If I become a Fortran programmer, will I be able to find work in 10 years time?
I kid.
I’m pretty sure Fortran is the oldest extant programming language (just a smidge older than Lisp, Algol or Cobol). And I hope that this helps make it more accessible to newer folks and helps the Fortran community(!?) take advantage of other improvements that other languages/paradigms have developed.
Honestly? Yes. Even today, it’s still in widespread use in the industry, particularly in numeric computing. Although I hope you’ll be content with maintaining legacy code.
Fun fact: A predecessor of Lisp was an extension of Fortran (called FLPL). The other predecessor was IPL. Similarly, a predecessor of Cobol was Comtran, again, a Fortran extension. In addition, Algol was explicitly designed with a goal of covering shortcomings of Fortran in mind. I think it’d be an understatement to say Fortran was extremely influential. :-)
That’s exactly what happened. Fortran (as a language) evolved a lot, constantly drawing new features from existing programming languages. Honestly, Fortran IV doesn’t even resemble Fortran 95. I’d say Lisp is the only other language that evolved this much but, unlike Fortran, it doesn’t have a single canonical standard.
Halfway through your comment I was intending to link this, but it’s yours! It’s a great article.
Thank you!
Very impressive. Does it put images literally everywhere except the one place I want them to go?
Admittedly, I never understood this behaviour until I wrote a thesis in LaTeX. Then the figure-floating-behaviour makes total sense.
Could you please elaborate more on this?
The float behaviour builds on LaTeX’s goal of producing a harmonious typesetting result. Not to go too deep: you can give one or more of the “intents” “h” (here), “t” (top of page), “b” (bottom of page), “p” (full page) to a figure, which gives LaTeX the freedom to place the figure at the given positions. Using the intent “H” or even “H!” makes sense in some cases, but really is a misuse of the floating environment.
The idea is that you often end up with multiple figures in one place. LaTeX really watches out that figures don’t float too far away, and even considers cases in a multi-page document where, e.g., your text is on page N and your two figures are on page N+1. It looks horrible in the editor, but if you look at the final printed version you realize the motivation. But even if you don’t print it: using this heuristic prevents too many figures from piling up in one place. What if you add more content at the front and everything shifts by half a page? Manually adjusting floats is almost impossible. With floats you just don’t have to worry about that.
I always use [htbp] for my figures for that reason.
Oh god thesis writing flashbacks…
Not GP, but having also written a thesis in LaTeX, it’s pretty sensible. You place your \begin{figure} where it’s relevant and the image will end up positioned, probably, at the top or bottom of the page where you put that directive. If you have a lot of figures it will set aside entire pages for figures. Alternatively you can override it with various placement specifiers (including “put it right here”).
This avoids the two main problems you get with word processors like MS Word or LibreOffice Writer:
If you attach your image to a text position, you can get strange outcomes like a page with one line of text, a figure, then the rest of the text below it.
If you attach your image to a page position, it can end up several pages away from where it’s actually referenced.
What do you mean? LaTeX puts images exactly where you put them, always. An image is just like a big character and it strictly follows the flow of your text. Unless you explicitly request a floating environment, LaTeX will never put an image anywhere other than where it appears in your source document.
I just watched the video tutorial and the UX seems amazing. The mouse and keyboard interactions mesh together dwimmerily. It’s a perfect fit for the gap between ImageMagick and GIMP in my image editing workflow.
I’m really glad someone finally took the steps necessary to revive DOSBox.
Honing my CV and scouring LinkedIn for jobs in mainland Europe. It’s a soul-sucking experience but, hey, what else can you do? Other than that, I’m planning to work on my guide to Scheme macros, and trying out JavaFX with Clojure.
Best of luck with the job hunting!
As you said, it entirely depends on the type of project. Some languages are more apt for specific tasks than others. But judging by the ‘commercial’ part, I think I’d go with Clojure or Elixir if I’m writing something meant to handle a heavy load. If it’s a smaller task, say, I need to hack together a development tool, I’d probably use Racket or just Scheme.
I’m curious, how would you describe to a non-Racket user how Racket’s good for making dev tools?
The standard library is fairly rich and extensive. 90% of the time, there’s a module that covers your use-case with a well-documented high-level API. You can just skip the boilerplate and use the DSL exposed to you, and even extend and customise it with Racket’s powerful metaprogramming tools if it doesn’t exactly fit your needs.
Speaking of lisp and chinese cartoons, SICP has been a meme on /g/ for years. GOOG has many more.
There are days I wonder how many people got their start in programming through that.
https://github.com/laynH/Anime-Girls-Holding-Programming-Books
I got my pdf of The C Programming Language from one of those images.
That meme gave birth to textboard.org, an anonymous bulletin board in MIT/GNU Scheme.
Don’t let your memes be dreams, gentooman.
In my defence, I haven’t been on /g/ (nor any other part of that website) since 2012 and I’ve been a Scheme hacker since only 2017.
Ironically, textboard is hosted on Gentoo Linux.
(I am just making this up)
I knew learning Japanese would come in handy one day.