Sep. 22nd, 2014

These notes are about Tuesday, September 2.

I caught the end of Robby Findler's invited talk on behavioral software contracts. That was enough to catch a point that I found thought-provoking: that contracts aren't a subset of types, because contracts can express protocol-based properties (similarly to how session types do), which fundamentally involve assignment. I'm still mulling it over, and I should probably just watch the whole talk, but it might be the answer to a question that has plagued me for years, which is: "are contracts just type signatures that you put in a comment?" (Not meaning to participate in a holy war here -- I assume the problem is my lack of understanding.)
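To make the "protocols involve state" point concrete for myself, here's a tiny sketch -- mine, in Haskell rather than Racket, and nothing like the actual contract systems from the talk -- of a check that depends on hidden mutable state: whether an "open" step has already happened. All the names (Protocol, openStep, useStep) are made up for the example.

    import Data.IORef

    -- A toy "protocol" check: the wrapper remembers, in a hidden IORef,
    -- whether the "open" step has run -- exactly the kind of stateful
    -- property that a plain type signature can't talk about.
    newtype Protocol = Protocol (IORef Bool)

    newProtocol :: IO Protocol
    newProtocol = do
      ref <- newIORef False
      return (Protocol ref)

    -- Run an action and mark the protocol as "opened".
    openStep :: Protocol -> IO a -> IO a
    openStep (Protocol ref) action = do
      result <- action
      writeIORef ref True
      return result

    -- Refuse to run the action unless openStep has already happened.
    useStep :: Protocol -> String -> IO a -> IO a
    useStep (Protocol ref) party action = do
      opened <- readIORef ref
      if opened
        then action
        else error (party ++ ": protocol violation (use before open)")

    main :: IO ()
    main = do
      p <- newProtocol
      -- Swapping the next two lines would trip the check at runtime.
      _ <- openStep p (putStrLn "open")
      useStep p "client" (putStrLn "use")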

If that's true, it reminds me of typestate in Rust, which I implemented for my intern project and which was later removed from the language. Or, maybe, Rust's typestate wasn't as powerful as contracts are, and that's why people didn't find it useful in practice. I do remember always being really confused about the interaction between typestate and assignment -- we went back and forth between thinking that typestate predicates should only be able to refer to immutable variables, and thinking that we'd take the YOLO approach and leave it as a proof obligation for the user that mutation can't cause unsoundness. So maybe if I had understood contracts at the time, the whole thing would have gone better. In any case, I'd like to read more so that I can articulate the difference between typestate and contracts.
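For what it's worth, here's the flavor of the typestate idea as I'd express it today -- phantom type parameters tracking a handle's state -- written in Haskell to match the other sketches in this post. To be clear, this is not how Rust's typestate feature worked (that was assertion-based), and every name here is invented; it's just "state in the type" in its simplest form.

    -- The handle's state lives in its type, so using a closed handle is a
    -- compile-time error rather than a runtime assertion.
    data Open
    data Closed

    newtype Handle state = Handle FilePath

    openFileT :: FilePath -> Handle Open
    openFileT path = Handle path

    readLineT :: Handle Open -> String
    readLineT (Handle path) = "pretend contents of " ++ path

    closeT :: Handle Open -> Handle Closed
    closeT (Handle path) = Handle path

    main :: IO ()
    main = do
      let h = openFileT "notes.txt"
      putStrLn (readLineT h)
      let _closed = closeT h
      -- putStrLn (readLineT _closed)   -- rejected: Handle Closed, not Handle Open
      return ()

Of course, nothing stops me from reusing the stale h after "closing" it, since there's no linearity here -- which is a cousin of exactly the mutation-and-aliasing trouble I remember from the Rust version.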

I caught slightly more of David Van Horn's talk on soft contract verification, though I missed part of that talk too. The idea here is to assign blame when an assertion fails at runtime: then you can give your code contracts strong enough that it's to blame as infrequently as possible when something goes wrong (if I understood correctly, anyway). ("Blame" is a technical term from the higher-order contracts literature; Findler and Felleisen introduced it, and Dimoulas, Findler, Flanagan, and Felleisen later worked out how to assign it correctly -- at least if I have the right references.) As in soft typing, "soft" here means that the contract checker never rejects a program -- it just introduces runtime checks when it can't convince itself of a particular safety property at compile time. This also recalls Rust typestate for me, which had a very similar approach of falling back to runtime verification (actually, in Rust, all typestate assertions were verified at runtime; we thought that would be a simpler approach, and if the feature had persisted, we might have implemented some sort of analysis pass to eliminate some of the dynamic checks). In my copious free time, I'd love to revisit Rust typestate and compare and contrast it with the work presented in these two talks, as well as with gradual typing and effect systems, maybe even as a paper or experience report. (Which, of course, would involve me learning about all of those things.) I want to say that Rust typestate did have a notion analogous to blame: it was all about forcing each function to declare its precondition, so that if that precondition was violated at runtime, we knew it was the caller's fault, not the callee's. But I'd like to read the paper to see how much of a role inference plays.
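To illustrate blame to myself (my own toy, first-order version -- nothing like the real higher-order contract machinery, let alone the verification from the talk), here's a wrapper that names a different party depending on whether the precondition or the postcondition fails:

    -- A contract wrapper that blames the caller for bad inputs and the
    -- callee for bad outputs.
    data Blame = Blame { party :: String, reason :: String }
      deriving Show

    contract
      :: String          -- name of the caller (blamed for bad inputs)
      -> String          -- name of the callee (blamed for bad outputs)
      -> (a -> Bool)     -- precondition
      -> (b -> Bool)     -- postcondition
      -> (a -> b)        -- the function being monitored
      -> a
      -> Either Blame b
    contract caller callee pre post f x
      | not (pre x)       = Left (Blame caller "precondition violated")
      | not (post result) = Left (Blame callee "postcondition violated")
      | otherwise         = Right result
      where result = f x

    -- A square root that promises a non-negative result, provided the
    -- caller supplies a non-negative argument.
    safeSqrt :: Double -> Either Blame Double
    safeSqrt = contract "caller" "sqrt" (>= 0) (>= 0) sqrt

    main :: IO ()
    main = do
      print (safeSqrt 4.0)     -- Right 2.0
      print (safeSqrt (-1.0))  -- Left (Blame {party = "caller", ...})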

As a much more trivial aside, I really liked that Van Horn used ⚖ as an operator, at least in the slides (as in, C ⚖ M). There should be more Unicode operators in papers! It's 2014; we don't need to limit ourselves to what was built into a 1990s-era version of LaTeX anymore.

In any case, the parts of Van Horn's and Findler's talks I heard made me think "this is the right way to do what we were trying to do with typestate". I want to be sure I believe that, though. I say this because their approach to handling mutation is to statically check any contracts that don't involve assignment -- other contracts revert to runtime checks, but the checks always happen, either statically or dynamically. My memory is hazy, but in the context of Rust, I think we talked about introducing additional precondition checks at each use of a variable involved in a typestate predicate, but quickly decided that would be inefficient. In any case, those two talks made me want to revisit that work, for the first time in a while!

I missed most of Norman Ramsey's talk "On Teaching How to Design Programs" as well, but the paper seems worth reading too. Two things I did catch: Norman saying "Purity has all sorts of wonderful effects" (I think in terms of helping students discipline their thinking and avoid just banging on the keyboard until something works, though I don't recall the context), and him making the point that the HTDP approach makes it easier to grade assignments based on how systematic the student's design is, rather than on a laundry list of point deductions.

Next, I went to Richard Eisenberg's talk "Safe Zero-Cost Coercions for Haskell". I have painful memories of this line of work dating back to 2007 and 2008, when I was reviving the GHC External Core front-end and had to figure out how to adapt External Core to represent the new System FC type system features, which (to me at the time) seemed to make the Core type system twice as complicated for unclear benefit. (External Core was removed from GHC some years later, alas.) I'm willing to say at least provisionally, though, that the work Eisenberg presented cleans up the coercion story quite a bit. I also appreciated the motivation he gave for introducing coercions into the type system at all, which I hadn't heard formulated quite like this before: you can typecheck System F just using algebraic reasoning, but when you want to introduce coercions (which you do because of GADTs and newtypes), contravariance ruins everything. I think a way to summarize the problem is that you get overlapping instances, only with type families rather than just type classes.
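(To remind myself why coercions show up at all, here's the standard sort of GADT example -- mine, not from the talk: matching on the constructors refines the type variable a, and that local equality evidence is what System FC's coercions record in Core.)

    {-# LANGUAGE GADTs #-}

    -- Matching on IntLit refines `a` to Int, on BoolLit to Bool; Core
    -- carries that refinement around as a coercion.
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    eval :: Expr a -> a
    eval (IntLit n)  = n                       -- here a ~ Int
    eval (BoolLit b) = b                       -- here a ~ Bool
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e

    main :: IO ()
    main = print (eval (If (BoolLit True) (Add (IntLit 1) (IntLit 2)) (IntLit 0)))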

To solve the problem, Eisenberg and colleagues introduce two different equality relations: nominal (~) and structural (~~). This allows the type system to incorporate coercions based on both nominal and structural type equality, without having to pick just one. Then, each type parameter gets a "role", which can be either "structural" or "nominal". This allows coercion kinds (my nemesis from the External Core days) to just go away -- although to me, it seems like rather than actually taking coercions out of the kind system, this approach just introduces a second kind system that's orthogonal to the traditional one (albeit a very simple kind system). I guess it's possible that separating out concerns into two different kind systems makes the implementation and/or reasoning simpler; also, as far as I can tell, roles are more restricted than kinds in that there's no role polymorphism. (I'm not sure if there's kind polymorphism, either, although there certainly was in GHC at least at some point.) Actually, there are three roles: "nominal" (meaning "this parameter's name matters and it is not interchangeable with structurally equal types"), "representational" (for a parameter that is interchangeable with any type structurally equal to it), and "phantom" (for type parameters that are unused on the right-hand side of a definition). I wrote in my notes "I wonder if this sheds any light on Rust traits", but right now I'm not going to elaborate on that query!
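Here's a little example I wrote afterwards to check my own understanding of how roles surface in source Haskell through Data.Coerce -- the Opaque type is made up to stand in for the nominal case:

    {-# LANGUAGE RoleAnnotations #-}

    import Data.Coerce (coerce)

    -- Age has the same runtime representation as Int, so coerce converts
    -- between them (and between lists of them) for free.
    newtype Age = Age Int
      deriving Show

    ages :: [Age]
    ages = coerce [1 :: Int, 2, 3]

    -- A phantom parameter: tag never appears on the right-hand side, so
    -- it can be swapped for any other type.
    newtype Tagged tag a = Tagged a

    retag :: Tagged old a -> Tagged new a
    retag = coerce

    -- Forcing a parameter to the nominal role (as Data.Map does for its
    -- keys) makes GHC reject coercions under it.
    newtype Opaque a = Opaque a
    type role Opaque nominal

    -- bad :: Opaque Int -> Opaque Age
    -- bad = coerce   -- rejected: Opaque's parameter is nominal

    main :: IO ()
    main = print ages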

One practical implication of the work is that generalized newtype deriving now has a safe implementation: the type system makes it possible to allow unwrapping only when the newtype constructor is in scope. (This, too, reminds me of a Rust bug that persisted for a while having to do with "newtype dereferencing".) As for results, the new role system uncovered three legitimate bugs in libraries on Hackage, so that's pretty cool. Also, Phil Wadler asked a question at the end that began with something like, "...here's how Miranda did it..." (Not something one hears a lot!)
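And here's the kind of generalized newtype deriving that the role system is meant to keep honest -- again just a minimal sketch of the feature, not an example from the talk:

    {-# LANGUAGE GeneralizedNewtypeDeriving #-}

    -- Num isn't stock-derivable, so GHC reuses Int's instance for Age by
    -- coercion; the role system checks that this reuse is representation-safe
    -- instead of taking it on faith.
    newtype Age = Age Int
      deriving (Eq, Ord, Num)

    birthday :: Age -> Age
    birthday a = a + 1   -- Int's (+), coerced to work on Age

    main :: IO ()
    main = let Age n = birthday (Age 29) in print n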

Finally, I stayed for François Pottier's talk "Hindley-Milner Elaboration in Applicative Style", which I understood more than I expected to! He began by saying something that I noticed long ago, but originally chalked up to my own stupidity: Algorithm W, in its original presentation, was "imperative, and messy". We want a simpler, more declarative formulation of type inference. Pottier claims that conjunctions and equations are simpler than compositions and substitutions -- I agree, but I'm not sure whether that's based on something objective or just on what works well for my brain. He defines a constraint language that looks like λ-calculus with existential types, which allows constraint solving to be specified in terms of rewriting. On paper, it's a declarative specification, but the implementation of it is still imperative (for performance reasons). It sounds like it might be fun to prove that the imperative implementation satisfies the declarative specification, though perhaps he is already doing that!
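To convince myself I followed the "conjunctions and equations" point, I sketched a toy constraint-generation pass afterwards -- my own simplification in Haskell, not Pottier's constraint language, which also handles let-generalization and produces an elaborated term:

    -- Walking the term produces one big constraint; a separate solver
    -- (not shown) would discharge it by rewriting.
    data Term = Var String | Lam String Term | App Term Term

    data Ty = TyVar Int | Arrow Ty Ty
      deriving Show

    data Constraint
      = Ty :=: Ty                   -- an equation between types
      | Constraint :/\: Constraint  -- a conjunction
      | Exists Int Constraint       -- "for some fresh type variable ?n ..."
      deriving Show

    type Env = [(String, Ty)]

    -- generate env term ty n = (constraint, next fresh variable index);
    -- the constraint holds exactly when term has type ty in env.
    generate :: Env -> Term -> Ty -> Int -> (Constraint, Int)
    generate env (Var x) ty n =
      case lookup x env of
        Just ty' -> (ty :=: ty', n)
        Nothing  -> error ("unbound variable: " ++ x)
    generate env (Lam x body) ty n =
      let a = n
          b = n + 1
          (c, n') = generate ((x, TyVar a) : env) body (TyVar b) (n + 2)
      in (Exists a (Exists b ((ty :=: Arrow (TyVar a) (TyVar b)) :/\: c)), n')
    generate env (App f arg) ty n =
      let a = n
          (cf, n')  = generate env f (Arrow (TyVar a) ty) (n + 1)
          (ca, n'') = generate env arg (TyVar a) n'
      in (Exists a (cf :/\: ca), n'')

    main :: IO ()
    main =
      let identity = Lam "x" (Var "x")
      in print (fst (generate [] identity (TyVar 0) 1))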

Stay tuned for my notes on day 3, when I get around to editing them.
