Constraint based arrow notation #303
base: master
Conversation
FWIW, when we tried to use the arrow notation commercially, we ended up using a separate desugarer - the GHC version just seemed too broken. @pepeiborra did the work and wrote our replacement, so can comment as to what was broken.

The desugaring post-7.8 was accumulating bindings in a giant tuple and not releasing them after they went out of scope, causing space leaks. I simply repackaged the original algorithm by Ross Paterson into a quasiquoter. @lexi-lambda seems to be fixing it in a much better way here, which is awesome.

IIUC, the main goal of …

I think it might be beneficial to show some user code demonstrating why this desugaring is beneficial.

@ocharles I’ve pushed a change that includes two examples of real-world code helped by this proposal, and it works through the proposed typechecking process for one of them in gory detail. Let me know if that helps.
I can't even get the example to type check using the type for `handle`:

{-# LANGUAGE Arrows #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
import Control.Arrow
import Control.Category
newtype ReaderA r arr a b = ReaderA { runReaderA :: arr (a, r) b }
lift :: Arrow arr => arr a b -> ReaderA r arr a b
lift a = ReaderA (arr fst >>> a)
class Arrow arr => ArrowError e arr | arr -> e where
throw :: arr e a
handle :: arr (a, ()) b -> arr (a, (e, ())) b -> arr (a, ()) b
instance Category arr => Category (ReaderA r arr) where
instance Arrow arr => Arrow (ReaderA r arr) where
instance ArrowError e arr => ArrowError e (ReaderA r arr) where
throw = lift throw
handle (ReaderA f) (ReaderA g) = ReaderA $ proc (a, r) ->
  (f -< (a, r)) `handle` \e -> g -< ((a, e), r)
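The `Category` and `Arrow` instance bodies are missing from the comment as rendered here. For completeness, here is one standard way to fill them in (an assumption on my part, since the original bodies were not preserved; note that composition needs the full `Arrow arr` constraint rather than just `Category arr`, because it has to duplicate the environment):

```haskell
import Prelude hiding (id, (.))
import Control.Arrow
import Control.Category

newtype ReaderA r arr a b = ReaderA { runReaderA :: arr (a, r) b }

lift :: Arrow arr => arr a b -> ReaderA r arr a b
lift a = ReaderA (arr fst >>> a)

-- Strengthened from 'Category arr': composition must duplicate the
-- environment, which needs '&&&' from Arrow.
instance Arrow arr => Category (ReaderA r arr) where
  -- 'id' just discards the environment
  id = ReaderA (arr fst)
  -- composition threads the environment into both arrows
  ReaderA g . ReaderA f = ReaderA ((f &&& arr snd) >>> g)

instance Arrow arr => Arrow (ReaderA r arr) where
  arr f = ReaderA (arr (f . fst))
  -- 'first' shuffles the untouched component past the environment
  first (ReaderA f) =
    ReaderA (arr (\((a, c), r) -> ((a, r), c)) >>> first f)

main :: IO ()
main = print (runReaderA (lift (arr (+ 1))) (2 :: Int, "env"))
```

Running `main` applies the lifted arrow `(+ 1)` to the input `2`, ignoring the environment.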
Your implementation of `handle` needs to follow the post-7.8 calling convention:

instance ArrowError e arr => ArrowError e (ReaderA r arr) where
  throw = lift throw
  handle (ReaderA f) (ReaderA g) = ReaderA $ proc ((a, ()), r) ->
    (f -< ((a, ()), r)) `handle` \e -> g -< ((a, (e, ())), r)

Those …
Aha, thanks. I got the impression that only the type of the control operators needed to change after 7.8, but the usage of control operators needed to change too. I think it's very important to make that clear in the proposal. Something like:

-- GHC 7.6
handle :: ArrowError e arr => arr a b -> arr (a, e) b -> arr a b
-- GHC 7.8
handle :: ArrowError e arr => arr (a, ()) b -> arr (a, (e, ())) b -> arr (a, ()) b

and correspondingly:

-- GHC 7.6
(f -< (a, r)) `handle` \e -> g -< ((a, e), r)
-- GHC 7.8
(f -< ((a, ()), r)) `handle` \e -> g -< ((a, (e, ())), r)
A good point—I’ll try to work that in more explicitly. It might be worth saying, however, that the issue isn’t quite as bad as that instance might make it appear. The changes at the term level only show up when implementing control operators. When using control operators (and obediently remaining entirely within the notation), nothing needs to change. Had the example been written

proc (foo, bar) -> (launchTheMissiles -< foo) `handle` \e -> recover -< (e, bar)

then it would not have had to change from GHC 7.6 to 7.8. In practice, I find I end up writing a lot of control operators myself—many of which simply wrap existing control operators like …
I see. That makes it clearer why you left it out of the original proposal. It also makes me more confused about how this desugaring works at all. I haven't read it through in detail. Now I'm guessing there must be two cases: one where a desugaring of applied arguments to …

I'm confused about the …
Another way of reasoning about what I said in my previous comment is to consider the relationship between arrows, functions on arrows, and commands.

Commands are the building block of arrow notation. Atomic commands are created by “lifting” an arrow into a command—in effect, this is what -< does. This hints at a missing feature of … In any case, the …

When living completely in the DSL, as in the example I gave in my previous comment, the calling convention isn’t exposed because you are working exclusively with commands. Even though …

When you want to implement a new function on commands, however, you fundamentally must think about the calling convention, since you are straddling the boundary between arrows and commands. This hints at a second missing feature of …

This “arrow/command” duality is an extraordinarily leaky abstraction, and it makes … With ordinary functions, one can freely rewrite

\foo bar -> launchTheMissiles foo `catchError` \e -> recover e bar

into

\foo bar -> do
  let handler e = recover e bar
  launchTheMissiles foo `catchError` handler

or even

\foo bar -> launchTheMissiles foo `catchError` flip recover bar

There is no equivalent capability with commands. It would be wonderful to fix that problem, but that would be a much, much larger proposal, and it might amount to a new …
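For a concrete illustration of staying “completely in the DSL”: a control operator written against the post-7.8 calling convention (the `(e, s)` environment/stack pairing used in the GHC user's guide) can be invoked through banana brackets without the proc block ever mentioning that plumbing. A minimal sketch; `doubled` and `calc` are hypothetical names, not from the thread:

```haskell
{-# LANGUAGE Arrows #-}

module Main where

import Control.Arrow

-- A trivial control operator: run the command, then double its result.
-- Its type follows the post-7.8 convention: the input pairs an
-- environment 'e' with an argument stack 's', neither of which this
-- operator inspects.
doubled :: Arrow a => a (e, s) Int -> a (e, s) Int
doubled f = f >>> arr (* 2)

-- The proc block never mentions the (e, s) plumbing.
calc :: Int -> Int
calc = proc x -> (| doubled (returnA -< x + 1) |)

main :: IO ()
main = print (calc 4)
```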
To be honest, I am confused about this, too! In the original (pre-7.8) implementation, and in my proposal, applying an arrow to an argument stack of length 1 simply Does The Right Thing due to the way the calling convention is defined. In the original implementation, the argument stack takes the shape …

In the current implementation, however, this property clearly does not hold. The argument stack has the shape …
That is a mistake in the proposal; good catch. I previously used a different type for …
Do you mean …

proc x -> (f -< x) `handle` \e -> g -< e

That …
Oh I see.

Yes, that’s right. Type and Translation Rules for Arrow Notation in GHC is a better reference for the pre-7.8 syntax as implemented in GHC than the original paper.

I think you and I (and any other interested parties) should find a place to have a discussion about addressing the points in your longer comment, for example, by trying to turn commands into real things by making ⇀ a real thing (perhaps a type family).

Sure, I agree that would be valuable! I also agree that this thread is probably not the place to have it. Do you have any suggestions for a venue? Perhaps a haskell-cafe thread? Or is it too deep in the weeds for that?

Maybe we can just start an …

Works for me. Feel free to create one and link me to it, and we can continue the conversation there.

Anyone interested in discussing next-generation Arrow notation in general can join in at https://github.com/tomjaguarpaw/Arrows2 and we can leave this PR discussion for the particular proposal at hand.
As it happens, I’ve figured out the answer to this conundrum: the current implementation just ignores the argument stack entirely in the typing rules for -<.
Expanding on my previous comment's edit: you already address this a bit with …

I totally agree with that if we must use one …

No breakage or implicit …

Here, the stacks are concatenated, so one can do … As a final bonus, circling back to my previous comment, this also helps the eta-contraction problem @lexi-lambda was talking about above, with …

The only caveat is the parameter type of …
@lexi-lambda #303 (comment) is a very good explanation, and deserves not to be forgotten in a GitHub thread. It would absolutely be worth making it a blog post or an appendix to this proposal. Somewhere where it can be found in the future.
Alexis, I confess that I have been a laggard on this proposal. My apologies. Arrows make my head spin. But let me say that I am very sympathetic to your goals of a simpler and more uniform treatment of arrows. The entire arrow-notation part of GHC has received very little well-informed love, and I for one am delighted that you are minded to rectify this lack. Thank you! I really appreciated your tutorial above. But I stumbled many times because there are no typing rules. Yet you have them, attached to your proposal (though I missed that at first). Moreover, as Arnaud says, your tutorial is too valuable to be left in a comment thread. Would you consider: …
I think you have most or all of this already, so I don't think I'm asking you to do new work. Please say if I am wrong about this. But, coming uninformed to this, I found it hard to be sure what changes are being proposed, in the context of the full system. If it would help, I've managed to dig out the source LaTeX for the Type and translation rules for arrow notation in GHC (2004) document. Regardless, before we are finished with landing all this, it would be great to have an updated version of that (very sketchy and incomplete) document, as a permanent guide to this part of GHC. Returning to the payload, …
Side note: I always stumble on the difference between …

In conclusion: I'm sure we should accept this proposal in some form. I'm just trying to be sure it can't be simpler.
In addition to the comments by Simon: during the committee discussion, the "wired-in-ness" of the type families was discussed, and we seem to lean towards not discussing that issue in the proposal, but leaving it as an implementation detail. @lexi-lambda, I am going to put the proposal back as "in review" until you feel comfortable submitting it again.
Thanks for pinging me about this—I had actually forgotten that I had not yet responded to Simon’s comments! I am unfortunately likely going to be pretty tied up for the next week or so, but I will get on top of this after that. |
No rush, Alexis. Arrows are tricky, and a bit unfamiliar to many (incl me), so there is more "tutorial" to do. But I don't think that is wasted effort -- quite the reverse, as I say above I hope we can capture some of the core ideas in a more durable form. |
Yes! I intend to significantly rewrite the section of the User’s Guide to incorporate some of that information, and I’d be happy to create a wiki page with some of the less user-facing details. Some of that information is also included in Notes in the current implementation PR.
I agree! I think it would be helpful, and I am hoping to expand upon the current Ott model with some explanatory prose that adds some much-needed context to the rules themselves.
Yes, that is a good point. I have added an entirely new section to the proposal that discusses the rationale for the GHC 7.8 change, just below the Motivation section. It is subtle, so feel free to ask for clarification, but I am hopeful that will help somewhat with your confusion.
There are several questions here, and I am not sure I fully understand all of them. You ask “does pairing suffice?” but I am unsure what you have in mind by “pairing,” as … Perhaps you mean “could it use nested pairs instead of flat tuples,” and the answer is “sure.” But that would just involve a different definition of …

Yes, it is quite subtle. However, the proposal actually already does provide an example, though it may not be easy to find. Take a look at the subsection entitled “A worked example using …”

Does that help?
The answer is that …
After grasping at straws for a bit, I thought I might have found a solution with just one type family, injective via …
I agree that it's a Bad Thing to change the user experience for the …

However, I am not yet convinced that the way to solve the type inference …

For example, in the "command typing" rule the judgement has the form …

Now the …

Now we don't need to form tuples or anything. In the code of GHC today I see …

But we could instead have …

where that list is the sigma_1 .. sigma_n above. It may not be easy to work all this out in a written dialogue -- I need …
The crux of the problem is that, when control operators come into play, GHC can’t immediately determine how many values are on the stack. This is why your proposed solution isn’t enough. You say …

but now we’ve lifted the list out of the language (i.e. Haskell’s type language) and into the metalanguage (i.e. the GHC typechecker). This is trouble, because we may need to run the constraint solver to figure out how many values are on the stack. If you want a concrete example of a pathological scenario, consider code like this:

foo :: Int -> String
foo = proc x -> (| id (show -< x) |)

Here, the control operator used is id. … Compare with:

foo :: Int -> String
foo = proc x -> (| id (\y -> show -< y) |) x

which does the same thing, but it passes …
If GHC’s …

This proposal hinges on pushing the list from the metalanguage into the language by using a type-level list. Then we can just emit some equalities for the constraint solver to deal with later:

[W] t1 ~ ArrowEnvTup e1 s1
[W] t1 ~ ArrowEnvTup e2 s2
[W] s1 ~ '[]
[W] t3 ~ ArrowEnv Int
[W] (t1 -> t4) ~ (t3 -> String)

The constraint solver can use its entire bag of tricks to find a solution to these constraints. Sometimes that means propagating information top-down, sometimes it means propagating it bottom-up. This is where the injectivity of … comes in.

Any simpler solution must compromise on this in some way. Maybe we don’t want to accept things like … The solution in this proposal is the simplest design I can come up with that is both clearly sufficiently general and does not break type inference.
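To see concretely why injectivity lets information flow “backwards” through such equalities, here is a self-contained toy (the names `Code`/`decode` are illustrative, not from the proposal): from the wanted constraint `Code a ~ Char` alone, the solver concludes `a ~ Int`, because the family is declared injective in its result.

```haskell
{-# LANGUAGE TypeFamilyDependencies #-}

module Main where

-- An injective closed type family: the result determines the argument.
type family Code a = r | r -> a where
  Code Int  = Char
  Code Bool = ()

class Decode a where
  decode :: Code a -> a

instance Decode Int  where decode _ = 42
instance Decode Bool where decode _ = True

main :: IO ()
-- No type annotation is needed here: solving Code a ~ Char via
-- injectivity pins a to Int, selecting the Decode Int instance.
main = print (decode 'x')
```

Without the `| r -> a` annotation, the call would be rejected as ambiguous, which is exactly the failure mode described above for non-injective encodings.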
Fair enough. But we should make a sharp distinction between …

The inference engine has lots of to-and-fro information flow, true enough. But that typically does not show through in the specification. To take an example, the declarative type system for ML often guesses monotypes out of thin air, such as …

Here tau comes from nowhere. That's the spec. The inference engine uses a unification variable, and does lots of unification, to figure out what tau should be. But no details about unification variables or unification show up in the specification. The bit I'm not convinced about, then, is whether ArrowTup and friends need to be in the specification. Implementation/inference engine perhaps, but do they have to be part of the spec? For example, maybe rule CMD_APPF could look something like …
That is, after type inference is complete we need to know n, and s1..sn. Then there's a separate question about how to implement type inference, which will involve constraints and, quite possibly, type families. A call might help? @rae may have views here.
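The kind of declarative rule being contrasted with the inference engine can be sketched as follows (the standard Damas–Milner instantiation rule; the formula itself did not survive in this thread):

```latex
\frac{x : \forall \overline{\alpha}.\, \tau' \;\in\; \Gamma}
     {\Gamma \vdash x : \tau'[\overline{\alpha} \mapsto \overline{\tau}]}
\;\textsc{Inst}
```

The instantiating types \overline{\tau} appear in no premise: the declarative system conjectures them out of thin air, while the inference engine realises them with unification variables behind the scenes.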
I want to make clear that I completely agree with this. My justification is that the typing rules in this proposal are perfectly declarative, because Sᴛᴋ and Eɴᴠ are metafunctions. They aren’t type families. Now, having said that, arguably there are two ways I let implementation details leak into the rules beyond what I should have: …
So I think your point is a good one, and perhaps I should make those changes. But I think the more algorithmic rules have value, too, if only as an explanatory tool for the implementation.
Here’s my point of view: Paterson’s old rules had the Sᴛᴋ and Eɴᴠ metafunctions, too, just with a different syntax. His syntax looked like this: …

It has already been discussed upthread how this syntax is confusing. So I decided to make my metafunctions more explicit. On the other hand, perhaps the flat-tuple representation eliminates the potential for confusion? Maybe I am now in fact justified in using Paterson’s notation!

Indeed. Yet I think it would be a bad idea for this proposal to not mention any algorithmic details, as the reason arrow notation doesn’t already work this way today is primarily due to implementation obstacles, not the lack of a worked system of declarative rules. So I think focusing entirely on the declarative system would rather miss the point.
…actually, let me comment on this point a little bit more. There is something you are alluding to here, namely the idea that we could solve this problem without any type families. And you are, of course, quite right: we could introduce an entirely new type of constraint within GHC specifically to support arrow notation. Then we could keep these special “arrow notation constraints” entirely out of the user’s face, which would most definitely be an improved experience. But I did not list that as an alternative in this proposal because I did not think such a thing had any hope of being accepted. I figured it would be far too much additional logic too deeply wired into the GHC typechecker to support a feature that is already somewhat maligned. The type families approach seemed like a compromise: a way to encode things into the type system without such a large maintenance burden. But if you disagree, and you think such an idea really ought to be considered, by all means, say so! |
OK, this is good. I had totally conflated your meta-functions and the new type functions. This is confusing territory, not easy to express clearly. Suggestions: …

Now \overline{\tau} is a metavariable (just like \tau itself) which has two productions as above. You may want a way to turn such a list of types into a tuple, by including it in the syntax of types, something like …

where I'm using the round brackets to turn a list of types t1, t2, ... tn, into a tuple type (t1, t2, ..., tn). I think this might be a quieter notation, and less misleading (in the way I was misled), and it would do what you want. It's important to be clear about the difference between a sequence of two types t1,t2 and a one-element sequence whose only element happens to be a pair thus (t1,t2). This is a potent source of confusion. Something like this will really help not only the proposal, but our permanent description of what we have. Returning to type inference, I'm not against using type families. But if we know they are an implementation artefact, that will help us when designing error messages etc -- and it means we can change the details at will because it's not part of the language spec.
@simonpj asked me to pipe up here, but I'm not really sure what to say to really move this conversation forward: …

My bottom line is that I'm quite happy with this proposal and think we should try to move forward with it.

I agree. Also, my proposal (in #303 (comment)) for avoiding the Env-Stack distinction goes further in exposing the (now single) type family to the user so as not to lose expressive power w.r.t. the proposal as written.
@simonpj and I had a chat about this proposal to resolve our differences (he wants more clarification; I'm not feeling that as strongly). We agreed that a good way to move forward was simply to add a little bit of tweaking to the ott spec (rendered at https://github.com/lexi-lambda/ghc-proposals/blob/constraint-based-arrow-notation/proposals/0000-typechecking-rules.pdf). I think the changes would be quite modest. @lexi-lambda, would you want me to suggest some concrete changes, or just submit a PR against your repo?
Just to clarify further what I meant in [my comment above](https://github.com/ghc-proposals/ghc-proposals/pull/303#issuecomment-652636138):
To be clear, I'm fully in support of the goal, and I think you've done a fantastic job of explaining stuff. I'm sure we should accept this proposal. I'd just like it to be clear, to everyone, just what we are accepting.
Edit: I had hoped GH would put this review comment below my inline comments. Please read those before this one.
The bit I'm not convinced about, then, is whether ArrowTup and friends need to be in the specification. Implementation/inference engine perhaps, but do they have to be part of the spec?
Having only had a cursory look over the comments after having read the typing rules, I wonder the same thing as Simon above. I see no material change to the typing rules at all! It's basically just two opaque uninterpreted functions slammed in-between, which seem to give a representation to a list of types in different ways. Ah yes, you acknowledge that:
I want to make clear that I completely agree with this. My justification is the typing rules in this proposal are perfectly declarative because Sᴛᴋ and Eɴᴠ are metafunctions. They aren’t type families.
So, to be clear: If I understand correctly, we could just stick to Patterson's rules and substitute the confusing overloaded use of (tau,taus)
(which denotes a cons in the meta-language overbar notation, not an object-language pair type, as I understand it. Reading #303 (comment), it seems that is not entirely correct), maybe to something like [tau taus]
and we wouldn't need to touch the declarative spec at all, right?
We could then say "well, but we want to represent these Stk [taus]
like this and this Env [taus]
like this", for obvious efficiency reasons. Then all that is left to do is either
- Use wired-in type families to represent the constraints that we need during inference, leaning on a lot of existing infrastructure for coping with TyFams, or
- Come up with new constraints that need special handling in the inference engine
I have no idea which of the two is more feasible. (2) might seem daunting, but maybe it wins when we start trying to give good error messages?
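To make the “represent `Stk [taus]` like this and `Env [taus]` like this” step concrete, here is one way the two representations could be written as closed type families over a type-level list. This is an illustrative sketch only: the proposal's actual definitions may differ, and `Env` is shown up to arity 3 (a flat-tuple family cannot be defined by simple recursion):

```haskell
{-# LANGUAGE DataKinds, TypeFamilies, TypeOperators #-}

module Main where

import Data.Kind (Type)

-- Stack: unit-terminated nested pairs, as in the post-7.8 operator types.
type family Stk (ts :: [Type]) :: Type where
  Stk '[]       = ()
  Stk (t ': ts) = (t, Stk ts)

-- Environment: a flat tuple, enumerated per arity.
type family Env (ts :: [Type]) :: Type where
  Env '[]           = ()
  Env '[t]          = t
  Env '[t1, t2]     = (t1, t2)
  Env '[t1, t2, t3] = (t1, t2, t3)

-- This only type-checks if both families compute the intended shapes.
example :: (Stk '[Int, Bool], Env '[Char, Int])
example = ((1, (True, ())), ('x', 2))

main :: IO ()
main = print example
```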
metavar x, y ::= {{ com variables }}
indexvar i, n ::= {{ com indicies }}
Suggested change:
- indexvar i, n ::= {{ com indicies }}
+ indexvar i, n ::= {{ com indices }}
G {{ tex \Gamma }}, D {{ tex \Delta }} :: 'ctxt_' ::= {{ com contexts }}
  | empty :: :: Empty
  | G1 , ... , Gn :: :: Concat {{ com concatenation }}
This definition lacks a Cons operator and thus will always be empty.

(Later) Ah, you are treating `G` as a mostly abstract (open union) type with only 2 guaranteed constructors. You seem to introduce new type bindings by doing `pat : t => D`. Fair enough, but not immediately obvious.

Could you maybe indicate where you expect there to be more constructors with an \ldots? For example in `pat` or `e`.

At first I thought you were leaving the first line (after `::=`) blank to indicate an open union, but `cmd` and `alts` could definitely be closed, couldn't they?! On the other hand you aren't trying to do an exhaustive match anywhere, so...
defns
TcDs :: '' ::=

defn pat : t => D :: :: TcPat :: 'Pat_' {{ com pattern typing }} by
Shouldn't this judgment form include `G`? E.g., `t1` as well as `pat` (through view patterns) might reference stuff from `G`.

I know you just copied it from the 2004 paper and it hardly matters for the core of your proposal, but we might as well fix it...
G |- e1 : a (Stk[t1 ': s]) t2
G,D |- e2 : t1
----------------------------- :: AppF
G|D |-a e1 -< e2 : s --> t2
I'm not so deep into the implementation of our type inference engine, so pardon me if what I say below is nonsense.

You say that `Stk` is indeed not injective. How do you maintain principal types here? Given a sufficiently polymorphic definition for `e2` (e.g. `e2 :: alpha`, where alpha is flexible), both of these derivations could work IMO:
G |- e1 : a (Int, Int) t2
G,D |- e2 : Int
----------------------------- :: AppF
G|D |-a e1 -< e2 : '[Int] --> t2
and
G |- e1 : a (Int, Int) t2
G,D |- e2 : (Int, Int)
----------------------------- :: AppF
G|D |-a e1 -< e2 : '[] --> t2
And neither type in the conclusion is more general than the other.
I now found a similar example in your proposal: https://github.com/lexi-lambda/ghc-proposals/blob/constraint-based-arrow-notation/proposals/0000-constraint-based-arrow-notation.md#error-reporting. I think the proposal should perhaps be clearer about these kinds of consequences.
@sgraf812 Let me preface this by saying that everything in the Ott model is currently exclusively about generating something that is (hopefully) useful to people once it is typeset. Some of the details you note—such as underspecifying the structure of contexts and having a somewhat naïve view of pattern typing—are really just because the model is intended to show all the details of arrow notation, not the rest of GHC’s modern flavor of Haskell. And in the context of Haskell 2010, …

Regardless, your comments suggest that the current approach was not successful in distilling those details and effectively communicating them, so something does still need to change. It’s been ages since I actually looked at that Ott model at all, but I’ll find some time to go through it again and see if I can incorporate your suggestions.
Yes, I think this is currently the main problem with the proposal as-written, and I think Simon’s criticism (and by extension yours) is on the money. The proposal is currently insufficiently clear about a few things: …
Why did I screw this up so badly in my first draft of the proposal? Because when I wrote it, it wasn’t actually clear to me that everyone would view the change in behavior as a regression rather than as a change in specification that I would have to argue against, so I presented it primarily through that point of view. However, the discussion revealed that (a) almost nobody knew this feature existed in the first place, and (b) even fewer realized any change had ever occurred. So, with that added context, I think the main thing that needs revision here is the framing of the proposal. It would be better to just assert that the current behavior is broken and make explicit that the proposal is about an implementation technique to fix it. That should, I hope, clear up the confusion.
Yes, absolutely correct! And to be honest, I have thought about option 2 myself quite a few times since I originally wrote this proposal, especially after I attempted to implement everything in GHC. However, I did not list it as a possible alternative in the proposal for two reasons:
Unfortunately, when I went to write the implementation, I discovered that getting the sort of custom error reporting I wanted for wired-in type families is nearly impossible given the way they are currently handled in the constraint solver. So, having learned that, I think it would be valuable to revise the proposal in three ways:
After such a revision, the proposal would not really be in a state to be approved, because it leaves a significant question open: should we use type families or custom constraints? But hopefully the revised proposal would make it easier for people to understand the content of the proposal and the associated tradeoffs, so we could resurrect the conversation and pick which implementation technique is preferable before moving forward.

Does all that sound good to you? If it does, I may attempt to make those revisions myself at some point (though it’s been long enough that I admit I’m not certain how much effort it’s worth to me to personally invest in getting this proposal merged).
It does to me.
I know the feeling. But is your lack of motivation about the proposal or about the implementation? I sense that perhaps you are motivated to get this implemented, but less motivated to have a beautiful writeup of the various implementation choices? I ask because a GHC proposal doesn't need to say much about implementation. Indeed, since we now understand that this one is simply a bug-fix, you could argue that we should simply fix the bug. Since it is a long standing bug, a proposal to describe the change may be good practice, and helpful for some -- but is far less of a big deal than proposals that introduce new features. That said, even if it's not a formal proposal, it is really helpful to discuss, and get consensus around, design choices before getting elbow deep in code. So it's not wasted work! |
I'm still not convinced it is. Which type variables may appear in …? It's underspecified in the original spec as well, so no worries. And I think we established that we are primarily interested in the bug-fix nature of this proposal. That is not to say that it's not worthwhile to fix the lack of scoping in the spec... as part of this proposal or in a separate one, to decouple the orthogonal discussion threads.

I don't think you have! It still makes sense; it just needs the change in framing that you suggest.

I'm not that familiar with what should and shouldn't be part of the proposal, but the user-facing part here is primarily related to error messages. If we can make the impl so that users (a) can't write …

I often feel this way, by the way... But writing a proposal often leads to a carefully-designed implementation, so it's not all in vain.
I still wonder if the right thing to do for arrows is to factor them out into a plugin or similar. They are little used, many issues have accumulated, and it would be a good demonstration of GHC being sufficiently modular/extensible to support new syntax with interesting semantics. This is a bunch of work up front, but it gets a bunch of little-maintained code out of GHC, which I think is good because we have too many features. It also means @lexi-lambda is free to experiment with the implementation and semantics, without being bogged down by the requirements of this proposal process. Finally, the others of us who would like to see something more like https://arxiv.org/pdf/1007.2885.pdf and https://github.com/conal/concat, and therefore have inherently conflicting goals, can use the same plugin foundation to explore in a different direction. This seems like a great win-win provided we can get over the initial factor-out hump: less junk collecting dust in GHC, more concurrent exploration of design space. Fundamentally, I think GHC needs to start shedding features for sanity's sake, and this seems like a good high-value place to begin.
I don’t think that’s really true… I’m not sure I’m much more excited to work on the implementation, either. :) But I would like to be able to call it done, since it would be a pity to have the work I’ve already done go to waste, and that requires doing more work, so I’d like to find the motivation to finish it regardless of how much it may or may not excite me.
Yes, fair enough… and in any case, I agree it would be useful clarification here.
I agree, and I actually care quite a lot about getting the proposal into a good place. I would not be at all satisfied if the implementation were merged without feeling like the proposal and updates to the user’s guide were satisfactory, too. It’s just a matter of doing it!
I don’t agree at all, but that’s probably neither here nor there at this point, anyway. Plugins are still too limited to support arrow notation satisfactorily, so as long as it’s in GHC (and I will certainly not be drafting any proposals to remove it), I figure it might as well work properly. |
This is not a full revision, since some of the proposal text still refers to the old wording, the Ott model is not updated, and some things need to be generally cleaned up. But it’s a step in the right direction.
This is a proposal for modifying GHC’s desugaring rules for custom control operators in arrow notation (aka `(|` banana brackets `|)`). The modified desugaring takes advantage of modern GHC features to support more operators that appear in the wild and to be more faithful to the original paper, A New Notation for Arrows, and to the old implementation used prior to GHC 7.8.

Rendered