Lambda the Ultimate

Macros vs. Higher-order Programming
started 10/27/2003; 8:39:42 AM - last post 11/5/2003; 2:33:22 PM
Peter Van Roy - Macros vs. Higher-order Programming
10/27/2003; 8:39:42 AM (reads: 16781, responses: 82)
Macros vs. Higher-order Programming
I have been hearing a lot about macros recently, from Paul Graham's book On Lisp to the discussions here on Lambda.

I want to understand clearly what all the hubbub is about. What are the respective advantages and disadvantages of macros (i.e., Lisp style) and higher-order programming? To me the situation seems to be as follows (caveat: it has been decades since I actually did real programming with macros). Higher-order programming is more expressive. Macros are syntactically more concise and give more efficient code. (Note that expressiveness and verbosity only have a tenuous connection!)

What I would like to have in Oz is a mechanism for defining linguistic abstractions within the language. The major thing that's needed is a nice way to reify the language syntax into a data structure and back. For example, a quoting/backquoting mechanism inside of which the usual language syntax would be considered as defining a data structure. This would have to be complemented with other mechanisms like determining context (a function that takes a fragment in language syntax and returns its free identifiers).
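
For comparison, here is a minimal sketch of this idea in Scheme, where quote and backquote already reify ordinary syntax as a data structure (the Oz syntax for such a mechanism would of course look different):

;; A code fragment held as plain data, and a backquote template with a
;; hole into which the fragment is spliced:
(define body '(+ x 1))
(define wrapped `(lambda (x) ,body))
;; wrapped is now the list (lambda (x) (+ x 1)); eval maps the data
;; structure back into a procedure:
((eval wrapped (interaction-environment)) 41)   ; => 42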

The main advantage of this mechanism would be convenience. It would not increase expressiveness since the language is already higher-order.

What do you think?
Posted to general by Peter Van Roy on 10/27/03; 8:41:47 AM

Sjoerd Visscher - Re: Macros vs. Higher-order Programming
10/27/2003; 9:27:33 AM (reads: 2660, responses: 1)
A good higher-order language compiler that does inline function expansion could match the speed of macros. I don't buy "syntactically more concise" either. That's simply because current implementations chose to assign short syntax to macros and longer syntax for functions.

Could you give a practical example of your ideas for macros in Oz? (in code)

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/27/2003; 10:15:36 AM (reads: 2638, responses: 0)
A good higher-order language compiler that does inline function expansion could match the speed of macros.

No, you must be talking about a good partial evaluator. Take a look at some of the macros that come with many current Scheme implementations (e.g. PLT MzScheme). For example, there are LALR-parser generator macros (a separate Yacc is obsolete). I have also personally implemented type inferencing / checking macros in Scheme. There is no way to emulate the same thing (compile-time type checking) with just higher-order functions in Scheme.
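
As a tiny illustration of the compile-time aspect, here is a sketch of a checking macro in syntax-case style (checked-sqrt is hypothetical, and the helper names are as in modern Racket): the argument is inspected during macro expansion, so the error is reported before any code runs.

(define-syntax checked-sqrt
  (lambda (stx)
    (syntax-case stx ()
      ((_ x)
       (if (number? (syntax->datum #'x))
           #'(sqrt x)
           (raise-syntax-error 'checked-sqrt
                               "argument must be a literal number"
                               stx))))))

;; (checked-sqrt 4)    ; => 2
;; (checked-sqrt "4")  ; rejected at expansion time, not at run time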

Jacob Matthews - Re: Macros vs. Higher-order Programming
10/27/2003; 10:30:52 AM (reads: 2626, responses: 0)
Felleisen answered this question better than I can on the LL1 listserv a while back.

Daniel Yokomiso - Re: Macros vs. Higher-order Programming
10/27/2003; 11:01:19 AM (reads: 2598, responses: 0)
With macros we can add a layer of sugar to some kinds of expressions that would otherwise be cumbersome. For example we can write Haskell's do notation or list comprehensions using macros instead of dedicated syntax. In these cases the special notation is "syntactically more concise" than the higher-order syntax. With a good macro system we can make let, where, etc. be macros generating lambdas and applications. In these cases too the special notation is clearer than the higher-order syntax.
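
For instance, here is a sketch in Scheme of let defined as sugar for lambda plus application (named my-let so as not to shadow the built-in):

(define-syntax my-let
  (syntax-rules ()
    ((_ ((name value) ...) body ...)
     ((lambda (name ...) body ...) value ...))))

;; (my-let ((x 1) (y 2)) (+ x y))
;; expands to ((lambda (x y) (+ x y)) 1 2)   ; => 3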

Daniel Yokomiso - Re: Macros vs. Higher-order Programming
10/27/2003; 11:13:47 AM (reads: 2593, responses: 0)
There are two good discussions here about macros:
ll1-discuss on macros and JScript is a Functional Language

My wishlist for a macro system is: full BNF-like support for form writing, pattern matching for syntax rules and multi-staged evaluation. In the end it should be able to support Do expressions as a macro (Haskell-like syntax):

class Macro a where
    parse :: Tokens -> a

data DoExp = Bind | DoLet | Exp deriving (Macro, Show)

syntax do :: [DoExp] -> Code
syntax do [e] = Value e
syntax do (e:stmts) = Apply (Apply (Fun (>>)) e) (macroCall do stmts)
syntax do ((p <- e):stmts) = Apply (Apply (Fun (>>=)) e) (Lambda p (macroCall do stmts))
syntax do ((DoLet decls):stmts) = Let decls (macroCall do stmts)
syntax do [] = error "Do cannot be empty!"

I use "<-" to be constructor for binds and "[:" "]" to denote macro calls (in the form "[:<macro-name> <arguments>]"). The macro's type gives us a parse expression to transform the tokens in some data-struture for pattern matching and the value of the macro has "Code" type. The compiler is responsible for running and evaluating the macro calls, but we can use a macro to generate a macro (after all a macro is a valid form of code) for later use.

IMHO if we have a simple (non-cryptic) syntax extension system people will be comfortable using it. Also I think it is important to distinguish macro calls from normal calls, so people can see the possible borders of distinct syntax in their code.

logarithm - Re: Macros vs. Higher-order Programming
10/27/2003; 12:26:47 PM (reads: 2530, responses: 0)
I am having trouble locating the intersection at which higher-order programming and macros would be in conflict. Macros can reduce some tedium in programming by automating code generation; this can have the neat consequence of allowing you to -- as some would say -- "extend" what your language can do. The consequence seems far neater when combined with closures, as in Lisp and Scheme.

Higher-order programming, on the other hand, lets you pass functions around to other functions. That's also neat; you can generalize some behavior (like making a new list based on an old one with map) into one function, rather than having to write several functions that do essentially the same thing.
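
For example, in Scheme one higher-order map replaces a whole family of hand-written loops:

(map (lambda (x) (* x x)) '(1 2 3))       ; => (1 4 9)
(map string-length '("lambda" "macro"))   ; => (6 5)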

Both of these approaches let you express the same program with less effort than you could without them. They both add to the expressiveness of the language; but they do so in different ways. I think that one really needn't consider macros as versus higher-order programming. (And just because you can inline (e.g. macroify) higher-order functions doesn't, I think, mean that macros should lose their appeal.)

Mark Evans - Re: Macros vs. Higher-order Programming
10/27/2003; 4:41:09 PM (reads: 2380, responses: 0)

Might it be unwise to phrase topics as "X versus Y?" The formulation presupposes a conflict, which even if it holds, might in turn conceal a false-choice fallacy (there can be third alternatives, Z). Perhaps "X and Y?" instead?

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/27/2003; 6:21:36 PM (reads: 2353, responses: 0)
What I like about macros is the great programs people write with them. I'm enthusiastic about macros because I want to be able to write programs that good.

The most wonderfully concise example I've seen was Darius Bacon's macro for transforming recursive functions into dynamic programming right here on LtU. Isn't it marvellous?

Two large examples that I've found especially awe-inspiring are Peter Norvig's book Paradigms of Artificial Intelligence Programming and Guy Steele's thesis on constraint programming. Norvig's book is the best demonstration of macros and general Lisp programming that I've seen, and Steele's the best demonstration of the whole Lisp programming environment and style applied to something "new" and cool.

I've only read parts of On Lisp, but I think Norvig's work is much more likely to impress users of fancy modern languages.

As for "expressive power", this isn't a term I'm very comfortable with. As I interpret it you could remove pattern-matching from Erlang without technically reducing its expressive power, but it wouldn't be the same language, and I wouldn't want to use it. Unless you have a better definition of expressive power than I do (not unlikely!), it all seems dangerously close to those "Turing-equivalence" arguments that miss the point. If you can write Darius Bacon and Peter Norvig's programs just as well without macros, then I'll happily concede the point (and be very eager to learn how to do it).

I don't discount the efficiency argument either. If a program has to be really fast, macros seem like one of the more practical ways to handle it without sacrificing clarity. There's a very interesting post by Thomas Burdick about this.

I hope the stuff cited above shows the case for macros.

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/27/2003; 6:48:33 PM (reads: 2334, responses: 0)
logarithm: I think the LL1 threads that Daniel cited have an illuminating discussion of the "vs." in "Higher-order vs. Macros". It's impressive to see how languages like Haskell, ML, and Smalltalk use higher-orderness (or laziness) to tackle many problems that Lisp uses macros for, such as conditionals, iteration, and so on. Avi Bryant shook my faith a bit :-)

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/27/2003; 7:21:48 PM (reads: 2324, responses: 3)
My wishlist for a macro system is: full BNF-like support for form writing...

Do you mean being able to "splice" new tokens/productions into the language's parser? If so, one extra thing to consider is supporting the extended syntaxes in your editor, e.g. for automatic indentation. This could take a fair bit of extra work.

I'm really interested in the problem of incrementally parsing source code in a text editor, for the purpose of indentation and other "structural" things. It seems much harder than parsing in general because most of the time the source you're working with is only partially written. If this is a "solved" problem, I would really appreciate a reference.

For Lisp syntax at least it is solved. Emacs has a generic indentation function for which it's a one-liner to say how to indent new language constructs, and it's easy to operate locally on sexps even when their enclosing expressions aren't well-formed.

For Erlang syntax this is interesting. There has been a proposal to extend exception handling with a "try/catch" construct for some time, and recently we had a look at how to make the Emacs mode indent it properly. It turns out to be pretty hard! The current indenter is a simple keyword-recognising push-down automaton, but the new construct would overload a keyword in a way that this automaton just can't recognise. To handle it correctly might require a rewrite of the indenter as a full-blown parser, or be avoided by a change of language syntax.

The devil's in the details. I hope the current research is considering these practicalities and not overlooking them by being too focused on semantics. (As I say, if they've solved the "parser-in-editor" problem, I'm impressed and I want to hear about it :-))

(This raises the philosophical question: if Niklaus Wirth were to design a language with extensible syntax and good editor support, would he base it on S-expressions?)

sean - Re: Macros vs. Higher-order Programming
10/27/2003; 9:26:18 PM (reads: 2287, responses: 0)
You may already be aware of it, but Eric Ludlam's Semantic Bovinator (http://cedet.sourceforge.net/semantic.shtml) is trying to do something like this, working off a more-or-less grammar. At the other extreme, cperl-mode does an incredible job parsing Perl. But a glance at the code tells me that it probably won't generalize...

/s

Vlad S. - Re: Macros vs. Higher-order Programming
10/27/2003; 10:09:42 PM (reads: 2275, responses: 0)
Since I don't know as much about Scheme hygienic macros and syntax-case, I'll talk about Common Lisp macros and their usage.

First, it helps to know exactly what Common Lisp macros do (many people seem to be confused about this in one way or another). CL macros are first-class functions that are passed their entire form (the list used to invoke them - none of the elements are evaluated) at evaluation time (this means that the form being evaluated is already a list), and then are expected to return a form that is stuck back in the place of the macro-call* in the list. Notice the definition of form: it can include self-evaluating objects (which includes all the types and classes, except the symbol and list). This amounts to CL macros being hacks to control evaluation, which incidentally are popular for syntax re-writing.

I think evaluation control in Lisp is another thing that people gloss over. With respect to macros, the only reason^ you need quote is because there is no special form to handle "free" symbols in CL. Quoting for lists can be implemented as a macro that returns a form that, when evaluated, produces a list of (possibly nested) lists of quoted symbols (example: (quote (a b)) => (list (quote a) (quote b))). Self-evaluating objects evaluate to themselves anyway. Since you are considering representing non-atomic forms as data structures, this will probably apply as well.

Are these hacks worth it? I would say yes. Many people complain that Lisp's macros are there just to cover up the fact that it doesn't provide enough useful constructs (I happen to think that the ability to provide your own constructs to fit your problem domain is a lot more useful, but I also suspect these people have never seen the size of the standard in dead tree form). The convenience argument is pretty easy to convince yourself of once you see what some useful Lisp macros expand to. The other reason, efficiency, likewise turns out to be a valid one. The CL standard tried to be as implementation- and machine-agnostic as practically possible, and encourages highly portable coding. Macros are a very good way of providing portable and (once again more importantly) domain-specific optimizations.

The downside? Higher-order programming and macros do in fact come in conflict, since they both look the same (the famous example being (apply (function and) some-list)). This necessitates explicitly wrapping the macro in a lambda. I will have to agree with Daniel that you might want to distinguish the two, at least at definition-time. This will allow the code-walker to implicitly wrap macros in offending sections. This type of analysis is not possible in CL since at any time a symbol may be defined to be a macro~ (besides which it would seriously violate the language specification in a number of places).

* - This terminology may not be correct, but I think it describes what is going on best.

^ - This skirts around the (largely theoretical) problem of quote and constant forms and literal objects in CL.

~ - Sorry, but I am unfamiliar with Oz, and it may very well allow similar dynamic programming.

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/27/2003; 11:29:41 PM (reads: 2254, responses: 0)
Come to think of it, macros were awfully handy in Distel. That's a practical port of a recognisable subset of Erlang to Emacs Lisp, and is interoperable with regular Erlang via the distribution protocol.

That particular example would possibly be easier in modern functional languages because they're more similar to Erlang to begin with. Still, Lisp did a great job of adopting Erlang features quite foreign to itself, like pattern matching. (But I was really hurting for continuations or the like :-))

It seems like a nice example because it's a real-world case of one language needing to mimic another.

Peter Van Roy - Re: Macros vs. Higher-order Programming
10/27/2003; 11:42:26 PM (reads: 2262, responses: 0)
As for "expressive power", this isn't a term I'm very comfortable with. As I interpret it you could remove pattern-matching from Erlang without technically reducing its expressive power, but it wouldn't be the same language, and I wouldn't want to use it. Unless you have a better definition of expressive power than I do (not unlikely!), it all seems dangerously close to those "Turing-equivalence" arguments that miss the point. If you can write Darius Bacon and Peter Norvig's programs just as well without macros, then I'll happily concede the point (and be very eager to learn how to do it).

My definition of expressive power has nothing to do with Turing equivalence. It's something like this: adding a new language construct increases the expressive power of a language if it can only be expressed in the original language with nonlocal changes. For example, adding exceptions to the declarative model of chapter 2 in CTM increases the expressive power, since the only way to do the same thing in the declarative model is to add boolean variables and checks to all procedures (checks like: if error then return immediately). This was proved in the 1970s! (All of CTM is structured in this way, with computation models becoming more and more expressive, in the precise sense defined above. See Appendix D of CTM for a brief summary.)
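
To make the nonlocality concrete, here is a sketch in Scheme (next-token, next-token*, and process are hypothetical helpers) of the boilerplate that the exception-free encoding forces on every intermediate procedure:

;; Without exceptions, every layer must check and propagate by hand;
;; the symbol 'error marks failure:
(define (parse-line line)
  (let ((tok (next-token line)))
    (if (eq? tok 'error)
        'error              ; this check is repeated in every procedure
        (process tok))))

;; With exceptions, the checks vanish from all intermediate layers; only
;; the raiser and the eventual handler mention the error at all:
(define (parse-line* line)
  (process (next-token* line)))   ; next-token* raises instead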

Removing pattern matching from Erlang is not so simple as it seems, since there is more than just pattern matching going on! The Erlang 'receive' statement accesses the process's mailbox, which is otherwise hidden. Removing the 'receive' statement would indeed reduce Erlang's expressive power. On the other hand, changing 'receive' so that the user uses nested ifs instead of pattern matching would not reduce the expressive power according to my definition. (If you want to understand exactly what the 'receive' statement does, see section 5.7 of CTM.)

Peter Van Roy - Re: Macros vs. Higher-order Programming
10/28/2003; 12:00:20 AM (reads: 2242, responses: 0)
What I like about macros is the great programs people write with them. I'm enthusiastic about macros because I want to be able to write programs as good.

Fair enough! BTW, if you want to see a neat program written in Oz, check out the Oz Minesweeper, written by Raphaël Collet. This looks like the usual brainless minesweeper game, but look again: it uses deduction to aid the player. It can do both local deduction and global deduction, so you can see the difference between the two. With it, you can play fields of 2000 squares (useless, of course, but it's fun to see the deduction rippling through the playing field). The whole program is around 500 lines, including everything (all the GUI code, deduction code, and all utilities). It uses the declarative/procedural GUI tool (QTk module) and constraint-based deduction.

Another neat program is FlexClock, by Donatien Grolaux. This is a clock utility that changes its view depending on window size. It's especially impressive if you can configure the window manager to send update events while you are resizing the window, since the view will change in a marvelously fluid way. It uses QTk; see chapter 10 of CTM for an explanation of how it works. The whole program including 16 views is around 400 lines (and half of that is the definitions of the analog clock widget and the calendar widget).

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/28/2003; 12:37:42 AM (reads: 2223, responses: 0)
My definition of expressive power has nothing to do with Turing equivalence. It's something like this: adding a new language construct increases the expressive power of a language if it can only be expressed in the original language with nonlocal changes.

I see. So, by that definition the question is: would syntactic macros make a suitable chapter for the CTM book? I'll ponder this as I read :-)

Eric Ludlam's Semantic Bovinator

I'm not sure if the documentation is just a bit stale, but this seems to only parse the "top-level" expressions and not their subexpressions. That would make it nice for code-browser sort-of things, but not for indentation. (I should have a closer look, because it may just be that the docs aren't up to date.)

Daniel Yokomiso - Re: Macros vs. Higher-order Programming
10/28/2003; 3:26:08 AM (reads: 2166, responses: 0)
Eon will have syntax extension but it'll always be between square brackets:

[try
    ...
 finally
    ...]
An editor would always know where the macro call begins and ends, so you could write a piece of code that would be aware of the macro syntax. I thought about this for some time and I think the best solution is to make macros first class so we can write an "indent :: Macro -> [String]" function for our macros.

Pascal Costanza - Re: Macros vs. Higher-order Programming
10/28/2003; 4:34:16 AM (reads: 2128, responses: 0)
I don't think that the claim that macros improve efficiency is the most important. I think the real advantage of macros is that you can abstract away from details of protocols. For me, the "Hello, World" of macro programming is this.

When you want to bind to a resource, you usually have to make sure that you also unbind again later on. The standard pattern in many languages, modulo syntactic variations, is this:

try: bind resource
     do something
finally: unbind resource

The fact that you need to place the unbind in a finally statement reflects the requirement a protocol imposes on the use of resources: you have to make sure to execute the necessary steps in a certain order, and you have to make sure that certain steps are always executed, no matter what. There are lots of examples like this in programming, and especially more complicated ones.

A macro allows me to abstract away from this. The same code, in a language with macro support, would look like this:

with-bound-resource:
  do something

The macro "with-bound-resource" takes care of executing the right steps at the right time.

Common Lisp macros are especially powerful in this regard, because they can perform arbitrary analysis of the code that is being passed to them, and transform the code in arbitrary ways. This ensures that, no matter how complicated the protocol you need to adhere to, you can be sure that this is possible with Common Lisp macros.

Here is an example macro that illustrates this:

(defmacro p-eval (form)
  (if (atom form)
      `(print ,form)
    `(print
      (,(car form)
       ,@(mapcar (lambda (subform)
                   `(p-eval ,subform))
                 (cdr form))))))

This macro inserts print statements at each evaluation step. Here is an example of its use:

> (p-eval (* 5 (+ 6 7)))
5
6
7
13
65
65

This demonstrates that you can reach all parts of the code that is being passed to a macro.

The fact that macros are usually applied at compile time just reflects the fact that you usually have the source code available at compile time, but not at runtime anymore. In principle, there is no reason not to allow macro expansion at runtime, but this would mean that you would also need to keep lexical environments and source code representation around at runtime.

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/28/2003; 4:59:02 AM (reads: 2108, responses: 0)
When you want to bind to a resource, you usually have to make sure that you also unbind again later on.

Perhaps I misunderstood what you meant, but it seems to me that you can basically do the same with higher-order functions. Instead of something like

  (bind-resource (<var> <init>) <form>)

you would have

  (call-with-resource (lambda (<var>) <form>) <init>).

I've actually used similar techniques even in Java. For example, I defined a simple method,

  executeTransaction(Connection conn, VoidToVoid transactionProc),

that took a functional object as an argument and took care of setting autocommit on and off and performing commit() or rollback() depending on whether the transaction procedure completed without exceptions or not.
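
For completeness, a minimal Scheme sketch of that call-with-resource shape (bind, unbind, and use are hypothetical names); dynamic-wind guarantees release even on a non-local exit, much as the Java version leans on finally-style handling:

(define (call-with-resource proc init)
  (let ((r (bind init)))
    (dynamic-wind
      (lambda () #f)              ; nothing extra on entry
      (lambda () (proc r))
      (lambda () (unbind r)))))   ; release even on abnormal exit

;; usage: (call-with-resource (lambda (r) (use r)) some-init)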

Patrick Logan - Re: Macros vs. Higher-order Programming
10/28/2003; 7:32:13 AM (reads: 2033, responses: 0)
Instead of something like
  (bind-resource (<var> <init>) ...)
you would have
  (call-with-resource (lambda (<var>) ...) <init>)

You are right. In fact it is not uncommon to provide both forms in situations like this: the syntactic form for "hand coding" and the functional form for "higher order programming".

This is somewhat common I think because Lisp has such wordiness for defining a function, i.e. (lambda (...) ...)

In Smalltalk, which has a cleaner syntax for functions, ironically, you would just do something like this...

someResource do: [ :var | var doSomething. etc.]

andrew cooke - Re: Macros vs. Higher-order Programming
10/28/2003; 2:20:22 PM (reads: 1887, responses: 0)
[in response to parsing for editors]

have you looked at parsing that produces forests of possible asts (i can't remember the name, but it must have been mentioned here, although searching for 'parse forest' failed)? that might lead to an approach for parsing code as it's being developed. you might have to wait for moore's law for a little while, though :o)

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/28/2003; 3:12:23 PM (reads: 1871, responses: 0)
I guess the reason why people think that higher-order functions and macros would somehow be solving the same things is that both are (usually, but not always for macros) parameterized by "behavioral code" rather than simple "dumb data". To the untrained eye, I guess, it makes them look the same.

Such a simplistic analysis does not uncover the truth, however. The truth is that macros have almost nothing to do with higher-order functions and, when used properly, macros and higher-order functions are used for solving completely different kinds of problems.

Macros are used for extending the syntax of a language. Macros are for syntactic abstraction, period. By their very nature, except in weird/broken language designs, macros are statically evaluated and manipulate the syntax of the language rather than run-time data. There is a semantic guarantee that all macros are evaluated in compile-time. This is completely orthogonal to higher-order functions.

A very clear example that manifests the difference between higher-order functions and macros is the ability of macros to describe static, syntactic type systems. It simply can not be done without a powerful enough syntactic abstraction facility. Higher-order functions will simply not do, because they are not evaluated statically and are not syntactic in nature.

The de facto standard syntax-case macro system of Scheme is powerful enough for making a language such as ML practically obsolete. (You can basically extend that to any statically typed, sequential, strict, imperative language.) What I mean is that except for petty differences in syntax, it is quite easy to extend Scheme using syntax-case macros to perform the exact same type inferencing as a language such as ML - with the exact same static guarantees as provided by ML.

I claim that macros, or any Turing-complete syntactic abstraction facility, clearly increases expressivity. There are invariants about programs that simply can not be expressed without such a facility [except with external tools] - not even with any theoretically possible fixed (non Turing-complete) type system. It should not be very difficult to prove this formally.

Ken Meltsner - Re: Macros vs. Higher-order Programming
10/28/2003; 4:50:44 PM (reads: 1833, responses: 0)
As a total aside: macros are usable by motivated, but not brilliant programmers. I wrote a Common Lisp macro that expanded Einstein-notation tensor math (repeated indexes imply loops, so you might write the Lisp equivalent of A(I) B(I, J) to produce a new tensor C(J)) into Lisp code.
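
As a sketch of the kind of code such a macro might expand into (hypothetical names, square dimensions, and Scheme rather than Common Lisp): the repeated index I becomes a summing inner loop, while the free index J survives in the result.

(define (contract-a-b a b n)   ; expansion target for C(J) = A(I) B(I,J)
  (let ((c (make-vector n 0)))
    (do ((j 0 (+ j 1))) ((= j n) c)   ; free index j ranges over C
      (do ((i 0 (+ i 1))) ((= i n))   ; repeated index i is summed away
        (vector-set! c j
                     (+ (vector-ref c j)
                        (* (vector-ref a i)
                           (matrix-ref b i j))))))))   ; matrix-ref assumed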

It was an incredibly powerful trick -- vastly simplified various stress/strain terms in crystallography -- and although I didn't realize it at the time, an example of a domain-specific language with a "user-friendly" syntax.

I have no idea how I would have done this in a more conventional language, but it convinced me of the value of macros.

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/28/2003; 6:53:33 PM (reads: 1811, responses: 0)
p-eval is a bold example, Pascal. :-)

It only handles a small set of inputs -- much smaller than eval does. For example, this expression will expand to an infinitely large form, so you'll exhaust all your memory trying to expand it:

(p-eval (p-eval (* 5 (+ 6 7))))

And the very fact that you have customizable syntax means that you can't generally recurse down subexpressions expecting them to be in (<function-name> . <args>) syntax. For example:

(p-eval (let ((x (foo))) (+ x x)))
macroexpand-->
(print
 (let (p-eval ((x (foo))))
   (print (+ (print x) (print x)))))

Which isn't a valid program. To expand correctly, you would need a "code-walker" that will expand the macros in subexpressions into some macroless kernel language that you understand, and then insert prints only where appropriate for that language. (I used such a code-walker to produce this full macroexpansion - note that it correctly avoided treating the p-eval in the let bindings list as a macro.)

Or perhaps you could peek at the interpreter's implementation and add some `advice' to eval and apply for the dynamic extent of your call? :-)

Macros do give you a lot of rope. I believe it's at this point that we Lisp fans traditionally invoke analogies about master chefs and sharp knives. :-)

Mark Evans - Re: Macros vs. Higher-order Programming
10/28/2003; 7:49:21 PM (reads: 1781, responses: 0)

Peter, what do you think about Stratego? (Haskell fans note HSX.)

Avi Bryant - Re: Macros vs. Higher-order Programming
10/28/2003; 9:16:46 PM (reads: 1761, responses: 0)
Luke Gorrie: Avi Bryant shook my faith a bit

Now, that's what I like to hear...

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/28/2003; 9:52:01 PM (reads: 1762, responses: 0)
Avi, did you ever find the counter-example you were looking for? If so, what was it? And if not, what do you make of Darius Bacon's defun-recurrence dynamic-programming macro cited above?

I'm surprised that Guy Steele's example didn't do it. Surely Smalltalk must be exceptional as a macroless language that can handle that problem.

Sjoerd Visscher - Re: Macros vs. Higher-order Programming
10/29/2003; 1:00:49 AM (reads: 1725, responses: 1)
Both p-eval and Guy Steele's example aren't very convincing arguments for macros. They can both be done with HOP if the language has good enough introspection to be able to inspect the code inside a function.

Ehud Lamm - Re: Macros vs. Higher-order Programming
10/29/2003; 1:52:21 AM (reads: 1716, responses: 0)
But you are changing the rules of the game!

Regarding functions as opaque is a standard, fundamental assumption. Normally when people argue that HOFs can replace most uses of macros, they mean that this can be done without changing anything else.

Naturally, if you remove this assumption, arguments about source-level issues like macros are bound to change.

Same goes for arguing for an AST data type, and the ability to write code to handle these ASTs prior to evaluation. By introducing these mechanisms you are proposing an alternative macro programming model, not arguing against macros.

Sjoerd Visscher - Re: Macros vs. Higher-order Programming
10/29/2003; 2:26:44 AM (reads: 1687, responses: 0)
What are the rules? Having access to the AST has as much to do with macros as with HOP, if not less. HOP requires valid functions, so the AST is already known. Macros are for extending the syntax, so having access to the AST should only work in special cases.

Avi Bryant - Re: Macros vs. Higher-order Programming
10/29/2003; 2:45:04 AM (reads: 1683, responses: 0)
Luke, I still haven't found a good counter example - but I haven't spent much time looking.

defun-recurrence isn't it - it's quite possible I'm missing something, because I don't really see why macros are necessary there. Doesn't this basically do the same thing?

(define (recurrence f)
  (lambda (n)
    (let ((vals (make-vector (+ n 1))))
      (do ((i 0 (+ i 1)))       ; fill the table bottom-up
          ((> i n))
        (vector-set! vals i
                     (f (lambda (x) (vector-ref vals x)) i)))
      (vector-ref vals n))))

(define fib
  (recurrence (lambda (fib n)
                (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))))

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/29/2003; 2:53:54 AM (reads: 1682, responses: 0)
Um, yeah.

Back to the drawing board..

Frank Atanassow - Re: Macros vs. Higher-order Programming
10/29/2003; 3:31:18 AM (reads: 1654, responses: 0)
Vesa wrote: What I mean is that except for petty differences in syntax, it is quite easy to extend Scheme using syntax-case macros to perform the exact same type inferencing as a language such as ML - with the exact same static guarantees as provided by ML.

No, your argument is self-contradictory. If macros are powerful enough to perform arbitrary rewriting of the source (at the level of ASTs), then they necessarily break every static guarantee imposed by the language: unrestricted macros can only decrease static safety.

To make your argument work, you have to restrict your claims about static guarantees to the object language under consideration, and that requires a decision procedure for distinguishing object terms from other terms, in other words, a static type system for the metalanguage. For example, if you are using macros to define a sublanguage which satisfies a static type discipline, then the static guarantees only apply to programs written in that sublanguage, not to the language as a whole. To decide if a particular guarantee holds for a particular term, you need to decide first if it is in that sublanguage or not; that is, you need to assign it a type `meta' or `object', and this already presumes a type system for the metalanguage.

Furthermore, that sublanguage cannot include (unrestricted) macros, since they could be used to subvert the type discipline. The sublanguage could include macros, but they would have to be type-preserving, among other things. To restrict the macro use and definition in the sublanguage to exclude ones which are not type-preserving, you again need a type system in the metalanguage.

So you are back where you started: you need types to get types, and there's no such thing as a free lunch.

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/29/2003; 4:30:34 AM (reads: 1634, responses: 0)
No, your argument is self-contradictory. If macros are powerful enough to perform arbitrary rewriting of the source (at the level of ASTs), then they necessarily break every static guarantee imposed by the language: unrestricted macros can only decrease static safety.

No, I meant that the macros would essentially restrict the language. In PLT MzScheme, it would basically look something like this:

  (module my-module restricted-hindley-milner-typed-scheme
    ;;         ^                     ^
    ;;   Name of module   The restricted syntax/language

    ;; ...Code here must conform to the chosen language
    ;; and if the language is designed properly, then
    ;; any code here simply can not break any static
    ;; guarantees.
    )

With careful, but not very difficult, programming the above can be done today (assuming I haven't misunderstood the module system of MzScheme - you might still need some extra macro wrapping of the code, but that is not important).

Alternatively, without the module system, you could simply do this:

  (restricted-hindley-milner-typed-scheme
   ;; Your code goes here
   )

The above can definitely be done today. The only important point is that the type checking macro must really restrict the allowed code. I have, in fact, previously implemented a simplistic, non-strictly restricting, version of the above kind of macro. I know for a fact that a strictly restricting version can be done.

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/29/2003; 4:38:49 AM (reads: 1624, responses: 0)
So you are back where you started: you need types to get types, and there's no such thing as a free lunch.

Hey, I never said it was free. ;) What I said is that the syntax-case macro system of [Mz]Scheme is powerful enough for describing static type systems. As far as I know, it is an undeniable fact.

[What you do get for free, in such an implementation of a typed language, are the evaluation rules and the framework for building the language - you don't need to design a complete optimizing compiler to do it.]

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/29/2003; 5:07:34 AM (reads: 1617, responses: 0)
unrestricted macros can only decrease static safety

Well, I must confess that I'm not highly interested in strictly disallowing any potentially non-terminating programs. I'm more interested in making it possible to describe type systems in an ad hoc fashion inside program code. What I would like to do is something like this:

  (check-with type-system-A
    ;; implementation of some module
    )

  (check-with type-system-B
    ;; implementation of some module
    )

What I mean above is that the type systems A and B would be chosen because they provide the most practical type system for checking the code inside the module. Depending on the software being designed, the most practical type system might be a system that guarantees termination or it might be a type system that merely checks for some trivial structural typing errors.

Actually, even the above is not enough. I want this to be possible:

  (check-with (specify-type-system
                ;; Formal typing rules of the type system
                )
    ;; implementation of some module
   )

I want anonymous, first-class type systems. First-class means that it must be possible to manipulate type systems at compile time just as first-class values are manipulated at run time.

I want to make it possible for programmers (software designers) to design and implement type systems in ad hoc fashion. I want to make it possible for programmers to choose different type systems for different parts of a program and still work within the same language framework. If the types produced by the different type systems are compatible enough, then it should be both technically feasible and semantically sound to make cross module calls, for example.

Of course, I'm not intending such a system for programmers who don't understand the type systems they are specifying. I also don't mean that every such type system would have to be formally proven to be sound, because even an unsound type system can sometimes be useful - you just have to understand the potential problems.

Pascal Costanza - Re: Macros vs. Higher-order Programming
10/29/2003; 5:09:35 AM (reads: 1609, responses: 0)
Perhaps I misunderstood what you meant, but it seems to me that you can basically do the same with higher-order functions. Instead of something like

  (bind-resource (<var> <init>) <form>)

you would have

  (call-with-resource (lambda (<var>) <form>) <init>).

Yes, any single example that you can express with a sufficiently expressive language construct can be expressed by another sufficiently expressive language construct. That's not surprising.

What matters in language design is which language constructs are convenient to use.

With regard to your proposed solution: This example gives a solution for an extremely simple protocol. (That's why I called it the "Hello, World" of macro programming.)

Now assume your protocol is more complicated, and you need to insert some protocol-related statements between certain calls of the code. With the macro solution you don't need to change anything at the sites that use the macro: the macro can just analyze the code that is being passed to it and perform the necessary transformations.

With the higher-order function approach, you need to split up the arguments into smaller lambda expressions, so that the calling function can do stuff in between. In other words, your abstraction starts to leak.
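
A concrete Scheme sketch of that leak, with a hypothetical protocol that requires a (checkpoint) call after every statement:

;; The macro weaves the protocol step in; call sites stay untouched:
(define-syntax with-checkpoints
  (syntax-rules ()
    ((_ stmt ...)
     (begin (begin stmt (checkpoint)) ...))))
;; (with-checkpoints (step-1) (step-2)) expands to
;; (begin (begin (step-1) (checkpoint)) (begin (step-2) (checkpoint)))

;; The higher-order version forces every caller to pre-split the body
;; into thunks so the library can run code in between:
(define (call-with-checkpoints . thunks)
  (for-each (lambda (thunk) (thunk) (checkpoint)) thunks))
;; (call-with-checkpoints (lambda () (step-1)) (lambda () (step-2)))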

I know that these things are not unsolvable with higher-order functions. Especially, monads can help you here. But again: language design is not about the "what", it's about the "how". (If it were only about the "what", we wouldn't need more than Turing machines or assembly language.)

To put it in other words: I don't understand why the fact that higher-order functions, continuations, monads, Common-Lisp-style macros, etc., have some overlap in applicability is a good argument to rule out any single one of these features.

p-eval is a bold example, Pascal. :-)

It only handles a small set of inputs -- much smaller than eval does.

Yes, but p-eval wasn't meant to demonstrate a particularly useful macro, was it? ;)

It was only to show that, in principle, you can inspect and change any part of the code that is being passed to a macro.

(Furthermore, in many cases you can get very far with macros that generate local macros with symbol-macrolet and macrolet, without the need to resort to full code walkers.)

Vesa Karvonen - Re: Macros vs. Higher-order Programming
10/29/2003; 5:33:00 AM (reads: 1603, responses: 0)
I don't understand why the fact that higher-order functions, continuations, monads, Common-Lisp-style macros, etc., have some overlap in applicability is a good argument to rule out any single one of these features.

I don't know what others may think about this issue, but in my opinion, the core of a language should be designed to be minimal in a mathematical sense:

No single core language feature should be expressible as a combination of other core language features.

Everything that can be expressed as combinations should be implemented in libraries rather than specified in the core language.

The reason for this principle is that it greatly simplifies the design of compilers and other tools for the language. It also makes it easier to formally reason about the language.

Of course, I don't mean to be simple minded about the choice of core language features. Principles are just principles.

Now, as I have argued, compile-time macros can not be expressed as a combination of higher-order functions and other run-time constructs. This means that a viable language design, in my opinion, must include a powerful enough compile-time syntactic extension facility.

If a language does not provide a powerful enough syntactic extension facility the following will happen: The day the language is fixed, it becomes obsolete. There will be dozens of incompatible language extensions that could have been more or less easily expressed using a syntactic extension facility. The language slowly dies and will eventually be replaced by a language that basically just has a different syntax (different static/syntactic properties).

In my book, it simply is not a rational choice to design a language completely without any syntactic abstraction facility. Of course, my rationale for language design is longevity. I want to design an invariant (closed) language that can provably live forever without becoming obsolete, because the language is so extensible (open) that any desirable static semantic properties can be achieved within the language using abstraction facilities provided by the language. This is the famous Open-Closed Principle attributed to Bertrand Meyer.

[Yes, I know what you think now. Unless someone else gets there first, I'll promise to prove you wrong in the next ~40 years. :)]

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/29/2003; 6:19:35 AM (reads: 1580, responses: 0)
What an enjoyable discussion :-)

For my part, I'm ready to concede that I can't readily come up with a single example of macros providing clarity of expression beyond what you guys can cook up without them. This is a pleasant surprise. It seems I should have listened to Matthias Felleisen to begin with (not that this is what he said!), as Jacob Matthews suggested.

But while we're near the subject, let me mention Edi Weitz's CL-PPCRE (Portable Perl-Compatible Regular Expressions) as an impressive use of macros. By writing a neatly integrated compiler for regular expressions with Lisp macros, he's apparently managed to out-perform Perl! The hack-value alone makes this a program very worthy of... expression :-)

Patrick Logan - Re: Macros vs. Higher-order Programming
10/29/2003; 6:47:31 AM (reads: 1575, responses: 0)
Um, yeah. Back to the drawing board..

The define-recurrence syntax is playing some name binding games. That *is* a valid example that does not exist in Smalltalk.

A better example to look at (but I have to go out the door soon) may be the state machine from LL1's Swine Before Perl (PDF)...

(define (v s)
  ((if (eq? (first s) 'c)
       (automaton init
        (init : (c -> loop))
        (loop : (a -> loop)
              (d -> loop)
              (r -> end))
        (end : (r -> end)))
       (automaton see0
        (see0 : (0 -> see1))
        (see1 : (1 -> see0))))
   s))

Luke Gorrie - Re: Macros vs. Higher-order Programming
10/29/2003; 7:58:56 AM (reads: 1546, responses: 0)
Aren't they overlooking the most obvious syntactic abstraction?

/^c(a|d)+r$/ || /^(0|1)*$/  {exit 0} ; {exit 1}

Or do they consider the verbose sexp version a hip and cool Smart Regexp Syntax, and the concise one Stupid, ugly, evil, and insidious? ;-) [Slides 46,47]

I also wonder why they match strings like "crrrrr" and don't indicate the accepting states. Probably not the reason that springs to my cynical mind.. :-)

Patrick Logan - Re: Macros vs. Higher-order Programming
10/29/2003; 10:49:11 AM (reads: 1475, responses: 0)
Aren't they overlooking the most obvious syntactic abstraction?... I also wonder why they match strings like "crrrrr" and don't indicate the accepting states.

The example is pedantic. Think of a more complex stream of objects that does not have a concise regular expression alternative syntax.

Darius Bacon - Re: Macros vs. Higher-order Programming
10/31/2003; 2:43:57 AM (reads: 1200, responses: 1)
Macros are like dynamic typing: they both help to avoid locking programmers into the scheme of a language. Lisp in the 60s didn't even have WHILE loops, let alone a modern type system. Lisp without macros could have survived till today, but it'd be surviving like Fortran -- successive redesigns sharing a name. Modern Lisp features tended to evolve in the user community with hackers having a bright idea and seeing it get adopted and modified -- like open-source language evolution today only without you needing to grok implementation internals or get patches accepted. So having macros is a political decision about what kind of 'party' you're trying to form. I guess that's true of language design in general, it's just especially clear with this issue.

Lisp macros resemble dynamic typing in another way: they both feel kind of grotty next to beautiful discoveries like Haskell monads or Hindley-Milner types. (There seems to be a bunch of Lisp hackers who actively value that grottiness, as a badge of roll-up-your-sleeves seen-it-all mastery to throw at those academic ivory-tower types.) Anything a programmer can do with macros the language designer can do better with the right built-in language feature... once they know what they want to build in. From progress in language design along with relative stagnation in the Lisp world, this isn't as significant as it once was. But if you're trying to design a language to survive decades, I'd look really hard at the Lisp experience.

About Peter Van Roy's original post, it sounds like a good idea to me. I don't think the proposal quite gets at the essence of Lisp macros, though: s-expressions are not reified ASTs. They're more general than that, like XML only simpler. E.g., the state-machine code Patrick Logan posted isn't a Lisp expression, it's more like a data structure fed to a compiler. (You can of course encode anything into an AST -- the point is, we're encoding stuff into s-expressions, and Lisp ASTs are just one sort of stuff.)
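
Concretely, the automaton description above can sit in a program as nothing more than a nested list:

;; Not a Scheme expression, just data describing a machine; a macro (or
;; an ordinary function) can compile or interpret it:
(define machine
  '(automaton see0
     (see0 : (0 -> see1))
     (see1 : (1 -> see0))))

(cadr machine)    ; => see0, the start state
(caddr machine)   ; => (see0 : (0 -> see1)), the first state clause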

Darius Bacon - Re: Macros vs. Higher-order Programming
10/31/2003; 3:11:04 AM (reads: 1183, responses: 7)
Luke, it's extremely flattering to be mentioned next to Peter Norvig. Geez, how am I going to live up to that? Obviously I recommend his book, too.

I agree that define-recurrence is mostly captured by higher-order functions -- I showed that in the original topic.

There was a Microsoft researcher mentioned on LtU probably last year, with an unimplemented language design that happened to include a kind of syntactic abstraction for binding -- anyone remember that? If I remember right it had concurrency like the join calculus and syntax in the style of C, and it was being put together by functional-programming people.

Ehud Lamm - Re: Macros vs. Higher-order Programming
10/31/2003; 4:17:52 AM (reads: 1173, responses: 0)
The functions-on-AST approach is kinda like what Perl 6 is going to have, right?

andrew cooke - Re: Macros vs. Higher-order Programming
10/31/2003; 5:28:39 AM (reads: 1157, responses: 6)
[off topic site issue]

what's happened to the structure on this site? i'm replying to darius's post, which is the one you should see if you click on the "in response to" link above. yet if i click on that link for his post, i don't get to whatever luke said (and what darius was replying to), but to the root post.

is the site broken (links are being lost)? or is this a usability thing - i read each comment in turn, but maybe others read whole threads and post at the end of the thread?

this isn't intended as criticism of darius (i just looked at the structure of the whole thread and it's almost completely flat) or of the way people post (if you want to reply to threads rather than articles that's none of my business). i'm just (1) curious what's happening and (2) wondering if it could (should?) be improved, somehow.

Mark Evans - Re: Macros vs. Higher-order Programming
10/31/2003; 6:32:38 AM (reads: 1136, responses: 0)

Daniel: My wishlist for a macro system is: full BNF-like support...

Of course this next is specific to C++, and has been on LtU before.

Spirit Parser Framework

The Spirit framework enables a target grammar to be written exclusively in C++. Inline EBNF grammar specifications can mix freely with other C++ code and, thanks to the generative power of C++ templates, are immediately executable. In retrospect, conventional compiler-compilers or parser-generators have to perform an additional translation step from the source EBNF code to C or C++ code.

scruzia - Re: (side site issue)
10/31/2003; 10:30:58 AM (reads: 1111, responses: 5)
is the site broken (links are being lost)? or is this a usability thing - i read each comment in turn, but maybe others read whole threads and post at the end of the thread?

I strongly suspect that it's usability/UI/performance issues. Especially with the recent significant increase in traffic, I am guessing most people read a whole thread at a time. I suspect that few readers realize that the goofy little Frontier-head is a link that gets you from the "message9532.html#9619" style URL to the "mode=day" place where you can reply to a particular message, and the site's performance is so poor (an improvement from abysmal, but still...) that even those who figure out that cryptic little UI thing tend not to use it.

Ehud Lamm - Re: (side site issue)
10/31/2003; 1:06:31 PM (reads: 1088, responses: 4)
Right. By "goofy little Frontier-head" I guess you mean the skull you see at the bottom of each message, when viewing a whole thread.

Some people read the discussion in daily mode (I have a LtU button in my IE toolbar). Naturally, their replies are tied to specific messages. Others don't realize there are two views of the discussion group, or simply click the discuss links from the home page, thus reading entire threads at a time. This is especially true with casual readers, and since traffic is up we have more of those than ever before.

I guess I really need to rethink our hosting situation and Manila vs. MT...

I apologize for not doing anything about these issues, but like everyone else I have a day job. I'll try to find the time to solve these things soon. But no promises: I have some work issues right now, and I am not sure how they are going to be resolved.

scruzia - Re: (side site issue)
10/31/2003; 2:07:20 PM (reads: 1081, responses: 0)
Yeah, the cow skull. Perhaps in the meantime, a simple explanation in the FAQ of how to use the LtU UI might help. Or maybe the existence of this splinter thread here will help for a while.

I guess I really need to rethink our hosting situation ...

Not a big deal, IMHO. People learn. It works for now. And by the way thanks (to Ehud and everyone else) for providing such a wealth of interesting, educational, informative, opinionated, and literate conversation.

and Manila vs. MT...

... vs. a semi-moderated NNTP newsgroup. Maybe "alt.lambda-the-ultimate"?

Matt Hellige - Re: (side site issue)
10/31/2003; 2:31:53 PM (reads: 1090, responses: 2)
I guess I really need to rethink our hosting situation and Manila vs. MT...

I apologize for not doing anything about these issues, but like everyone else I have a day job. I'll try to find the time to solve these things soon. But no promises: I have some work issues right now, and I am not sure how they are going to be resolved.

I'm sure I speak for everyone when I say that we really appreciate LtU and all the work you've done and continue to do. We may whine about this or that from time to time, but let's be clear that our Manila and hosting complaints are never directed at you.

Everybody, now: "Thanks, Ehud!"

andrew cooke - Re: (side site issue)
10/31/2003; 6:03:49 PM (reads: 1057, responses: 0)
i don't think anything needs changing (seems faster than it used to be!). if anyone does want to see the post-by-post listing i use this link. and yes, of course, thanks ehud (and good luck with the "issues").

Frank Atanassow - Re: Macros vs. Higher-order Programming
11/1/2003; 6:45:26 AM (reads: 1007, responses: 1)
Vesa: What I said is that the syntax-case macro system of [Mz]Scheme is powerful enough for describing static type systems... What you do get for free, in such an implementation of a typed language, are the evaluation rules and the framework for building the language - you don't need to design a complete optimizing compiler to do it.

I don't see a big difference between your vision of embedding a statically typed language in Scheme via macros, and an external typechecking (or program transformation) tool which you can run at the command-line. I see one advantage to the macro approach: it obviates the need to deal with the computing environment---manipulating source files and so on---but that's pretty minor. It's enough of an advantage that I think having macros is better than not having them, but it's certainly not the second coming.

BTW, if you subscribe to Peter van Roy's notion of expressivity, namely that one language is more expressive than another if it lets you localize global effects, then in fact it is inconsistent to claim that Scheme macros increase expressivity, since Scheme macros are `hygienic' and lexically scoped, that is, they only influence programs locally. Where's the beef?

Incidentally, (the inverse of) this notion of expressivity is called macro-definability in the literature. That is, a feature does not add expressivity if it is macro-definable. :)

I want to make it possible for programmers (software designers) to design and implement type systems in ad hoc fashion.

If you left off the "ad hoc", I would say that that would be interesting. But what you've described so far has nothing to do with type systems per se, only abstract syntax trees.

To make your vision a reality, you first need to ask yourself, "what distinguishes a type system from something which is not a type system, that is, what are the defining characteristics of a type system?" IMO, a type system is a datatype of types, together with an AST datatype indexed by those types according to a typing relation, and a notion of a substitution which preserves typing. Your proposal only addresses the AST part; if it addressed the rest, it would be quite something.

If a language does not provide a powerful enough syntactic extension facility the following will happen: The day the language is fixed, it becomes obsolete. There will be dozens of incompatible language extensions that could have been more or less easily expressed using a syntactic extension facility. The language slowly dies and will eventually be replaced by a language that basically just has a different syntax (different static/syntactic properties).

Don't you think you're overstating your case? C is a popular language which has evolved hardly at all during its lifetime, and I don't think you would regard C macros as a `powerful enough syntactic extension facility'. Java is a widely used language which has no macros at all. Admittedly, Java is `growing', as Steele would say, but doesn't this just mean a language should keep evolving?

Furthermore, since macros can only have local effects, while altering a language can add features which have non-local effects, it seems to me that macros/syntactic extension are/is no substitute for a living language.

Finally, saying that syntactic extension can obviate the problem of "dozens of incompatible language extensions that could have been more or less easily expressed using a syntactic extension facility" is tautological, isn't it? It's like saying a hammer can solve any problem a hammer can solve. What about the language extensions that actually increase expressivity (are not macro-definable)?

Sjoerd Visscher - Re: Macros vs. Higher-order Programming  blueArrow
11/1/2003; 8:51:21 AM (reads: 1006, responses: 0)
So what you actually want is macros that aren't invoked, but that are automatically applied to matching structures. Maybe XSL-like?

Ehud Lamm - Re: (side site issue)  blueArrow
11/1/2003; 9:43:22 AM (reads: 1036, responses: 0)
Thanks. That's always nice to hear...

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/1/2003; 12:07:30 PM (reads: 994, responses: 0)
I don't see a big difference between your vision of embedding a statically typed language in Scheme via macros, and an external typechecking (or program transformation) tool which you can run at the command-line.

Yes, theoretically you can do anything you want with an external program verification/transformation [compiler] tool. That is not what I want, however.

The practical problem is this. Library A uses the external tool TA to generate code. Library B uses the tool TB. I want to use both libraries. Can I use both TA and TB safely together [in a single module]? Which one should I run first? How much work is there to install and configure all the tools for a particular project, etc...

If the syntactic abstraction [I'm not talking about just type systems] facility is provided by the language, and it is designed to allow non-trivial extensions to co-exist, all these practical problems just basically go away.

Another issue is that external tools reinvent 90% of the required machinery: parsing, semantic analysis, etc... It is not very intelligent to keep reinventing the wheel.

if you subscribe to Peter van Roy's notion of expressivity

My short answer: I don't subscribe to such a simplistic notion of expressivity. Locality is not everything. Reduction of duplication of any kind is one very important aspect in practice that is not addressed by a simplistic notion of locality.

If you left off the "ad hoc"

English is not my native language, but as far as I understand the term "ad hoc", it means that something is designed for a particular purpose without specifically addressing more general needs. An "ad hoc design" is not necessarily bad, but it seems that the term "ad hoc" is often, incorrectly, used as a synonym for bad.

One of the reasons I would like to be able to specify type systems in ad hoc fashion is that I believe that it would greatly speed up the development of the field. Look at mathematics, for instance. It has taken hundreds of years for mathematicians to develop many general theories. At first many of these theories (or the roots or early versions of the theories) were much more specific (ad hoc) than they are today. I think that it is more natural and more economical to gradually generalize from specific examples than to shoot directly for the most general. The problem is that a designer cannot see infinitely far into the future and generalize his designs to suit each and every future need.

I certainly do not expect that every program would have its own type system. I would expect that in the future, a language framework comes with a dozen or so type systems that are designed to suit different purposes. I do not believe that it makes sense to create one "perfect" type system with 50 pages worth of rules. I believe that it makes sense to design relatively small but "advanced" type systems that address somewhat more specific needs such as scientific computation.

Don't you think you're overstating your case? C is a popular language which has evolved hardly at all during its lifetime,

Consider the relationship between C and C++. Then consider the relationship between Common Lisp and CLOS. This is not the only example. There are quite a few C-based languages that could have been "just libraries" if C had had a powerful enough syntactic extension facility.

Even tools like lex/flex and yacc/bison fall into this category. In Scheme those tools are libraries. The macro system is used for performing the code generation. [I must confess that I'm not particularly excited about combinator parsing. A good parser generator that uses a BNF-style structure for the grammar seems to yield a much more understandable program in most cases.]

I don't think you would regard C macros as a `powerful enough syntactic extension facility'.

C macros are actually practically Turing-complete, but I agree that they are not powerful enough, because they cannot generally destructure the syntax of C.

Java is a widely used language which has no macros at all.

And look at all the dialects of Java. For the past few months, hardly a week has gone by without some Java dialect other than "standard Java" being discussed at LtU. [Ok. Perhaps not quite every week, but there are probably hundreds of incompatible Java dialects out there.]

Just look at any widely used language that doesn't provide a syntactic abstraction facility. What do you see?

doesn't this just mean a language should keep evolving?

In my opinion, a language should be as invariant as possible and changed only if there is a flaw in the specification. On the other hand, new libraries should be developed as new needs arise. I want to be able to implement real programs today and I want a guarantee that I don't have to rewrite those programs tomorrow.

saying that syntactic extension can obviate the problem of "dozens of incompatible language extensions that could have been more or less easily expressed using a syntactic extension facility" is tautological, isn't it? It's like saying a hammer can solve any problem a hammer can solve. What about the language extensions that actually increase expressivity (are not macro-definable)?

What if a feature is "macro-definable", but the language doesn't provide macros? This is the situation that I'm referring to. You did not discover a flaw in my argument.

Second, I challenge you to specify language features that are not "macro-definable" using syntax-case macros. The difficulty here is that it is possible to do arbitrary compilation from a representation of the abstract syntax of a source language to Scheme. [Actually, I think that in practice there are [useful] things that cannot be easily expressed using something like syntax-case macros - otherwise I would be happy to just use Scheme.]

Frank Atanassow - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 7:49:55 AM (reads: 930, responses: 1)
Sjoerd: So what you actually want is macros that aren't invoked, but that are automatically applied to matching structures. Maybe XSL like?

No, just the opposite. I don't have much interest in expanding the capabilities of macros. What I `want' are semantic features that can accomplish most of what can now only be done by metaprogramming, i.e., external tools and macros.

Vesa: The practical problem is this. Library A uses the external tool TA to generate code. Library B uses the tool TB. I want to use both libraries. Can I use both TA and TB safely together [in a single module]? Which one should I run first? How much work is there to install and configure all the tools for a particular project, etc...

If the syntactic abstraction [I'm not talking about just type systems] facility is provided by the language, and it is designed to allow non-trivial extensions to co-exist, all these practical problems just basically go away.

You are talking about syntax and code generation and external tools, but these things are not intrinsically related to the problem you describe, which is rather summarized by the phrase I emboldened above. I'm not sure what exactly you intend by the word `co-exist', but it sounds like the problem of how to ensure, given two confluent rewriting systems, that their union is confluent. This is a semantic issue, and does not magically get solved by macros.

Another issue is that external tools reinvent 90% of the required machinery: parsing, semantic analysis, etc...

Not if the compiler makes its functionality available as libraries which can be reused to write other program transformation tools.

I don't subscribe to such a simplistic notion of expressivity. Locality is not everything. Reduction of duplication of any kind is one very important aspect in practice that is not addressed by a simplistic notion of locality.

Modularity (= locality) is not everything, I agree, and there are other important notions of expressivity. Nevertheless, modularity is an important measure of expressivity. The reason languages with first-class functions and first-class continuations are easier to program in than those without is because those features increase modularity.

I am surprised, though, that you think `reduction of duplication' is not addressed by modularity; IMO, that is precisely what is being addressed. For example, a language that lets me factor out a bit of code to a reusable procedure allows me to avoid duplication, since it lets me put a procedure call everywhere I would otherwise have had to inline the procedure body.

One of the reasons I would like to be able to specify type systems in ad hoc fashion is that I believe that it would greatly speed up the development of the field.

Er, yeah. It might actually help, before you try making your contribution to the field of type theory, if you ensured that you were actually producing type systems (as I mentioned in my last post). The availability of macros does not facilitate contributions to type theory any more than the availability of arithmetic facilitates contributions to number theory, or the availability of strings facilitates contributions to formal language theory.

The problem is that a designer cannot see infinitely far into the future and generalize his designs to suit each and every future need.

Yes, but as I have pointed out, twice already, macros do not solve this problem; they only simplify some relatively minor interfacing issues which would otherwise arise if program transformation were implemented by an external tool.

Nothing will ever solve this problem in general; at best we can add a few features which parametrize the language to more easily handle a larger range of problems. The language itself is still constrained to be the union of all ways of instantiating those parameters. This is, I think, a fundamental limitation of any formal system.

Consider the relationship between C and C++. Then consider the relationship between Common Lisp and CLOS.

What about them? Are you saying that the fact that Lisp supports macros has anything intrinsically important to do with CLOS? The difference in effort between implementing CLOS in Common Lisp with macros, and implementing a compiler which compiles Common Lisp + CLOS to Common Lisp is almost negligible when compared to the effort that was required to design and implement CLOS with or without macros.

Are you understanding me yet? The bulk of the problem of a language extension is not solved by macros, and the difference that availability of macros makes is small. Macros do not facilitate unforeseen extensions in any significant sense.

Put another way, if the difference in effort between implementing a feature as an external tool versus implementing it via macros is significant compared to the effort that goes into implementing (and designing) the extension itself, then the extension is not likely to be very useful. Put yet another way, any programmer who regards the implementation of an external program transformation tool as a great obstacle is intellectually incapable of implementing a useful language extension.

For the past few months, there has hardly been a week that some Java dialect, other than "standard Java", wouldn't have been discussed at LtU.

This only suggests that either the Java designers did not design their language well enough, or that the Java programmers have not been educated well enough in how to properly use the language.

Just look at any widely used language that doesn't provide a syntactic abstraction facility. What do you see?

I see programmers solving real problems rather than pontificating about how to design a more `natural' syntax or designing half-baked language extensions which neither solve real problems nor facilitate solving them.

What if a feature is "macro-definable", but the language doesn't provide macros?

First, if your language doesn't support higher-order functions, and the feature can be implemented using them, then use a different language. Otherwise, if it is really, honestly speed-critical, or there is some other compelling reason not to use higher-order functions to implement it, then you are welcome to use macros.

Second, I challenge you to specify language features that are not "macro-definable" using syntax-case macros.

Products, sums, recursive datatypes, higher-order functions, call/cc, state, nondeterminism, laziness, strictness, static typing... shall I go on?

Let me anticipate your response. You will say these things can be implemented by surrounding your entire program with a macro. But that is a global transformation. I should have mentioned this before w.r.t. static typing; it's not macro-definable because you cannot define a macro which performs static typechecking locally, since the type of a term depends on the types of free variables in its environment.

The difficulty here is that it is possible to do arbitrary compilation from a representation of the abstract syntax of a source language to Scheme.

That's irrelevant. What's relevant is that you have to transform the entire program, not just the part that uses the feature; that is, the transformation is global, not local---it's not modular.

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 8:46:29 AM (reads: 928, responses: 0)
This only suggests that either the Java designers did not design their language well enough, or that the Java programmers have not been educated well enough in how to properly use the language.

Another possibility (and the most likely to my mind) is that academics interested in PL features are simply using the currently popular language with widely available tools.

To my knowledge, none of the variants has any wide-spread popularity or use in the Java community at large, which suggests that the "failings" of Java addressed by these variants are not that worrisome in current practice and perception.

Isaac Gouy - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 10:01:09 AM (reads: 915, responses: 3)
simply using the currently popular language
That seems reasonable (not that it excludes the other claims).

none of the variants has any wide-spread popularity
Hmmm - once upon a time GJ was one such variant.

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 10:48:29 AM (reads: 921, responses: 0)
This is a semantic issue, and does not magically get solved by macros.

Sorry, I was not clear enough. I don't assume that macros make the compatibility issues completely go away - on the implementation side. What I mean is that the user of libraries that are designed inside a language framework that allows "language extensions" to be implemented in a way that they can easily be made to be compatible wouldn't have to worry about the issues.

Also, I can see that you like to limit the discussion to macros. Please note that I'm not just talking about macros. Don't try to put words into my mouth - it will backfire.

before you try making your contribution to the field of type theory

Sorry, I have not been clear enough. I'm not particularly interested in type theory. I believe I understand the basics of type theory well enough to be able to confidently say, for example, that there will never be a single perfect type system for all uses.

However, I'm interested in metaprogramming. I'm interested in the possibility of using a metaprogramming facility provided by an extendable, but invariant, language framework to create a statically typed "embedded language" that integrates extremely well with the language framework and other similar embedded languages and language extensions.

Second, in case you are interested, the simplistic type checking system (a couple of macros and a simplistic type inference algorithm) I implemented in Scheme does indeed define a type datatype. For reasons of simplicity, types are externally represented using a simple list-based structure. For example, the polymorphic identity function has the type

(-> 0 0)

and the polymorphic map function has the type:

(-> (list 1) (-> 1 0) (list 0))

The (positive integer) numbers represent bound type variables. The symbols -> and list are type constructors.
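
As a rough illustration of how such a representation can be manipulated (a sketch only - the helper below is made up for this post, it is not the actual code of my system), applying a substitution, represented as an alist from type-variable numbers to types, is just a structural walk:

  ;; Apply substitution s (an alist from numbers to types) to type t.
  (define (subst-type t s)
    (cond ((number? t) (cond ((assv t s) => cdr)
                             (else t)))
          ((pair? t) (cons (car t)
                           (map (lambda (u) (subst-type u s)) (cdr t))))
          (else t)))

  ;; Instantiating the map type with 0 := num and 1 := sym:
  ;; (subst-type '(-> (list 1) (-> 1 0) (list 0)) '((0 . num) (1 . sym)))
  ;;   => (-> (list sym) (-> sym num) (list num))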

Not if the compiler makes its functionality available as libraries which can be reused to write other program transformation tools.

If I interpret your suggestion simplistically, as you tend to do with my ideas, then your suggestion, which has been known to me for years, doesn't even come close to solving the compatibility issues between separately developed language extensions. For example, it doesn't allow a language extension to "reason about" a special construct whose semantics are defined by some other separately developed language extension.

In summary, I see no way that external mechanisms would yield the most modular, most elegant results. The extension facility must be an integrated feature of the language framework. Unless you can point to an example that, in your opinion, yields the most modular language extension framework, uses external tools, and has demonstrably been used to define several dozen independently developed language extensions, it will be next to impossible for you to convince me, on this issue, of the validity of your viewpoint.

I am surprised, though, that you think `reduction of duplication' is not addressed by modularity [modularity/higher-order functions]

One of the problems is that when you use higher-order functions, all bindings need to be explicit. So, assuming you define a parsing function that takes a datatype describing the syntax to be parsed, with slots for higher-order functions implementing the semantic actions, each of those semantic actions needs to spell out the same parameter bindings again and again.
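
To illustrate (a sketch only - the rule constructor is a hypothetical parser-library function, not code from any actual system): with higher-order functions every semantic action must repeat its parameter list, while a macro can inject the bindings itself:

  ;; Higher-order style: each action spells out its own bindings.
  (define rules
    (list (rule '(expr PLUS term)  (lambda (e op t) (+ e t)))
          (rule '(expr MINUS term) (lambda (e op t) (- e t)))))

  ;; Macro style: defrule injects $1, $2, ... into the action body,
  ;; so the bindings need not be written out in every rule.
  ;; (Uses R6RS-style syntax-case and SRFI-1's iota.)
  (define-syntax defrule
    (lambda (stx)
      (syntax-case stx (=>)
        ((k (sym ...) => body)
         (with-syntax
             (((arg ...)
               (map (lambda (i)
                      ;; datum->syntax makes $i visible inside body
                      (datum->syntax #'k
                        (string->symbol
                          (string-append "$" (number->string i)))))
                    (iota (length (syntax->datum #'(sym ...))) 1))))
           #'(rule '(sym ...) (lambda (arg ...) body)))))))

  ;; (defrule (expr PLUS term) => (+ $1 $3))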

Are you understanding me yet?

Yes, I think I understand your point. That doesn't mean I agree with you. Imagine, for a moment, that instead of embedding CLOS into Common Lisp using macros, CLOS had been implemented by defining a completely new, incompatible dialect of Common Lisp. If you don't understand what I'm getting at here, then I don't think we need to discuss issues that relate to the topic of "Defining a new language vs Defining a new extension library", because we simply do not agree on the issue. The way I see it, it does not make sense to create a new language [dialect] each time a simple language extension is desired. It is stupid, naive, wasteful reinvention of the wheel.

As an aside, I hope that you understand that I'm actually pretty competent in using higher-order techniques and macros. I know very well when something can easily be implemented using higher-order procedures.

This only suggests that either the Java designers did not design their language well enough, or that the Java programmers have not been educated well enough in how to properly use the language.

I would agree on both counts.

I see programmers solving real problems rather than pontificating about how to design a more `natural' syntax or designing half-baked language extensions which neither solve real problems nor facilitate solving them.

hmm... I'm not sure how to interpret that. Perhaps I should take it as a personal insult, implying that because you don't agree with my idea of an extendable language, I must be a bad designer who complains about his tools because he doesn't know how to use them.

If you are interested in seeing the kind of designs that I produce, then I'd instruct you to take a look at an ongoing school project whose JavaDoc documentation is available here. (I've returned to school to complete my degree and continue as a researcher after 6 years of working in the industry. I have about 13 years of experience in programming (at the time of writing this). Of course, the last ~10 years have been much more intensive than the first couple of years.) The following packages might be particularly interesting to LtU readers:

- Framework for functional objects
- Higher-order procedures in Java
- Template methods for Graph algorithms
Feel free to e-mail critique relating to the software to me privately - or post it publicly if you want. Do note that the choice of Java was an external requirement. I would not have personally chosen Java.

Products, sums, recursive datatypes

Tautological, dear Watson.

Perhaps I don't understand what you are talking about here, but if you are talking about the same things that the above terms describe to me, then I have to disagree. Things such as variants and pattern matching can rather easily be described using syntax-case macros.

As an aside, I have often observed that many "naive, brainwashed static typers" confuse static typing with the ability to design data representations in certain ways. When I first learned OCaml, it initially seemed to me that variants and pattern matching had something to do with static typing. Soon, however, I understood that the two issues are orthogonal.

higher-order functions

I would also add lambda-expressions, or anonymous procedures, to the above.

Now, I think that this can be done using a reasonably modular (macro) transformation. Of course, if the base language doesn't support any way to invoke a function indirectly, the task becomes much more difficult. However, I think such a requirement is not very realistic.

Assume that we are working in Scheme, except that it only supports the definition of top-level functions and also has special syntax for calling functions through "function pointers". We could then implement higher-order procedures, lexical binding and anonymous procedures using a macro system such as the syntax-case macro system.

The basic idea is to either redefine the top-level form for definitions or, if redefinition is not possible, to provide an alternative form for top-level definitions that transforms the definitions in the following ways:

- Transforms the code to perform closure creation. A closure
  holds the arity of the procedure, the pointer to the procedure
  and the parameters to the procedure.
- Replaces calls to [functional] values by code that dispatches
  to a closure.
- Performs lambda-lifting.
Of course, we would then have to use the special macro for top-level definitions to be able to use higher-order procedures and lambda-expressions. Code that would not be written using the special top-level definition macros would have to use special protocols for procedure calls, but it would be technically possible. Code using the new top-level definition forms could also call code that doesn't use the new top-level definition forms.
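
To sketch what the output of such a transformation might look like (illustrative only - the representation and names below are invented for this post, and the example is written as runnable Scheme rather than in the hypothetical first-order core):

  ;; Source, using the special definition form:
  ;;   (define (adder n) (lambda (x) (+ x n)))

  ;; Lifted body of the lambda; env holds the captured n.
  (define (adder-code env x)
    (+ x (vector-ref env 0)))

  ;; The transformed definition builds a closure record:
  ;; arity, code pointer, captured variables.
  (define (adder n)
    (vector 1 adder-code (vector n)))

  ;; Every call through a "function pointer" dispatches here,
  ;; checking the arity and passing the environment explicitly.
  (define (call-closure clo . args)
    (if (= (vector-ref clo 0) (length args))
        (apply (vector-ref clo 1) (vector-ref clo 2) args)
        (error "arity mismatch" clo)))

  ;; (call-closure (adder 3) 4) => 7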

call/cc

This can be done using a reasonably modular transformation similar to the one explained above. The only essential requirement is that the core language must perform tail-call optimization. Otherwise the task becomes much more complicated. [Think of the trampolining transformation.]

state

I'm not sure what you mean by this. Please explain this more carefully. Are you talking about a context in which the core language would be purely functional? If so, then you must be aware of the fact that it is indeed possible to simulate stateful programming in a purely functional language, but it is not particularly efficient. [Think interpretation.]

nondeterminism, laziness, strictness, static typing...

All of these could be done reasonably modularly using techniques similar to the ones I described above.

shall I go on?

Please do. So far you have not provided any examples that I have not already thought about before (except perhaps "state", because I'm not certain what you mean by it), and I'd definitely like to hear about issues that I haven't thought about, because they may turn out to be important.

Please note that I'm not claiming that designing a hopelessly brainfucked, but technically Turing-complete, core language and then attaching a powerful syntactic abstraction system on top of the idiotic core would be a good way to design the kind of language framework I'm thinking about.

I haven't, of course, yet designed the framework, but at the moment it would seem that the minimal core language I would use:

- would offer O(1) time imperative updates,
- would have full (syntactic) tail-call optimization,
- would probably support partial application of procedures,
- would offer both dynamically checked and unchecked primitive
  operators.
Of course, the above list isn't complete or final - don't interpret it as such.

Let me anticipate your response. You will say these things can be implemented by surrounding your entire program with a macro.

Good. I can see that we agree here. This is the exact reason why I think that a simple macro system may not be good enough. On the other hand, I can already see reasonably modular techniques for implementing these things using a macro system similar to the syntax-case macro system of Scheme. However, I can also imagine the benefits of having a fundamentally syntactic abstraction mechanism, or a compile-time metaprogramming facility, or an integrated program transformation system, that would indeed allow such things to be implemented highly modularly.

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 1:56:26 PM (reads: 892, responses: 2)
That seems reasonable (not that it excludes the other claims).

It doesn't exclude them, but I don't think it provides evidence for them either.

once upon a time GJ was one such variant

Maybe I don't get out much, but even though (as a fan of Phil Wadler and of generics) I've been aware of GJ for a long time, I haven't known anybody who has used it in a production environment.

In fact, most hard-core Javatistas I know have never HEARD of GJ (or any other variants).

Pretty much a PLT geek phenomenon as far as I can tell.

andrew cooke - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 1:57:52 PM (reads: 898, responses: 1)
i think the point was that GJ is now java 1.5 (in broad terms).

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 2:11:17 PM (reads: 895, responses: 0)
i think the point was that GJ is now java 1.5 (in broad terms).

If a variant introduces a new feature that gets picked up by the standard, I don't know that this validates the utility of all variants.

Personally, I think the lack of generics WAS a design flaw in Java, because it required users to violate the type system for common, necessary constructs. (e.g. containers)

Sadly, many people now think passing around instances of type Object and then typecasting is a virtue of Java instead of a fault.

So even in this case, in spite of the fact that I happen to favour this new feature, I don't think that it is actually "in demand" for most of the Java community.

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 2:21:16 PM (reads: 880, responses: 0)
Personally, I think the lack of generics WAS a design flaw in Java, because it required users to violate the type system for common, necessary constructs. (e.g. containers)

I totally agree with this. See Type safe maps for convenience. It is really boring to continuously perform casting in Java. [I should probably write a small script to generate type safe containers.]

Of course, this isn't the only design flaw of Java. Another flaw, quite recently discussed at LtU, is the lack of complete lexical binding. [The simplistic implementation, which simply copies the references and requires captured reference and primitive value bindings to be final, is not good enough in my book.]

Yet another flaw was that the language did not support first-class procedures. I just don't get how someone familiar with functional programming (AFAIK) could go and design a language that doesn't support first-class procedures.

All of these would be non-issues to me if the language had provided a [powerful enough] syntactic extension facility, as I could easily have provided each of these as simple syntactic extensions.

Isaac Gouy - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 6:21:23 PM (reads: 853, responses: 3)
"in demand" for most of the Java community
Heaven knows what's in-demand for most... maybe a look through the submissions to the JCP would give an indication.

We might guess that most are busy trying to get their work done without worrying about what may appear in Java X years from now.

none of the variants has any wide-spread popularity or use in the Java community at large
Isn't commoditization of programmers the virtue of Java? What would be the point of using a Java variant unblessed by Sun?

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 6:56:05 PM (reads: 855, responses: 2)
maybe a look through the submissions to the JCP would give an indication.

More likely that would tell us what the tool building companies want...

Isn't commoditization of programmers the virtue of Java? What would be the point of using a Java variant unblessed by Sun?

So Isaac, I've lost track. Are you agreeing with me that the plenitude of Java variants doesn't reflect that much on the practical merits of the language, or not? ;-)

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/2/2003; 7:40:25 PM (reads: 848, responses: 0)
Put yet another way, any programmer who regards the implementation of an external program transformation tool as a great obstacle is intellectually incapable of implementing a useful language extension.

I guess I missed this funny remark on the first reading. I must say that any programmer that

- doesn't see the duplication of effort that goes into producing simple
  language extensions as external program transformation tools, or
- doesn't see how the use of external transformation tools merely
  transfers the integration problems to the [language and tool] users
should buy thicker glasses and look around. Perhaps he should even stop to think for a second or two.

Really, I don't think that it is difficult to make a simple language extension as an "X-front". I'm pretty sure anyone with half a brain could do it, because I've seen plenty of half-brained tools like that. The problems start when you need to integrate with other similarly developed language extensions or when you need to distribute a library whose use relies on such a language extension.

If I were developing something for my own use, and for some reason had to use some really badly designed language X, then I might just implement an X-front. On the other hand, if I were developing a library in language X that was supposed to be widely usable, relying on some external code transformation tool would severely limit the [re]usability of the library. A "solution" that uses external tools to enhance a language simply doesn't scale.

Frank Atanassow - Re: Macros vs. Higher-order Programming  blueArrow
11/3/2003; 7:03:27 AM (reads: 823, responses: 1)
First, Vesa, excuse me for getting so irate.

I don't assume that macros make the compatibility issues completely go away - on the implementation side. What I mean is that the user of libraries that are designed inside a language framework that allows "language extensions" to be implemented in a way that they can easily be made to be compatible wouldn't have to worry about the issues.

This is again tautological, isn't it? A language framework that allows language extensions to be compatible with each other would allow them to be compatible with each other.

OK, I'm happy that you acknowledge that macros do not address modularity problems with language extension. But, what I have been trying to extract from you, and you have not been providing, is a concrete description of a mechanism which would remedy those problems.

Also, I can see that you like to limit the discussion to macros. Please note that I'm not just talking about macros. Don't try to put words into my mouth - it will backfire.

Sorry, I was under the impression that we were discussing the usefulness of macros for metaprogramming. What were you discussing?

I'm interested in the possibility of using a metaprogramming facility provided by an extendable, but invariant, language framework to create a statically typed "embedded language" that integrates extremely well with the language framework and other similar embedded languages and language extensions.

OK, describe your metaprogramming facility and how it integrates extremely well.

Second, in case you are interested, the simplistic type checking system (a couple of macros and a simplistic type inference algorithm) I implemented in Scheme does indeed define a type datatype.

I am not interested in a particular type datatype, or indeed a particular type system. What I am interested in, and what I feel is the crux of the problem is: given an arbitrary (i.e., user-defined) datatype of terms, an arbitrary datatype of types, how does the user specify the typing relation and notion of substitution on terms, and, given those four things, how does your metaprogramming facility guarantee that substitution on terms preserves typing?

[exporting compiler functionality in libraries] doesn't even come close to solving the compatibility issues between separately developed language extensions.

Indeed, it doesn't, and I did not claim it does. I was only refuting your claim that a language extension implemented as an external tool necessarily duplicates code used in the compiler.

For example, it doesn't allow a language extension to "reason about" a special construct whose semantics are defined by some other separately developed language extension.

Right, and neither do macros. How does your metaprogramming facility handle this problem?

In summary, I see no way that external mechanisms would yield the most modular, most elegant results.

I agree and I never claimed otherwise. I guess you felt I was advocating implementing language extensions as external tools, but that isn't the case.

The only reason I mention them is to illustrate that macros are not very useful, because the difference between implementing an extension as a macro and as an external tool is small. Small, but not zero; and, though I would not dismiss a language because it does not include macros, I have even said that I think having macros is probably better than not.

One of the problems is that when you use higher-order functions, all bindings need to be explicit. So, assuming you define a parsing function that takes a datatype describing the syntax to be parsed, with slots for higher-order functions implementing the semantic actions, each of those semantic actions needs to spell out the same parameter bindings again and again.

I don't understand this. By `binding', do you mean argument? And if so, why is this a problem?

The way I see it, it does not make sense to create a new language [dialect] each time a simple language extension is desired.

I think that by `language [dialect]', you mean `language [dialect] implementation' since, by definition, a language extension defines a new dialect. If so, I agree. And I've argued that macros do not provide any advantage over external tools in this respect. So what I would like to know, again, is how you propose to address this issue.

Perhaps I should take it as a personal insult, implying that because you don't agree with my idea of an extendable language, I must be a bad designer who complains about his tools because he doesn't know how to use them.

No, sorry, it was not meant as an insult. It's not that I disagree with the idea of an extendable language, per se. Rather I disagree with the claim that macros provide any significant facilities for language extension. I feel that few people perceive this fact; when they see an extension implemented as a macro, they ascribe too much value to the macro mechanism which, in fact, plays hardly any significant role.

And, yes, in addition I feel that there are lots of frivolous uses of macros which could be better addressed with semantic mechanisms.

I wrote: Products, sums, recursive datatypes [are not macro-definable]

Vesa wrote: [in reference to `datatypes'] Tautological, dear Watson.

How? I am not assuming static typing; if we're talking about an untyped language, then by `products' I mean tupling and projection, by `sums' I mean injection and case, and by `recursive datatypes' I mean construction and fold.

But, for a different reason, you are right that what I wrote is tautological in a sense because I wasn't clear about `macro-definable'.

Macro-definability is defined w.r.t. a language. So, for example, call/cc is not macro-definable in (Scheme minus call/cc): first-class continuations have to be added as a primitive. So when I say, for example, that higher-order functions are not macro-definable, I'm implicitly referring to a first-order language: you need HOFs to get HOFs. In contrast, let-expressions are macro-definable in a language with HOFs, but not in a first-order language.

Your `challenge', you will recall, was:

I challenge you to specify language features that are not "macro-definable" using syntax-case macros.

I should have written:

If you mean macro-definable w.r.t. Scheme, then statically type-inferred terms and nondeterminism (say, amb) are examples. If you mean macro-definable w.r.t., say, Pascal extended with syntax-case-style macros, then HOFs. Products, sums, recursive datatypes, higher-order functions, call/cc, state, nondeterminism, laziness and strictness are all examples of features which cannot be recovered via macros if a language does not already support them in a semantic sense. So they form a way of classifying programming languages.

Things such as variants and pattern matching can rather easily be described using syntax-case macros.

Variants (sums) are macro-definable in Scheme because the language already effectively supports sums because it has an if-then-else construct. Pattern-matching (products and sums) is macro-definable in Scheme because it already effectively supports products and sums because it has cons and nil and if-then-else (and recursion).
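
For instance, a toy version (a sketch only, not taken from any particular library; the names define-variant and match-variant are made up) needs nothing beyond syntax-rules, tagged lists and case:

  ;; Constructors build tagged lists.
  (define-syntax define-variant
    (syntax-rules ()
      ((_ (ctor field ...) ...)
       (begin
         (define (ctor field ...) (list 'ctor field ...))
         ...))))

  ;; Case analysis dispatches on the tag and binds the fields.
  (define-syntax match-variant
    (syntax-rules ()
      ((_ e ((ctor field ...) body) ...)
       (let ((v e))
         (case (car v)
           ((ctor) (apply (lambda (field ...) body) (cdr v)))
           ...)))))

  (define-variant (leaf value) (node left right))

  (define (tree-sum t)
    (match-variant t
      ((leaf value) value)
      ((node left right) (+ (tree-sum left) (tree-sum right)))))

  ;; (tree-sum (node (leaf 1) (node (leaf 2) (leaf 3)))) => 6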

As an aside, I have often observed that many "naive, brainwashed static typers" confuse static typing with the ability to design data representations in certain ways. When I first learned OCaml, it initially seemed to me that variants and pattern matching had something to do with static typing. Soon, however, I understood that the two issues are orthogonal.

I have often observed that many ignorant people believe everyone to be as ignorant as themselves...

I would also add lambda-expressions, or anonymous procedures, to the above [higher-order functions]

You're joking, right? I have the most incredible sinking feeling...

OK, I'm sorry, but I'm going to wave the white flag now. It was wrong of me to get into this conversation; I should have kept my mouth shut.

You're right. I was wrong. Your vision of metaprogramming via macros is the best thing since sliced bread and you have conclusively demonstrated its value.

Ehud Lamm - Re: Macros vs. Higher-order Programming  blueArrow
11/3/2003; 7:37:59 AM (reads: 816, responses: 0)
Please, people, no insults...

Isaac Gouy - Re: Macros vs. Higher-order Programming  blueArrow
11/3/2003; 1:34:37 PM (reads: 798, responses: 1)
Marc: Are you agreeing with me...
No way! ;-)

It seems that we can imagine several reasons for developing Java variants. In at least one case (GJ) we can see that the reason was to "fix" a problem with the language. Blessing that "fix" has involved Sun in a good deal of rework. We could interpret that as a reflection on the practical merits of Java before 1.5.

My mild objection was to the assertion that the lack of wide-spread usage of Java variants suggested a lack of concern with the "failings" that the variants fixed. We know the perceived value of Java is in standardization. That the community-at-large tolerates current "failings" says little about how worrisome they find the "failings" but a good deal about how much they value standardization.

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/3/2003; 4:20:24 PM (reads: 800, responses: 0)
My mild objection was to the assertion that the lack of wide-spread usage of Java variants suggested a lack of concern with the "failings" that the variants fixed.

I think we actually do agree. ;-)

My position was that the abundance of Java variants tells us nothing, one way or the other, about the perceptions of failings of Java. (I may have offered the counter-argument to show that it could just as easily be made as the original poster's contention, but it wasn't my actual position.)

Darius Bacon - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 2:20:07 AM (reads: 694, responses: 0)
Put another way, if the difference in effort between implementing a feature as an external tool versus implementing it via macros is significant compared to the effort that goes into implementing (and designing) the extension itself, then the extension is not likely to be very useful. Put yet another way, any programmer who regards the implementation of an external program transformation tool as a great obstacle is intellectually incapable of implementing a useful language extension.

Peter Deutsch's Ghostscript interpreter in C was hand-written in trampolined style in order to allow C and Ghostscript code to call each other freely. The resulting low-level style of code is painful to follow. Luke Gorrie's distel package for Emacs Lisp also uses trampolined style, for calls between elisp and Erlang, but it wraps it up in macros and is much pleasanter reading. While there are more reasons for this than the macros -- Lisp is just a higher-level language -- they do make a practical difference.

Is Peter Deutsch, one of the original Lisp and Smalltalk hackers, intellectually incapable of doing better? It's a lot easier for me to believe in some combination of the external tools being too much work when he just wanted to make a product, and of not wanting to fight the C culture, where language extensions aren't the norm.

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 4:27:16 AM (reads: 685, responses: 2)
Vesa: "I don't assume that macros make the compatibility issues to completely go away - on the implementation side. What I mean is that the user of libraries that are designed inside a language framework that allows "language extensions" to be implemented in a way that they can easily be made to be compatible would't have to worry about the issues."

Frank: "This is again tautological, isn't it? A language framework that allows language extension to be compatible with each other would allow them to be compatible with each other."

If you are referring to the bit "a language framework that allows "language extensions" to be implemented in a way that they can easily be made to be compatible", then I agree that it is badly phrased. However, it doesn't "twist" the overall message of the paragraph.

I think that you should seek to understand what others are trying to say rather than seek to find syntactic errors in their writings in order to boost your ego. Try it for some time - you might learn that you enjoy it.

What were you discussing?

The idea and consequences of using a computationally complete, compile-time metaprogramming system, or a syntactic abstraction facility such as a macro system (but not limited to macro systems), provided by a language, for extending the language. Considering the original post (by Peter Van Roy), I thought I had been clear enough about this, but I guess I wasn't. I guess the main reason you have to ask such a question is that you are not here to understand what I'm saying, but instead are trying to find syntactic flaws in what I say.

OK, describe your metaprogramming facility and how it integrates extremely well.

I've never claimed I would have designed such a facility already.

What I am interested in, and what I feel is the crux of the problem is: given an arbitrary (i.e., user-defined) datatype of terms, an arbitrary datatype of types, how does the user specify the typing relation and notion of substitution on terms, and, given those four things, how does your metaprogramming facility guarantee that substitution on terms preserves typing?

One idea would be to give the typing relation in an executable logic language.

I was only refuting your claim that a language extension implemented as an external tool necessarily duplicates code used in the compiler.

I don't recall saying that the duplication would be limited to code in the compiler.

[...] And I've argued that macros do not provide any advantage over external tools in this respect. [...]

[...] I feel that few people perceive this fact; when they see an extension implemented as a macro, they ascribe too much value to the macro mechanism which, in fact, plays hardly any significant role. [...]

I disagree. Macros make it much more convenient to express many kinds of language extensions. We can measure convenience in the number of tokens required to implement the extension and in any desired measure of the effort it takes some third party to [re]use several such extensions.

by `products' I mean tupling and projection, by `sums' I mean injection and case, and by `recursive datatypes' I mean construction and fold.

All of the above are definable even in the pure lambda calculus. I think that a problem here is that we haven't agreed on the core language in any way. An essential problem is also that we haven't agreed on what constitutes a "language extension". The problem is that everything is possible in a Turing-complete language, because, in the end, you can just implement an interpreter for the desired source language.
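
For instance, here are the standard Church encodings, written in Scheme syntax for readability (just a sketch of the well-known construction, not code from any system under discussion):

  ;; Products: a pair is a function awaiting a selector.
  (define (pair a b) (lambda (sel) (sel a b)))
  (define (fst p) (p (lambda (a b) a)))
  (define (snd p) (p (lambda (a b) b)))

  ;; Sums: injections remember which case they are.
  (define (inl a) (lambda (l r) (l a)))
  (define (inr b) (lambda (l r) (r b)))
  (define (sum-case s l r) (s l r))

  ;; (fst (pair 1 2))                                        => 1
  ;; (sum-case (inl 5) (lambda (a) (* a 2)) (lambda (b) b))  => 10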

So, for example, call/cc is not macro-definable in (Scheme minus call/cc): first-class continuations have to be added as a primitive.

If tail-call optimization is provided by the core language, then by transforming the program into CPS it is quite easy to do the above. I should also point out that it isn't always necessary to transform the entire program: if only part of a program relies on call/cc, the CPS transformation can be confined to that part.
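
A tiny sketch of why this works (mine, not taken from any particular implementation): once procedures take their continuation explicitly, call/cc is an ordinary definition rather than a primitive:

  ;; In CPS every procedure takes its continuation k as a last argument.
  (define (add/k x y k) (k (+ x y)))

  ;; call/cc needs no primitive support: reify k as a procedure that,
  ;; when invoked, discards the continuation of its call site.
  (define (call/cc-cps f k)
    (f (lambda (v k-ignored) (k v)) k))

  ;; (call/cc (lambda (return) (+ 1 (return 41)))) in CPS form:
  (define (example k)
    (call/cc-cps
     (lambda (return k1)
       (return 41 (lambda (v) (add/k 1 v k1))))
     k))

  ;; (example (lambda (v) v)) => 41, escaping past the (+ 1 _)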

Variants (sums) are macro-definable in Scheme because the language already effectively supports sums because it has an if-then-else construct.

I think that you are being somewhat simplistic here. If a language has if-then-else and while, then the language is Turing-complete and, at worst, anything can be expressed by writing an interpreter. Again, I think that a major problem here is that we don't have a shared definition of expressibility or definability. Certainly, as you should be able to understand, I have never claimed that everything can be expressed modularly or locally using macros.

I have often observed that many ignorant people believe everyone to be as ignorant as themselves...

Yes, I have met many such people.

Vesa: "I would also add lambda-expressions, or anonymous procedures, to the above [higher-order functions]"

Frank: "You're joking, right? I have the most incredible sinking feeling..."

hmm... I'm not sure what you intend to imply here (aside from the feeling of frustration). Of course, the above was meant mostly as an example of a "non-trivial" language feature implemented as a syntactic transformer. In some cases it could make sense to provide lambda-expressions outside of the core language, simply because they can be provided outside of the core language. It should be clear that certain language features can be expressed in terms of each other. The choice of core language features should seek to minimize the combined effort of implementing the core language and implementing the desired source language on top of it.

Your vision of metaprogramming via macros is the best thing since sliced bread and you have conclusively demonstrated its value.

You still haven't learned that I never claimed that macros are the way to do metaprogramming. Let me repeat myself, I think that you should seek to understand what others are trying to say rather than seek to find syntactic errors in their writings in order to boost your ego.

...

you cannot define a macro which performs static typechecking locally, since the type of a term depends on the types of free variables in its environment

But you can define a macro that type checks code locally with respect to a given environment that maps free variables to types.
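
A minimal sketch of the idea (a toy checker written for this post, not my actual macros) for a tiny expression language, using the list-based type representation from earlier:

  ;; Forms: number | symbol | (lam x t body) | (app f a)
  ;; env is an alist mapping free variables to types.
  (define (type-of expr env)
    (cond
      ((number? expr) 'num)
      ((symbol? expr)
       (cond ((assq expr env) => cdr)
             (else (error "unbound variable" expr))))
      ((eq? (car expr) 'lam)
       (let ((x (cadr expr)) (t (caddr expr)) (body (cadddr expr)))
         (list '-> t (type-of body (cons (cons x t) env)))))
      ((eq? (car expr) 'app)
       (let ((ft (type-of (cadr expr) env))
             (at (type-of (caddr expr) env)))
         (if (and (pair? ft) (eq? (car ft) '->) (equal? (cadr ft) at))
             (caddr ft)
             (error "cannot apply" ft at))))
      (else (error "bad expression" expr))))

  ;; (type-of '(app (lam x num x) y) '((y . num))) => num

A syntax-case macro can invoke such a checker at expansion time, with the environment given as literal syntax, and either expand to the checked code or signal a compile-time error.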

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 6:05:24 AM (reads: 671, responses: 1)
I think a few things need to be said.

I think that you should seek to understand what others are trying to say rather than seek to find syntactic errors in their writings in order to boost your ego.

I find it hard to imagine that anyone here posts to boost their ego. Posting in a public forum is almost guaranteed to have the opposite effect, so it seems there would be easier venues elsewhere.

As for trying to understand others, given the limitations of this medium, the only way to do so is to scrutinize and question what they say. It is very difficult to assess the tone, intent and logical content of many posts otherwise.

ignorant

I find it hard to imagine a context in which this word in its original meaning (lacking knowledge) had any application to describe someone who voluntarily spends time posting on LtU. Let's face it, PLT is considered geeky among the geeks, and you pretty much have to know quite a lot to even WANT to be here.

I may disagree with the views of others here, but I'm pretty darn sure that they hold their opinions through knowledge and experience, even though no one has a monopoly on ALL knowledge and experience.

Frank is a hard arguer (I've taken my beatings too ;-)), but I have no doubt that his opinions and arguments are based on a thorough background knowledge and a passionate view of what good PLT looks like.

Ehud Lamm - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 6:20:01 AM (reads: 679, responses: 0)
Posting in a public forum is almost guaranteed to have the opposite effect

Right. That's why I don't post more often...

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 8:09:26 AM (reads: 658, responses: 1)
One of the main problems with many people in public forums like this is that they are quick to make assumptions but slow to check the validity of those assumptions. As I have argued before, that is pretty much the opposite of an analytical, scientific mind.

Marc Hamann - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 11:20:19 AM (reads: 647, responses: 0)
One of the main problems with many people in public forums like this is that they are quick to make assumptions but slow to check the validity of those assumptions. As I have argued before, that is pretty much the opposite of an analytical, scientific mind.

How can one check the validity of one's assumptions in a forum such as this?

The only way I know of is to employ a sharp line of questioning to determine what the other person really means. Hopefully, this will stimulate the other person to clarify their position.

I should also point out that making assumptions about the intent, knowledge and assumptions of the interlocutor is also making untested assumptions.

In fact, divining the internal states of people through external data is pretty much untestable in principle using the accepted scientific method... ;-) (This is the philosophical nub of the Turing Test, for example.)

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 11:44:45 AM (reads: 634, responses: 1)
How can one check the validity of one's assumptions in a forum such as this?

Simple. Ask first. Read the answer carefully. Ask again if necessary. Formulate your critical response only after you know that your assumptions are correct.

You should also spell out your assumptions explicitly. Don't just say: "What you are saying doesn't make sense." You should instead say it like this: "If you are saying that [your assumptions in your own words], then what you are saying doesn't make sense, because [your explanation of why it doesn't make sense]."

What seems to happen is this. Person A writes something. Person B quickly reads A's post. Person B makes certain assumptions and synthesizes an extremely aggressive critical response, that doesn't state the assumptions. Person A argues that some of the assumptions that B seems to have made are false. Person B synthesizes an even more aggressive critical response, that actually doesn't even recognize the false assumptions pointed out by A.

Ehud Lamm - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 12:20:18 PM (reads: 632, responses: 0)
after you know that your assumptions are correct.

I, for example, was under the illusion that LtU is supposed to be about programming languages, not discussion group ethics...

Rici Lake - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 12:43:34 PM (reads: 630, responses: 0)
I, for example, was under the illusion that LtU is supposed to be about programming languages, not discussion group ethics...

Programming languages rather than programmers' language, you mean?

But, for future reference:

You should also spell out your assumptions explicitly. Don't just say: "What you are saying doesn't make sense." You should instead say it like this: "If you are saying that [your assumptions in your own words], then what you are saying doesn't make sense, because [your explanation of why it doesn't make sense]."

According to Anatol Rapoport, a good style of argument includes statements like this:

"If when you say [term] you mean [alternative definition of term], then I agree with you."

After which, you can reduce the argument to a definitional one, if necessary. In algorithmic terms, I think it is "divide-and-conquer": rather than throwing out the entire partial solution, you divide it into a successful part and a part that requires more investigation.

Of course, both of these approaches are forms of macro-substitution.

Vesa Karvonen - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 1:24:36 PM (reads: 638, responses: 1)
I, for example, was under the illusion that LtU is supposed to be about programming languages, not discussion group ethics...

Ok. I get the message. I won't waste more time posting here.

Ehud Lamm - Re: Macros vs. Higher-order Programming  blueArrow
11/5/2003; 2:33:22 PM (reads: 614, responses: 0)
I was merely trying to steer the discussion onto a more fruitful track, for example back to the concrete examples you wrote about earlier, which I found quite interesting.

But, of course, no one is forced to post here.