Joe Marshall: Usual Integrations, Function Calls and Primitive Calls

· 17 hours ago

Near the beginning of most MIT-Scheme files you'll see (declare (usual-integrations)). This instructs the SF program, which translates Scheme to S-code, to replace the “usual” global variables with their (compile time) values. You lose the ability to change the values of these variables, but who redefines CAR or CDR? (And if you do, just remove the declare form or add CAR and CDR to the exception list.)

Now forms such as (null? x), which used to be syntaxed into this S-code:

[funcall
   [global 'null?]
   [argument 0]]

are now syntaxed into this S-code:

[funcall
   [quote [primitive null?]]
   [argument 0]]

Recall that S-code is a tree representation of Lisp code that is walked by the interpreter. The leaves of the tree are all either quote, global, argument, or lexical, and each of these takes only one or two machine instructions to eval. The “overhead” of interpretation involves getting to the leaves.

The internal nodes of an S-code tree are the compound forms: function calls and special forms. IF and PROGN (in Scheme BEGIN) are the two most frequently evaluated special forms. When we construct the S-code for a compound form, we pattern match against the subforms and specialize the S-code. Specialized S-code for a compound form is called a “superoperator”. Superoperators eliminate the eval dispatch for the subforms. The machine code for interpreting a superoperator is close to the equivalent compiled code. Let us examine some of the most important superoperators.

Funcall Argument List Elimination

In general, a function call accumulates a list of arguments and jumps to APPLY to dispatch the function. We can eliminate the overhead of doing this. First, we can eliminate the call to APPLY by having applicable objects implement the applicable interface. We use virtual function dispatch to invoke the code that implements the applicable object. Applicable objects can be applied to any number of arguments passed as a list. We can eliminate the overhead of manipulating short argument lists by spreading the arguments and adding call0, call1, call2, and call3 methods to the applicable interface. The number of arguments at any particular call site is fixed, and it is usually a small number. We eliminate the overhead of accumulating the argument list for a function call by specializing the funcall S-code object on the number of arguments.

An S-code funcall is replaced by a funcall0, funcall1, funcall2, funcall3, or funcalln depending on the number of arguments. The argument accumulation loop is unrolled in the numbered cases, placing the arguments in local C# variables. The funcallx superoperators directly call the appropriate calln method in the function’s applicable interface.
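
As a rough illustration, here is a Common Lisp sketch of that syntax-time choice. The constructor names are made up for the example; the real interpreter defines C# classes for these node types.

(defstruct funcall0 operator)
(defstruct funcall1 operator arg0)
(defstruct funcall2 operator arg0 arg1)
(defstruct funcall3 operator arg0 arg1 arg2)
(defstruct funcalln operator operands)

(defun specialize-funcall (operator operands)
  "Choose a funcall superoperator from the fixed number of subforms."
  (case (length operands)
    (0 (make-funcall0 :operator operator))
    (1 (make-funcall1 :operator operator :arg0 (first operands)))
    (2 (make-funcall2 :operator operator
                      :arg0 (first operands) :arg1 (second operands)))
    (3 (make-funcall3 :operator operator
                      :arg0 (first operands) :arg1 (second operands)
                      :arg2 (third operands)))
    (otherwise (make-funcalln :operator operator :operands operands))))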

Primitive Function Call

Function calls to primitive procedures are the foundation of Lisp and Scheme programs. We can look at a Lisp program as if it were a “black box” that makes a series of primitive function calls. If unoptimized, a Lisp program ought to make the same series of primitive function calls with the same values whether it is interpreted or compiled. The primitive function will run at the same speed no matter how it is called, so if compiled code is faster than interpreted, it is issuing the primitive function calls faster, taking less time between them. The time between issuing primitive function calls is the interpretation overhead, and we want to reduce that.

When we construct an S-code funcall (and funcall0, funcall1, etc.), we check if the thing being called is a literal quoted primitive function, and if it is, we replace the S-code with a primitive-funcall (or primitive-funcall0, primitive-funcall1, etc.). A primitive-funcall evaluates its arguments, but it can skip evaluating the function. It can skip the apply dispatch, too, and just jump right to the primitive entry point.
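
A hedged sketch of that check, again in Common Lisp with illustrative structure names (the generic one-argument call node is named generic-funcall1 here so the sketch stands alone):

(defstruct scode-quote value)              ; a quoted constant in S-code
(defstruct primitive name)                 ; a primitive procedure object
(defstruct generic-funcall1 operator arg0)
(defstruct primitive-funcall1 primitive arg0)

(defun specialize-funcall1 (operator arg0)
  "If OPERATOR is a quoted primitive, emit a primitive-funcall1 node that
skips both the operator evaluation and the apply dispatch."
  (if (and (scode-quote-p operator)
           (primitive-p (scode-quote-value operator)))
      (make-primitive-funcall1 :primitive (scode-quote-value operator)
                               :arg0 arg0)
      (make-generic-funcall1 :operator operator :arg0 arg0)))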

We construct (null? arg) as

[primitive-funcall1
  [quote [primitive null?]]
  [argument 0]]

The MIT-Scheme interpreter and the MIT-Scheme hardware both special-case function calls with small numbers of arguments and those that call primitive procedures.

In the next post, we develop this idea beyond what MIT-Scheme already does.

Joe Marshall: Eliminating Deep Search

· 20 hours ago

In general, when you look up a lexical variable, you have to do a deep search. You start with the innermost scope and work your way out until you find the variable you are looking for. This takes an absurd amount of time for each variable lookup, so we want to avoid actually doing it.

As mentioned in the previous post, we special case the two most common cases: when the variable is in the innermost scope and when the variable is in the global scope. In the other cases, when the variable is in an intermediate scope, we can flatten the scopes by copying the variable values from inner scopes to outer scopes whenever the variable is closed over. This only works because we have put the mutable variables in cells and we copy pointers to the cells.

When an S-code lambda object is created, we walk the code and find the variables that are closed over. We create a closure with a copy of the variable’s value (or a copy of its cell). In essence, we replace:

(lambda (x) (+ x y))

(assuming y is a lexical variable) with

(%make-closure '(lambda (x) (+ x y)) y)

Once we have done this, we no longer need to deep search for lexical variables. There is only one lexical frame, it contains a copy of the variable, and we know the offset of the variable in the frame. Lexical variable lookup is now just a vector-ref.

The drawback is that we have cell-converted the mutable variables, so calling a function that contains mutable variables will allocate memory. Use of SET! makes your code expensive. Furthermore, %make-closure is linear in the number of things being closed over. But typically, closures only close over one or two items.
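
To make the cell conversion concrete, here is a hedged Common Lisp sketch. MIT-Scheme has its own cell primitives; the names below are illustrative.

;; A cell is just a one-element box.
(defun make-cell (value) (vector value))
(defun cell-ref (cell) (svref cell 0))
(defun cell-set! (cell value) (setf (svref cell 0) value))

;; Conceptually, a closure over a mutated variable such as
;;   (lambda (x) (lambda () (set! x (+ x 1)) x))
;; is rewritten (roughly) so that X is boxed once on entry and the inner
;; closure copies the pointer to the cell rather than the value:
;;   (lambda (x)
;;     (let ((x (make-cell x)))
;;       (%make-closure '(lambda () (cell-set! x (+ (cell-ref x) 1))
;;                                  (cell-ref x))
;;                      x)))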

So between these specializations (this post and the two previous ones), we have removed the overhead of variable lookup. An expression like (lambda (x) (+ y x)) (where y is a lexical variable) is converted into S-code that looks like:

[%make-closure
  [quote (lambda '(x)
            [funcall
              [global '+]
              [lexical 0]
              [argument 0]])]
  [argument n]  ;; wherever 'y is in the outer scope
  ]

Evaluating variables is the bulk of the work in a Lisp program. Now both the interpreter and compiler can evaluate a variable with one or two machine instructions. Interpreted code is still slower than compiled code, but this isn’t a bottleneck anymore.

Now we look for other bottlenecks. Watch this space.

Joe Marshall: Specialized Variables

· 42 hours ago

In general, when you look up a variable, you have to do a deep search through the lexical contours. This is because someone could have captured a first-class environment and evaled a define in it that shadows a variable.

However, this is a very rare case. In the vast majority of cases, the variable is either a global variable or lexical variable bound in the immediately enclosing environment. We can optimize for these cases.

Symbols refer to three different kinds of variable. If the variable is free, it refers to a global or “top-level” variable. If it is bound, it may be bound in the immediately enclosing environment (i.e., it is an argument), or it may be bound in an environment further up the lexical chain. We can determine which of these cases applies statically when we construct the S-code and select the appropriate subtype of S-code variable.

A global or top-level variable can contain a pointer to the value cell of the variable in the global environment, so variable lookup simply involves a dereference. If the variable is lexically bound, and the environment is a StaticEnvironment (or subtype), then the lexical address of the variable is known statically and cannot change (the variable cannot be shadowed), so we can store the lexical address in the S-code variable. Lookup is faster because it involves constant offsets.

The variable corresponding to argument 0 of the current lambda is by far the most commonly accessed variable, followed by argument 1. We can create special S-code types for each of these two variables in addition to the three types for global, lexical, and the other arguments. By classifying the variables into one of these five types at syntax time, we can greatly reduce the amount of work needed at runtime to access the variable.
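
A hedged sketch of that classification in Common Lisp (the type names are illustrative, not the interpreter's actual classes): given the lexical depth and frame offset discovered while walking the code, pick one of the five flavors.

(defun classify-variable (name depth offset)
  "DEPTH is the number of frames between the reference and its binding,
or NIL if the variable is free; OFFSET is its position within that frame."
  (cond ((null depth)                       (list :global-variable name))
        ((and (zerop depth) (zerop offset)) (list :argument0-variable))
        ((and (zerop depth) (= offset 1))   (list :argument1-variable))
        ((zerop depth)                      (list :argument-variable offset))
        (t                                  (list :lexical-variable depth offset))))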

So consider the form (lambda (n) (lambda (x) (+ x n))). n is a lexical variable bound one level deep. x is an argument and + is a global. The S-code representation of this form will have three different variable types. The S-code for n will be a LexicalVariable with a depth of 1 and offset of 0. The eval method for LexicalVariable can walk back one level in the lexical chain and then access the zeroth element of the environment.

On the other hand, the S-code for x will be an ArgumentVariable with an argument number of 0. The eval method for ArgumentVariable can call the GetArg0 method of the environment.

Finally, the S-code for + will be a GlobalVariable with a pointer to the global value cell for +. The eval method for this type of S-code variable can simply dereference the pointer.

Between specializing environments and S-code variables, we can greatly reduce the amount of time needed to access variables.

Joe Marshall: Specialized Environments

· 43 hours ago

A Lisp program conceptually does a lookup every time it evaluates a symbol. Since evaluating symbols is the single most common operation, you want it to be as fast as possible. In compiled code, symbol values might be stored in registers, on the stack, or in a vector, but in interpreted code they are usually stored in a structure called an environment. In compiled code, the symbol value can often be retrieved by a move instruction, but in interpreted code there may be a function call and a vector-ref needed to get the value. This is a significant performance hit.

We can customize the layout of an environment to speed up symbol lookup. In the general case, an environment is a fairly heavyweight data structure. Because you can eval a define form at any time, you need an environment that supports incremental definition and you need to deep search each time you do a lookup because someone might have defined a shadowing variable.

But 99% of the time, you don’t need the general case. If no one captures a first-class environment, then you don’t need to support incremental definitions because no one can call eval to define a new variable. If no one mutates a variable, then you can store a copy of the variable’s value directly in the environment instead of indirectly through a ValueCell object. Most environments only close over a couple of variables, so you can make the variables fields of the environment object instead of storing them in a vector of values. You statically know the number of variables, so you can store them in a fixed-size class instance.

An Environment is an abstract base class.

A GlobalEnvironment is a singleton class that keeps its bindings in a hash table.

A StandardEnvironment is a concrete subclass of Environment that supports assignment and incremental definitions. The bindings are kept in a vector of ValueCell objects and the incremental bindings are kept in a hash table. This is a general purpose environment.

class StandardEnvironment : public Environment {
  Vector<ValueCell*> bindings;
  HashTable<ValueCell*> incrementalBindings;
};

A variable lookup in a StandardEnvironment involves fetching the vector of value bindings from the environment, looking up the variable in the vector of bindings, and if it is not found, looking it up in the hash table of incremental bindings, and finally fetching the value from the cell. Several levels of indirection are involved.

A StaticEnvironment supports assignment, but not incremental definitions. Bindings are kept in a vector of ValueCell objects.

class StaticEnvironment : public Environment {
  Vector<ValueCell*> bindings;

  Object* GetArg0() { return bindings[0].Value; }

};

Looking up a value in a StaticEnvironment is slightly quicker because there is no table of incremental bindings to search.

An ImmutableEnvironment holds bindings in a vector and does not support assignment or incremental definitions. Binding values are kept directly instead of indirectly through ValueCell objects.

class ImmutableEnvironment : public Environment {
  Vector<Object*> bindings;

  Object* GetArg0() { return bindings[0]; }

};

Looking up a variable in an ImmutableEnvironment is quicker because there are no ValueCell objects to dereference.

A SimpleEnvironment is an abstract subclass of ImmutableEnvironment that does not support assignment or incremental definition. Bindings are kept in class fields.

A SimpleEnvironment0 is a concrete subclass of SimpleEnvironment that holds no bindings — these are created when you invoke a thunk. A SimpleEnvironment1 is a concrete subclass of SimpleEnvironment that holds one binding, and so on up to SimpleEnvironment3.

class SimpleEnvironment2 : public SimpleEnvironment {
  Object* var0;
  Object* var1;

  Object* GetArg0() { return var0; }

};

Looking up a variable in a SimpleEnvironment is quicker because you don’t have to indirect through a vector of bindings. A method as simple as GetArg0(), which simply returns a field of the instance, can often be inlined.

Environments are created by applying closures. There are subtypes of closures that correspond to the different kinds of environments. For example, a SimpleClosure2 is a closure that creates a SimpleEnvironment2.

Closures are created by evaluating lambda expressions. There are subtypes of lambda expressions that correspond to the different kinds of closures. For example, a SimpleLambda3, when evaluated, will create a SimpleClosure3.

When S-code lambda objects are created, they are analyzed to see if a first-class environment is captured, and if not, whether any of the variables are mutated. The appropriate subclass of lambda object is created based on this analysis.
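
A hedged sketch of that decision, expressed in Lisp for brevity. This is one plausible mapping based on the descriptions above, with the analysis results passed in as plain values rather than computed from the S-code.

(defun select-lambda-class (captures-environment-p mutates-variable-p n-closed-over)
  "Map the analysis results onto a lambda (and hence environment) flavor."
  (cond (captures-environment-p 'standard-lambda)  ; needs incremental definitions
        (mutates-variable-p     'static-lambda)    ; assignment, so ValueCells
        ((<= n-closed-over 3)
         (nth n-closed-over
              '(simple-lambda-0 simple-lambda-1 simple-lambda-2 simple-lambda-3)))
        (t 'immutable-lambda)))                    ; values kept in a vector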

Almost all environments end up being SimpleEnvironments and are quite a bit faster than the general case.

Neil Munro: Ningle Tutorial 2: Templates

· 44 hours ago

Introduction

Following on from tutorial 1, we will begin extending our application into the cliché "microblog" that is common in tutorials these days. This time we will be focusing on how to render templates, a typical thing to do in web applications, as it separates application logic from presentation.

We will be using djula as our templating system; it is analogous in function to jinja or django templates, so if you are familiar with those you should feel right at home.

In Ningle nomenclature, a route is a URL, and the function that is run when a client connects to the route is known as a controller; I will be following this terminology in my tutorial. This distinction was not important in the previous tutorial, but I believe it is worthwhile learning the correct terms for things and using them. In Computer Science, as in magic and demonology, things are much easier to reason about when you know their true names.

A Word About Application Design

One question I was asked many times when I was teaching was "how do you know what to do next?". Students were overwhelmed by all the work that had to be done; they weren't sure where to begin, or, if they had made a start, weren't sure what the next step was.

While I was working on the UK National Archive search engine, we were given 5 days to implement a new system from scratch. The team was me (a back end developer) and a front end developer, and we spent the first 4 hours of our first day discussing and deciding how we were going to work. We agreed to start from a very simple base: a backend that had no database connections and returned hard-coded data that we agreed the front end could work with, and the most basic of HTML forms, with no CSS, just the ability to GET/POST data to/from the routes.

This enabled my teammate to access a route and get realistic-looking data (even though it was fake), and I was able to use a basic HTML form to test the backend. Once this basic shared scaffold was in place we both began working on our actual tasks: he would begin styling the form, and I would, one by one, connect the routes to the database and return real data.

This is the approach we will be taking with this tutorial. To have a microblog application we will need a database, an authentication system, static files, etc., and we would very quickly get into the weeds before writing anything anywhere close to rewarding. We will be hard coding data in controllers initially, and we will later replace it with real data and real functionality once we get to that point.

In short, it's OK to not necessarily know what comes next. You can build things in small pieces, you can use stubs, scaffolding, etc., and it's fine to figure out the next piece as you go. I sometimes don't necessarily know what to build next, or indeed how to build it, but once I have something, it helps me because I can begin to see how the application comes together.

Adding Djula

In our previous tutorial we had to add clack and ningle to our asd file, in the :depends-on () section, after installing them with quicklisp. We repeat this process here, by first installing djula with quicklisp.

(ql:quickload :djula)

As we did previously, we must edit the ningle-tutorial-project.asd file. Find the :depends-on() section which will look like this:

:depends-on (:ningle
             :clack)

And we must add :djula to the end of the list, such that it looks like this:

:depends-on (:ningle
             :clack
             :djula)

Using Djula

Basic Concepts

Djula is a template system. If you are unfamiliar with what these are: in the context of web development, a template system allows you to create HTML pages that differ in only small ways, and inject the data that differs. In our microblog example, every user will have a timeline of sorts, and every timeline will be different, but it will share exactly the same structure, so the same template can be reused quite easily.

In short, a template is a file of pre-written content into which data may be injected to modify it.

Templates also allow you to extend other templates (which we will see in this tutorial), so if you wish to have a clear visual style shared across all templates you can create a base template that all others extend (which is exactly what we will be doing). In addition to passing data into templates, templates have some basic logic like branching (if) and loops (for), as well as filters for formatting or transforming data in your presentation logic.

It is worth exploring the key concepts of djula and how to use them, even if we don't in fact end up using them all in our final project. While this won't serve as a comprehensive djula tutorial, it will cover more than the basics we will initially use.

Integrating Djula

Before we get started working with djula directly, we must ensure it knows how to find our templates by adding a template directory to the list of locations djula searches. There's no particular place this has to be done, so I have elected to do it within the start function.

To remind ourselves, our original start function looked like this:

(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
    (clack:clackup
     *app*
     :server server
     :address address
     :port port))

We must add a single line into this to instruct djula to add our template directory.

(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
    ;; The line below sets up a template directory that djula can access.
    (djula:add-template-directory (asdf:system-relative-pathname :ningle-tutorial-project "src/templates/"))
    (clack:clackup
     *app*
     :server server
     :address address
     :port port))

Setting Up Templates

Of course, prior to running our application we must make sure that this directory exists and that a template is in it, so (once again assuming your quicklisp is installed at ~):

mkdir -p ~/quicklisp/local-projects/ningle-tutorial-project/src/templates
touch ~/quicklisp/local-projects/ningle-tutorial-project/src/templates/index.html

Now that the setup is in place, we can begin to look at the basics of djula.

An Example Without Djula

I'd like to start with what our code might look like if we weren't using djula, so that its benefit is quite apparent. We shall return to our index controller.

(setf (ningle:route *app* "/")
      (lambda (params)
        "Hello World"))

It simply returns a string and the web browser renders that string. It isn't HTML, so if we want the web browser to render HTML, we must send HTML.

(setf (ningle:route *app* "/")
      (lambda (params)
        (let ((user (list :username "NMunro")))
            (format nil "
                <!doctype html>
                <html>
                    <head>
                        <title>~A - Y</title>
                    </head>
                    <body>
                        <h1>Hello, ~A</h1>
                    </body>
                </html>" "Home" (getf user :username)))))

This isn't a lot of HTML, and that's even before we consider what to do if we want to render a different title or h1. We could of course use format to dynamically inject values into the string, but this would become increasingly complicated if we needed to repeat something or render parts of the HTML only conditionally; for example, what should we show if the user object is nil?

The HTML would grow quite a bit, and if you had shared styles and structure you'd constantly be editing it everywhere you used it, for even the smallest of changes. You could of course try to store the base HTML in a global variable, but as pages diverged with different needs this would just become even more problematic.

I trust you see the scope of the problem and the nature of the solution!

An Example With Djula

So let's look at how we might make this easier with djula.

We first need to edit src/templates/index.html:

<!doctype html>
<html>
    <head>
        <title>{{ title }} - Y</title>
    </head>
    <body>
        <h1>Hello, {{ user.username }}</h1>
    </body>
</html>

This almost looks like normal HTML, but for one unusual difference: the {{ title }} and {{ user.username }}. These are ways to inject a value from a controller into a template. In fact there are two template-specific pieces of syntax we are going to look at: values are wrapped in {{ }}, which we will learn about in just a moment, and {% %} is used for running an expression (branching, looping, etc.). In effect, {{ }} is for displaying something and {% %} is for doing something.

In our example here we are only going to display the values of two variables, title and user. To do this, we must edit our controller function:

(setf (ningle:route *app* "/")
      (lambda (params)
        (djula:render-template* "index.html" nil :title "Home" :user (list :username "NMunro"))))

We replace the string "Hello World" with a call to djula:render-template*. It takes the name of the template to render (in this case our "index.html" file), an optional stream (since we don't need to redirect output to another stream, we set this to nil), and any keyword arguments, which become the named variables passed into the template.

You can experiment with changing the values passed in (remembering to recompile your application) to see that the template is changed.

I do hope you agree that this looks much more manageable; it certainly cuts down on the amount of HTML you need to write and maintain. But I mentioned that we would see how expressions can be used. In this example, if we didn't pass through a title or user, nothing would show in the template. We might make a mistake, or perhaps load some data from a database and get no valid value back, and we would want to display something rather than nothing. That's where these expressions come in!

Conditional Rendering (branching)

Let's begin first by editing our index.html file.

<!doctype html>
<html>
    <head>
        {% if title %}
            <title>{{ title }} - Y</title>
        {% else %}
            <title>Welcome to Y</title>
        {% endif %}
    </head>
    <body>
        <h1>Hello, {{ user.username }}</h1>
    </body>
</html>

We have made a slight change to the template: it will render the title passed in if one is given; however, if for whatever reason it is omitted, is nil, or is "", then a default title is rendered instead.

If you wish, experiment by removing the :title argument from the djula:render-template* function call and see how this affects the way the template is rendered.
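
For instance (a throwaway experiment, not part of the final code), calling the template without :title makes the {% else %} branch supply the default title:

(setf (ningle:route *app* "/")
      (lambda (params)
        (djula:render-template* "index.html" nil :user (list :username "NMunro"))))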

Repeated Rendering (loops)

Our microblog application must, sooner or later, begin to render posts by other users. It is trivial enough to pass posts from our controller to our template; it's not any more difficult than what we have done previously, just more code.

(setf (ningle:route *app* "/")
      (lambda (params)
        (let ((user  (list :username "NMunro"))
              (posts (list (list :author (list :username "Bob")  :content "Experimenting with Dylan")
                           (list :author (list :username "Jane") :content "Wrote in my diary today"))))
          (djula:render-template* "index.html" nil :title "Home"
                                                   :user user
                                                   :posts posts))))

The controller has been modified to mock our microblog data: in this case something representing a logged-in user, and a list of posts that might be representative of a timeline feature. Now that we have a list of posts, we shall have to use a loop in our template to print each post.

As discussed in the previous section, we must use the {% %} expression braces to do things in our templates, and a loop is indeed doing something: it's doing something repeatedly! So, we return to our index.html file and modify it to look like the following:

<!doctype html>
<html>
    <head>
        {% if title %}
            <title>{{ title }} - Y</title>
        {% else %}
            <title>Welcome to Y</title>
        {% endif %}
    </head>
    <body>
        <h1>Hello, {{ user.username }}</h1>
        {% for post in posts %}
            <div>
                <p>{{ post.author.username }} says: <b>{{ post.content }}</b></p>
            </div>
        {% endfor %}
    </body>
</html>

The change is introduced under the <h1> tag: we use {% %} to run a for loop (well, technically a foreach loop) that iterates over the posts variable passed in from the controller. Each iteration creates a new <div> containing information about the post, including its author and the post content.

Our basic microblog app is beginning to take shape, but there's one final aspect of djula that we must look into to assist us in overall template management.

Template Inheritance

The best practice for working with templates is to use inheritance, so that you can build base templates upon which to create more specialized templates without worrying about repeated, shared HTML. So, we shall restructure our application to create a base.html which our index.html will extend.

<!doctype html>
<html>
    <head>
        {% if title %}
            <title>{{ title }} - Y</title>
        {% else %}
            <title>Welcome to Y</title>
        {% endif %}
    </head>
    <body>
        <div>Y: <a href="/">Home</a></div>
        <hr>
        {% block content %}
        {% endblock %}
    </body>
</html>

This is our base.html file. Much of it will appear familiar; in fact it looks almost like our index.html file. However, where the posts were, a new expression is introduced: the block. Blocks are given a name, in this case content, but in reality they can be called whatever you want. You might have {% block nav %}, {% block header %}, {% block footer %}, whatever you want really, and blocks are closed with {% endblock %}.

In a base template such as our titular base.html these blocks are often empty, but as you can see below in our rewritten index.html, we override the content of these blocks.

{% extends "base.html" %}

{% block content %}
    <h1>Hi, {{ user.username }}!</h1>
    {% for post in posts %}
      <div>
        <p>{{ post.author.username }} says: <b>{{ post.content }}</b></p>
      </div>
    {% endfor %}
{% endblock %}

The first thing we do is extend the base.html file. Having done that, we inherit everything from the base and do not need to rewrite all the common, shared HTML that we wrote in the base.html file. As mentioned above, we open the content block (in the same manner as it was defined in base.html) and fill in only the HTML we need to display.

Conclusion

To recap, after working your way through this tutorial you should be able to:

  • Describe, discuss, and explain what a template is.
  • Explain why templates are beneficial.
  • Create a base template that is extended by another template.
  • Construct a route and controller that renders a template.

I hope that you have found this helpful. It's not a comprehensive introduction to djula by any stretch of the imagination, and we will be revisiting it later on in this tutorial series; we will need to look into filters at a later date (when our application needs them). In the meantime I would suggest that you read the djula documentation for further information.

In the next tutorial we will be looking at how to render static files (images, css, js, etc) using lack middleware.

Github

The link for this tutorial is available here.

vindarel: Towards a Django-like database admin dashboard for Common Lisp

· 2 days ago

This is an ongoing work I wanted to share sometime this year. It isn’t ready, it isn’t released, but if you are a motivated lisper I can add you to the GitHub repository and you can try the demo.

We all want more tools for easier and faster web development in CL, I guess, right? An automatic database admin dashboard is an important component for me, both for personal use and development purposes, but also for client-facing apps, at least at the beginning of a project.

What I started is based on the Mito ORM. You define your DB tables as usual, and then this package comes into play. Let’s call it mito-admin.

What’s more or less working so far is:

  • choose which tables to display (let’s say we have books, that can be in one shelf, and have many tags)
  • you get a welcome page. The tables are listed on the left side of the admin.
  • click on a table and see a list of records (with pagination, which module is already published as cosmo-pagination. See also lisp-pagination)
  • CRUD actions on records:
    • create
    • update
    • delete
  • with CSRF protection
  • with form validation
  • with automatic handling of relations and custom HTML widgets
    • a one-to-many gives you a select field
    • (there’s a lot to do here)
  • a search bar
  • built-in login, user auth and rights
  • all this being customizable with CLOS fields and methods
  • light and dark themes thanks to Bulma CSS
  • for SQLite, Postgres, MySQL.

There’s a demo in the project. Here’s how it works.

Models

Here are 3 Mito models: a book, a shelf, tags.

This is regular Mito and regular class definitions (note the :metaclass option).

We are excluding the print-object methods for brevity.

(in-package :mito-admin-demo)

(defparameter *db-name* "db.db"
  "SQLite DB name.")

(defvar *db* nil
  "DB connection object.")

(defclass shelf ()
  ((name
    :initarg :shelf
    :accessor name
    :initform ""
    :col-type (or :null (:varchar 128))))
  (:metaclass mito:dao-table-class)
  (:documentation "Shelf: where is the book located.")
  (:unique-keys name))

(defclass tag ()
  ((name
    :initarg :name
    :accessor name
    :initform nil
    :col-type (or :null (:varchar 128))))
  (:metaclass mito:dao-table-class)
  (:documentation "A book can have many tags (categories).")
  (:unique-keys name))

;; necessary intermediate table for m2m.
(defclass book-tags ()
  ((book :references book)
   (tag :col-type tag))
  (:metaclass mito:dao-table-class))


(defclass book ()
  ((title
    :accessor title
    :initarg :title
    :initform nil
    :type string
    :col-type (:varchar 128))

   (title-ascii
    :accessor title-ascii
    :initform nil
    :col-type (:varchar 128)
    :documentation "Same title, only ascii. Processed after the book creation. Used for search and URI slugs.")

   (shelf
    :accessor shelf
    :initform nil
    :col-type (or :null shelf)
    :documentation "A card has only one shelf.")

   (tags
    ;; This column is even optional, we can do everything with the intermediate table
    ;; and a method that collects the tag for a book.
    ;; In fact, this slot won't be populated by Mito, we'll see "slot unbound".
    :accessor tags
    :initform nil
    :col-type (or :null book-tags)
    :documentation "A book can have many tags, aka categories.")

   (cover-url
    :accessor cover-url
    :initarg :cover-url
    :initform nil
    :type (or string null)
    :col-type (or (:varchar 1024) :null))

   (review
    :accessor review
    :initarg :review
    :initform nil
    :type (or string null)
    :col-type (or :text :null)
    :documentation "Let's write reviews about our favourite books."))

  (:metaclass mito:dao-table-class)
  (:documentation   "Book class, simplified. After modification of the DB schema, run (migrate-all)."))

To create the database, you’ll need a couple more Mito invocations. See its README or the Cookbook.
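
For reference, a minimal sketch of those invocations, assuming a local SQLite database as above (see the Mito README for the details):

;; Connect, create any missing tables, then apply schema changes.
(mito:connect-toplevel :sqlite3 :database-name *db-name*)
(mapc #'mito:ensure-table-exists '(shelf tag book-tags book))
(mapc #'mito:migrate-table '(shelf tag book-tags book))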

Select tables to show

We need to select the tables we’ll make available in the admin.

Override the mito-admin::tables method:

(defmethod mito-admin::tables ()
  '(
    book
    shelf
    tag
    ))

I deleted some inline comments, but there’s something to keep in mind. We need a list of all classes for Mito, so that it runs the migrations, and another one of a subset of classes for the admin. There’s only one list for now but that’s easy to fix.

Start the admin

Call mito-admin:connect, and bootstrap users and their base roles:

    ;; users and roles:
    (mito-admin-auth/v1::ensure-users-and-roles)

    ;; base roles.
    (mito-admin-auth/v1::bootstrap-base-roles)

Configure the admin

The first thing to do is to register an app name:

(mito-admin::register-app :cosmo-admin-demo)

which is here our lisp’s package. Working in a Lisp image allows you to work on multiple (web) apps at the same time (at least with Hunchentoot). So, in a web library you’ll need a layer of indirection if you want to support this use case. I don’t for now, but registering the app’s *package* is necessary internally to resolve table names to fully-qualified symbols. As a user, you shouldn’t need to know all that, but as a tester you might.

You can override the render-slot method to change the default representation of a record’s slot. Here we turn a book’s cover-url from a string to an HTML anchor. This can be automated and abstracted. Another TODO.

(defmethod mito-admin::render-slot ((obj book) (slot (eql 'cover-url)))
  "A book cover URL must be a <a> tag with an href."
  (let ((val (mito-admin::slot-value? obj slot)))
    (if (str:non-blank-string-p val)
        (format nil "<a href=\"~a\"> ~a </a>" val val)
        "")))

Form handling

This is the most important, and unfinished, part of the admin.

We could maybe use cl-forms but I didn’t find that it maps well to this admin’s ABI. You might find it useful though, as it’s feature complete. It even has: client-side validation, sub-forms, Spinneret and Djula-based renderers, etc. Look at its demo.

We need to:

  • be able to include and exclude fields from the HTML forms
  • be able to choose a different HTML widget for a given field
  • validate forms
    • with custom validation logic
    • and nicely render errors
    • or create or update the records.

We currently need to create forms explicitly:

(mito-admin:define-forms '(shelf tag))

or also

;; We can define forms manually.
;; (we can override default slots, but we can also override them by redefining the accessor methods)
(defclass book-form (mito-admin::form)
  ())

A form class has several slots we may redefine and use later, such as a list of validators, or the fields to use for the search, fields to exclude, etc.

We can exclude fields:

(defmethod mito-admin::exclude-fields (book-form)
  "Return a list of field names (symbols) to exclude from the creation form."
  '(title-ascii
    ;; TODO: handle relations
    ;; we need to exclude shelf, or we'll get an error on mito:insert-dao if the field is NIL.
    ;; shelf
    shelf-id
    tags-id
    ))

As you see, there are more TODOs here. title-ascii is a private field that we don’t need to expose. That’s fine. shelf-id and all are Mito’s references to the other tables. We need to recognize them and exclude them.

To define an HTML widget, override mito-admin::field-input. Our demo uses a built-in template to render a select field from a list of options.

TODOs: there’s a lot to do here. Our admin app still requires too much configuration, we want it to be more automatic. Recognize the input types better, ship an async select2 input for many-to-many relations.

Form validation

To validate a record, override the validators method. It returns a hash-table that, for all the table’s fields, associates a list of validators.

(defmethod mito-admin::validators ((obj (eql 'book)))
  "To validate a book:

  - its title should not be equal to \"test\".

  Beware that we override the method MITO-ADMIN::VALIDATORS."
  (serapeum:dict 'title (clavier:~= "test"
                                    "this title is too common, please change it!")))

We use the clavier library for this. Here’s a short blog post that shows a quick usage and our :allow-blank passthrough and validate-all function (which were not accepted upstream btw (shit it was nearly a year ago)).

Our form validation mechanism shows a global error message, and a specific one under each input field.

You didn’t write a single HTML line for this o/

One-to-many relations

Those are correctly handled by Mito and it’s easy to have a custom widget in the admin.

Many-to-many relations

See here: https://github.com/fukamachi/mito/discussions/161

At present, Mito doesn’t do much for m2m relations, but we can simply write a short method that will select all related objects of a record.

Here’s for a book’s tags:

(defmethod tags (obj)
  ;; null case
  nil)

(defmethod tags ((obj book))
  (let ((book-tags
            (mito:select-dao 'book-tags
                (sxql:where (:= :book-id
                                (mito:object-id obj))))))
    (when book-tags
     (mapcar #'tag book-tags))))

How to use it:

(tags (mito:find-dao 'book))
;; => (#<TAG tag1> #<TAG tag2>)

(tags (mito:find-dao 'book :id 2))
;; NIL

Simple enough, but that’s another TODO: automate the creation of such methods.

Closing words

This project has too many moving parts for my taste, but I’ll get there.

I hate when people don’t release their useful library because “the code is meh” and yes, I’m doing that to you.

However, I extracted parts from OpenBookStore (WIP) (notably the login, users auth and rights, which were contributed by gnuxie), I published the little pagination module, contributed to a couple of libraries, and wrote blog posts. That’s already something for you.

You are at the very least helping as duck-brainstorming, so thanks. If you’d like to try the demo and look at horrible code, it should be doable.

I wonder under which license I’ll publish that.

There might be other ways for an admin panel and please try and cook us a plug-and-play solution. The proprietary Airtable with the new cl-airtable library or the open-source NocoDB (they give you a spreadsheet-like web UI + an API), or the lightweight Pocketbase, Mathesar or Supabase for Postgres (would you give this to your non-tech-savvy clients?), or NodeJS-based admin panels, etc, etc, etc. But a pure Common Lisp one? We’ll talk more about it in less than ten years, fingers crossed.


You can support E. Fukamachi for his work on Mito, cl-dbi, SxQL and all his other useful libraries.

Appendix: TODOs

View:

- [X] list of tables
- [-] list of records for each table
  - [X] see records
  - [X] pagination
  - [X] add "create" button
  - [ ] choose fields to display in search result
  - [ ] order records by field
- [X] a specific record
  - [X] view related column
  - [X] view some fields, ignore some fields

Search:

- [X] lax search on given fields
- [ ] more criteria

Create:

- [-] create a record
  - [X] ignore some fields
  - [X] choose related column
    - [X] select input for a to-1 relationship
    - [-] have a usable input widget for many-to-many relationships
      - [X] define a many-to-many in the demo: a book can have many tags.
      - [ ] define a default input widget (probably select2)
      - Mito doesn't help much with many-to-manys though.
  - [-] form handling
    - [X] form validation
      - [X] basics
    - [X] CSRF protection
      - in the create and edit forms.
  - [ ] form handling (cont)
    - [ ] subforms (create a new shelf in the card form)
    - [ ] client-side validation

Update:

- [X] update an existing record
  - [X] with same mechanism as create

Demo:

- [X] demo project with a couple tables and fields
  - [X] decouple the POC from openbookstore. Shit that wasn't that
    easy. <2024-03-21>
- [ ] display the app name instead of mito-admin (in title, header, footer).

Login:

- [-] admin user and rights
  - [X] base mechanism: users, roles, DB migrations (imported from
    OpenBookStore, again. Contributions by gnuxie). <2024-07-31>
  - [X] templates
  - [ ] add access rights to all the admin dashboard routes
  - [ ] add user logout dropdown in app

Actions:

- [ ] in the list of records, have actions: export to CSV, etc.


More:

- reference documentation website and tutorial
- unit tests
- include static assets (Bulma...) to work offline

i18n

- translations. See cl-gettext in OpenBookStore.

style:

- [X] dark mode (thanks Bulma 1.0)

Neil Munro: Ningle Tutorial 1: Hello World

· 2 days ago

Introduction

Welcome, you are about to begin your web development journey with Common Lisp and Ningle. Ningle is often the framework that comes up when searching for web development in Common Lisp, and having spent some time experimenting with it, I wanted to share my learning and ensure there was a tutorial that others could follow.

In this first chapter of what is likely to be an on-going saga, you will learn how to create basic web applications. It will be little more than simple routes but it will highlight the foundations upon which we will build.

There will be a GitHub link for each chapter which you can download if you wish; however, nothing helps you learn like typing things out yourself. Trust me, the muscle memory is a useful thing to have.

Unless otherwise stated however, each chapter will build upon the last, growing the application as a whole, and you do not need to create new projects each time.

Prerequisites

It is assumed that you already have Common Lisp installed and have installed quicklisp; installing these is beyond the scope of this tutorial.

If you need a setup guide, you can consult my guide from 2020.

I will also be using my project setup nmunro-project. This is not in quicklisp, and as such you will want to git clone it into your quicklisp/local-projects/ directory (wherever that happens to be on your system) or download and extract the zip.

For the purposes of illustration I will assume you have quicklisp installed in ~ and your local-projects is at ~/quicklisp/local-projects. If this is different, please adjust the commands accordingly.

Once nmunro-project is installed into your local-projects, simply run:

(ql:quickload :nmunro-project)
(nmunro-project:make-project #p"~/quicklisp/local-projects/ningle-tutorial-project")

This will create a new project within which we can get started.

NOTE: If you want to use cl-project you are also welcome to use that; the only reason I built my own project skeleton was that for a time cl-project was broken on my system and I never switched back.

Setting Up A Ningle Project

If you used my project builder you should have a directory listing like so:

├── README.md
├── ningle-tutorial-project.asd
├── src
│   └── main.lisp
└── tests
    └── main.lisp

The first thing that needs to be done is to install ningle and mark it as a dependency of this project. To install ningle you can use quicklisp like so:

(ql:quickload :ningle)

This will also install clack and lack, which we will be using quite a bit in later tutorials; it's a bit much to go into them in depth at this point. However, as with most other programming languages, there are multiple different web servers, and as such an abstraction layer is needed to easily switch between them if required; this is what clack provides. Lack is more the nuts and bolts of web development, as we will see in time.

Now that ningle and its dependencies have been installed we must integrate it into our project; the ningle-tutorial-project.asd file is where we do this.

Within this file is a line :depends-on (); you must edit this line to include ningle and clack:

:depends-on (:ningle
             :clack)

This is all that is required to set up the project with the required dependencies. Considering ningle, clack, and lack all work together, you might expect ningle to depend directly on clack and lack; however, to ensure things stay loosely coupled, ningle was made somewhat agnostic about how it connects to a web server, and presumably some other connection framework could replace clack.

Building Your First Ningle Application

To get started on your first ningle powered application you will need to edit src/main.lisp:

Set Up Package

As with most modern Common Lisp development, it is customary to create a package in which to store our application code and to switch into that package.

(defpackage ningle-tutorial-project
  (:use :cl)
  (:export #:start
           #:stop))

(in-package ningle-tutorial-project)

This is all pretty typical Common Lisp work and isn't ningle specific; the real work begins with this line:

(defvar *app* (make-instance 'ningle:app))

This will use the standard function make-instance to create a new instance of the class app from the ningle package; it is on this application object that we will build our app.

Defining A Route

Defining the routes our application will have is quite simple. Ningle has adopted Sinatra-style routing, which is a colloquial way to express web routes in application code; it comes from Ruby and has a popular following.

In the following code, we can see that we use the Common Lisp setf function to set a function to run when a given route is accessed on the server, in this case the root route ("/"). You could define functions elsewhere and reference them here, or just use a lambda function for convenience (a named-function alternative is shown after the example below). When this route is accessed, the string "Hello world" will be sent to the client.

(setf (ningle:route *app* "/")
  (lambda (params)
    "Hello world"))

We will be looking at routes in more detail in a future tutorial.

Handling 404

The error code we love to hate, but it's absolutely essential to have! Handling errors is very important in web development, and the simplest and easiest error to handle is the 404! If we did not define this error handler, our server would never send a response to the client at all, which is even worse.

(defmethod ningle:not-found ((app ningle:<app>))
    (declare (ignore app))
    (setf (lack.response:response-status ningle:*response*) 404)
    "Not Found")

In this code we write a method that overrides the default ningle:not-found method, and since the application object isn't actually used in this method, we ensure we declare that the app object is ignore-d. While I said lack wouldn't feature heavily at this point, we do see it used here; perhaps it's obvious to you, perhaps it isn't. As when defining a route above, we use setf to set the response-status of the lack response to the value 404.

Lack is where request and response objects live, and when we need to work with them in greater depth we will. For now, however, just accept that this is how we correctly set the server response status code to the 404 we need.

As above, we return a string from this method; "Not Found" will be sent as the response content to the client.

Starting The Server

To start the server we must use some clack tooling; it is common to create a start function that will instruct clack to bring a web server online and run our application.

(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
    (clack:clackup
     *app*
     :server server
     :address address
     :port port))

There are many different ways to configure clack, with hunchentoot, woo, and wookie all being supported web servers. In my example I have settled on using woo, setting the address to the standard loopback address of 127.0.0.1 and the port to 8000. The clackup function takes our application object and the keyword arguments and brings it all online for us.

Stopping The Server

As with starting the server, clack is also responsible for bringing the server down and cleaning up for us. clack:stop requires a running instance (the object returned by clackup) to shut down, and as such we will need to store the object returned by our start function somewhere in the event that we want to bring it down later.

(defun stop (instance)
    (clack:stop instance))

Running It

So, because we need to store the running instance somewhere, you can include the following in your code as you develop in your editor, evaluating them as needed.

(defvar *instance* (start))
(stop *instance*)

NOTE: Should you be starting this from within the Common Lisp REPL, please remember that the application will come online and immediately be shut down. The above two lines are shown for interactive development only, will not be shown in the complete listing below, and certainly should not be included in production!

Full Listing

(defpackage ningle-tutorial-project
  (:use :cl)
  (:export #:start
           #:stop))

(in-package ningle-tutorial-project)

(defvar *app* (make-instance 'ningle:app))

(setf (ningle:route *app* "/")
      (lambda (params)
        "Hello world"))

(defmethod ningle:not-found ((app ningle:<app>))
    (declare (ignore app))
    (setf (lack.response:response-status ningle:*response*) 404)
    "Not Found")

(defun start (&key (server :woo) (address "127.0.0.1") (port 8000))
    (clack:clackup
     *app*
     :server server
     :address address
     :port port))

(defun stop (instance)
    (clack:stop instance))

Conclusion

To recap, after working your way through this tutorial you should be able to:

  • Create a simple ningle application with a single route.
  • Create a 404 error handler.
  • Create start and stop functions utilizing clack to bring an application up/down.

Once the application is up and running you will be able to interact with the index view in your web browser. Admittedly it isn't much right now, however this is only the beginning of your journey.

I hope this has given you a taste for the basics of web development in Common Lisp; in the next tutorial we will look at using djula to allow us to create and re-use templates.

Github

The link for this tutorial is available here.

Joe Marshall: Interpreting S-code

· 2 days ago

Lisp programs are not text but rather nested lists of symbols (atoms) and other lists. By default, a symbol represents a variable and a list represents a function call, but some lists are special forms that don’t follow the normal function call semantics.

S-code is an alternative representation for Lisp programs. Instead of nested lists, S-code is a set of nested data structures. Each S-code type corresponds to a Lisp form. There is an almost trivial isomorphism between S-code and the list representation of Lisp. S-code is a bit easier to interpret and compile than the list representation because the data structures representing the code can contain auxiliary information to aid in interpretation of the code. For example, a variable reference in S-code can contain the lexical depth and offset of the variable.

Converting from nested lists to S-code is a process called “syntaxing”. The nested lists are walked, the macros are expanded, auxiliary information is collected, and the S-code is generated. The S-code can be directly interpreted or compiled to machine code. “Unsyntaxing” is the process of converting S-code back to nested lists and it basically involves a walk of the data structure, discarding the auxiliary information, and collecting a tree of lists, thus recovering the original Lisp program (modulo macro expansion).

The MIT-Scheme hardware (Scheme 79, Scheme 81, and Scheme 86) directly executed S-code. MIT-CScheme is a C and assembly program that interprets S-code and runs compiled S-code. MzScheme and Racket also use a similar nested data structure representation of Scheme programs. I don’t think they call it “S-code”, but it is essentially the same thing. No doubt other Lisp and Scheme interpreters use this technique.

(The Lisp machine, by contrast, compiled Lisp programs to a byte-code that was interpreted by the microcoded Lisp machine hardware. I believe that MzScheme and Racket now include a JIT compiler.)

A few years back, for kicks, I wrote an S-code interpreter for Scheme. I decided to write it in C# so that I could leverage the .NET Common Language Runtime. I wouldn’t have to write a garbage collector, for example. I wrote it to interpret the S-code generated by MIT-Scheme. This way I could leverage the MIT-Scheme runtime as well. But this interpreter is not simply a port of the C implementation of MIT-Scheme. Many parts of the interpreter work completely differently from the MIT C implementation. Among them:

  • The C# call stack is used to hold the continuations. Subproblem continuations are C# continuations. Nested Scheme function calls appear as nested C# function calls, so the C# debugger can be used to debug Scheme code.
  • Tail recursion is handled by a trampoline at each function call boundary. To tail call a function, you return the function, the arguments, and a special marker to the caller’s trampoline, which will make the call on your behalf. (A minimal sketch of this follows the list.)
  • First class continuations are handled by lightweight stack inspection. In addition to a special marker for tail recursion, there is also a special marker for continuation capture. If this is returned, the caller’s trampoline will evacuate its current stack frame from the stack onto the heap and propagate the special marker up the stack.
  • Eval dispatch is handled through virtual method dispatch. Each type of S-code object has an eval method that is specialized for the type. I was curious if the virtual method dispatch was fast enough to be in the main interpreter loop.
  • The S-code interpreter is instrumented to discover the “hot” paths through the code. Certain patterns of S-code that are determined to be dynamically frequent can be replaced by S-code “superoperators” that provide specialized higher performance evaluation paths that avoid recursive evaluation steps.
  • Variable assignments are eliminated through cell conversion. All variables are ultimately immutable. Variables that were assigned to are converted to mutable cells that are allocated on the heap. This means we can use a flattened environment structure, but calls to procedures with mutable variables will cons.
  • The nested environment structure was replaced with flattened environment vectors. Shared immutable lexical variables are copied, and since mutable variables are stored in cells, their references are copied. Lexical variable lookup is constant independent of the lexical depth, but closure construction becomes linear in the number of variables being closed over.
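
To make the trampoline idea concrete, here is a minimal Common Lisp sketch. The real interpreter does this in C#; the names below are illustrative.

(defstruct tail-call function arguments)

(defun trampoline (thunk)
  "Keep applying TAIL-CALL records until a real value comes back."
  (let ((result (funcall thunk)))
    (loop while (tail-call-p result)
          do (setf result (apply (tail-call-function result)
                                 (tail-call-arguments result))))
    result))

;; Mutually recursive functions that signal tail calls instead of making them:
(defun my-evenp (n)
  (if (zerop n) t (make-tail-call :function #'my-oddp :arguments (list (1- n)))))

(defun my-oddp (n)
  (if (zerop n) nil (make-tail-call :function #'my-evenp :arguments (list (1- n)))))

;; (trampoline (lambda () (my-evenp 1000000))) => T, with a flat host stack.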

Limitations

C# does not provide a way to dump a heap snapshot, so it isn’t possible to save a Lisp image. Instead, the interpreter is “cold loaded” each time it is started. Indeed, the main problem with the interpreter is the startup time. It spends an inordinate amount of time initializing the unicode tables.

The major version of MIT-Scheme has been incremented. The C# interpreter fails to reach the prompt when cold loading the latest version. I haven’t tracked down the problem.

The C# interpreter assumes that the S-code abstraction is completely opaque, i.e. that the Scheme code knows nothing about the binary layout of S-code objects and only manipulates them through the provided primitives. This isn’t always the case, however, so the C# interpreter sometimes has to recognize that certain idiomatic patterns of S-code are actually attempting to emulate a primitive operation. For example, system-pair-car is a primitive that extracts the first element of any system object that has two elements, i.e. objects that MIT-Scheme implements as a cons cell even though not tagged as one. But the point of using C# was to hide the representation of Scheme objects from the Scheme code, so many objects are no longer implemented as cons cells. The interpreter notices calls to system-pair-car and instead extracts whatever field of the object MIT-Scheme would have put in the car.

The interpreter is not complete enough to host the SF program, which creates Scheme .fasl files from Scheme source. You need MIT-Scheme to create the .fasl files, which can then be loaded into the C# interpreter.

Now what?

I got increasingly frustrated with the limitations. It was taking too long to do a full debugging cycle. Playing whack-a-mole with the bugs was getting tedious.

I wanted to understand fundamentally why interpreters are so slow. In theory, shouldn’t the interpreter be able to perform the same operations as the compiled code? What exactly is “interpreter overhead”? Shouldn’t it be able to run in the same ballpark as the compiled code? What would it take to make a high-performance interpreter?

I discovered some of the answers. A large part of the overhead comes from instruction dispatch. Compiled code has its instructions dispatched in hardware, which often has dedicated circuitry to read and process the instruction stream. Interpreters do the dispatch in software. You can make the interpreter faster by combining instructions and dispatching instruction combinations. There is a combinatorial explosion of instruction combinations, however, so you only want to combine the most common instruction sequences.
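
To make “combining instructions” concrete, here is a toy sketch in Lisp. These are not MIT-Scheme’s or the C# interpreter’s actual node types (the struct names are made up): a generic walker pays one dispatch per node, while a combined node recognized at syntax time folds two dispatches into one.

(defgeneric seval (node env)
  (:documentation "Walk one S-code-style node in ENV (an alist)."))

(defstruct var-ref name)          ; a variable reference
(defstruct app-car operand)       ; (car <operand>), generic form
(defstruct car-of-var name)       ; combined node: (car <variable>)

(defmethod seval ((node var-ref) env)
  (cdr (assoc (var-ref-name node) env)))

(defmethod seval ((node app-car) env)
  ;; One dispatch for this node, plus another for the operand.
  (car (seval (app-car-operand node) env)))

(defmethod seval ((node car-of-var) env)
  ;; The combined node does the same work with a single dispatch.
  (car (cdr (assoc (car-of-var-name node) env))))

;; (seval (make-app-car :operand (make-var-ref :name 'x)) '((x 1 2 3)))  => 1
;; (seval (make-car-of-var :name 'x) '((x 1 2 3)))                       => 1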

I wrote a number of hand-coded instruction combinations, but it got tedious and I realized that I wanted an automatic way to generate instruction combinations for common instruction sequences. That is, I wanted a JIT compiler. I put the project on the shelf at this point because a JIT compiled S-code interpreter is another project in itself.

I decided to blog a bit about some of the more unusual features of this interpreter, so watch this space if you are interested.

Joe Marshall: Composing Binary Functions

· 5 days ago

Functions that take one argument and return one value are cool because you can make pipelines out of them. You just send the output of one function into the input of the next. Of course the output of the first function must be of a type that the second function can accept. Or you can just stick with functions that take and return the same type, and then you can compose these as desired.

But what if you want to compose binary functions that take two arguments and return two values? There are a couple of ways to take multiple arguments: you can pass them in a list (or vector), or you can pass the arguments one at a time by currying the function.

(defun curry (B)
  (lambda (l)
    (lambda (r)
      (funcall B l r))))

There are two lambdas here. The outer lambda accepts the left argument and returns the inner lambda. The inner lambda accepts the right argument and calls the original binary function with both arguments.

To call a curried function, you first call it with the left argument. This will return a lambda that you call with the right argument.

(funcall (funcall curried-binary-function left) right)

The two-argument identity function would be curried like this:

(defun curried-values (left)
  (lambda (right)
    (values left right)))

Composing two curried functions is a little complicated. Let us suppose that F and G are curried binary functions. Further assume that G has already been applied to its left argument, i.e., it is an inner lambda of a curried binary function. We want to compute the inner lambda of F composed with G.

(defun curried-compose (f g)
  (lambda (right)  ; return an inner lambda
    (multiple-value-bind (vall valr) (funcall g right)
      (funcall (funcall f vall) valr))))

Here is how it works. We return an inner lambda that accepts the right argument of the composition. We call the inner lambda G on this argument and collect the two values it returns. We then pass the two values one at a time to F.

If we did this right, we should end up with these identities:

(curried-compose #'F (curried-values left))
    ==> (F left)

and

(curried-compose #'curried-values G-inner)
    ==> G-inner

furthermore, curried-compose should be associative:

(curried-compose (curried-compose F G) H)
    ==> (curried-compose F (curried-compose G H))

Let’s try it out.

(defun curried-cons (left)
  (lambda (right)
    (cons left right)))

;; Check first identity.
(let ((x (curried-compose #'curried-cons (curried-values 'car))))
  (assert (equal (funcall x 'cdr) '(car . cdr))))

;; Check second identity.
(let ((x (curried-compose #'curried-values (curried-cons 'car))))
  (assert (equal (funcall x 'cdr) '(car . cdr))))

;; Check associativity.
(defun curried-sum-difference (left)
  (lambda (right)
    (values (+ left right) (- left right))))

(let ((x (curried-compose
           #'curried-cons
           (curried-compose
             #'curried-sum-difference
             (curried-values 7))))
      (y (curried-compose
           (lambda (v)
             (curried-compose #'curried-cons
               (curried-sum-difference v)))
           (curried-values 7))))
  (assert (equal (funcall x 3) (funcall y 3))))

Once a curried binary function has been applied to its left argument, the resulting inner lambda is closed over that argument and is waiting for the right one. We should be able to compose various inner lambdas to make a pipeline. The pipeline will daisy chain the right argument from one curried function in the pipeline to the next.

Pipelines like this have two properties we want to exploit. First, they force an order of execution on the functions in the pipeline: the earlier stages in the pipeline have to produce an answer before the later stages consume it, even if all the functions in the pipeline are lazy and purely functional. Second, we can abstract out the argument we are daisy chaining through the stages.

Let’s use this in a real problem. Our task is to build a binary tree where each node has a serial number. The serial number will be our right-hand argument, which is daisy-chained through the computation.

(defun next-sn (sn)
   (values sn (1+ sn)))

(defun curried-make-node (depth children)
  (curried-compose
    (lambda (sn)
      (curried-values (cons (cons depth sn) children)))
    #'next-sn))

(defun curried-make-tree (depth)
  (if (zerop depth)
      (curried-make-node depth nil)
      (curried-compose
        (lambda (left-branch)
          (curried-compose
            (lambda (right-branch)
              (curried-make-node depth (list left-branch right-branch)))
            (curried-make-tree (1- depth))))
        (curried-make-tree (1- depth)))))

curried-make-tree returns a curried function. It hasn’t built a tree; it has built a function that builds the tree once the serial number is passed in.

Notice these two things: we have no global state or assignment, but we get a serial number that is incremented for each node. The serial number is passed around as the curried right-hand argument and returned as the right-hand value. Notice, too, that curried-make-tree has no mention of the serial number. It is a hidden variable.

curried-compose is reminiscent of a let expression, so we can create some analogous syntactic sugar:

(defmacro LetM (((var val) &rest more) &body body)
  `(curried-compose
     (lambda (,var)
       ,@(if more
            `((LetM ,more ,@body))
            body))
     ,val))

(defun curried-make-tree (depth)
  (if (zerop depth)
      (curried-make-node depth nil)
      (LetM ((left-branch (curried-make-tree (1- depth)))
             (right-branch (curried-make-tree (1- depth))))
        (curried-make-node depth (list left-branch right-branch)))))

There is a special name for the inner lambda that is expecting the right hand argument. It is a “monad”. curried-values is the monad “unit”, and curried-compose is the monad “bind”.

As I mentioned in a previous post, it makes more sense to use this approach in a lazy functional language. The chain of curried functions forces sequential evaluation because the output of each curried function must be computed before it becomes the input of the next curried function. The right-hand argument that is threaded through the curried calls can be used as a “store” that is (functionally) updated by each curried function before passing it along, giving you the illusion of mutable storage.

A monad is how you embed an imperative language within a pure, lazy, functional language. But in a mixed language like Lisp, you can force sequential evaluation with progn and the store is actually mutable, so there is less of a need for monads. Nonetheless, they provide a way to “hide” a variable that is threaded through the curried functions, and that can come in handy. For example, if you are doing I/O, you can hide the stream variable so you don’t have to pass it along to every intermediate function.
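
As a tiny sketch of that idea using the machinery from this post (curried-emit is a made-up helper): the stream is the hidden right-hand value that gets threaded through, and the LetM body never mentions it.

(defun curried-emit (text)
  (lambda (stream)                  ; the hidden right-hand argument
    (format stream "~a" text)
    (values text stream)))          ; pass the stream along

(defun emit-greeting ()
  (LetM ((x (curried-emit "Hello, "))
         (y (curried-emit "world!")))
    (curried-values (list x y))))

;; (funcall (emit-greeting) *standard-output*)
;; prints "Hello, world!" and returns ("Hello, " "world!") and the stream.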

Joe Marshall: Dynamic Variables in Go

· 5 days ago

Dynamic binding is an evaluation rule for free variables. The rule is that the value of a free variable in a lambda expression is the value bound in the nearest dynamically enclosing scope. This is in contrast to static binding, where the value is the value bound in the nearest lexically enclosing scope. A naive interpreter that saves variable bindings in a stack frame will usually end up implementing dynamic binding.

In general, static binding is preferred because it is easier to reason about. However, dynamic binding is sometimes very useful in handling context. Lisp provides both static and dynamic binding, with the default being static binding.
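
To make the contrast concrete, here is a small Common Lisp illustration (the variable names are made up):

(defvar *setting* 'top)                ; DEFVAR proclaims *SETTING* special, i.e. dynamic

(defun current-setting () *setting*)   ; a free reference, resolved dynamically

(current-setting)                      ; => TOP

(let ((*setting* 'temporary))          ; rebinds for the dynamic extent of the LET
  (current-setting))                   ; => TEMPORARY

(current-setting)                      ; => TOP again

(let ((setting 'lexical))              ; an ordinary (static) binding
  (flet ((grab () setting))            ; free reference to the lexical variable
    (let ((setting 'shadow))           ; a new binding does not affect GRAB
      (declare (ignorable setting))
      (grab))))                        ; => LEXICAL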

Golang provides lexical binding, but not dynamic binding. You can mimic the effect of dynamic binding by saving the variable's value, assigning it a new value, and then restoring the old value in a defer statement. This trick won't work in a multi-threaded environment, though: each thread has to manage its own value of the variable, separate from the values in other threads.

But what if the implementation doesn't tightly bind continuations to threads? What if the implementation uses a thread pool, and the same thread is used for different goroutines at different times? In that case, the trick of saving and restoring the value of a variable won't work. The value of the variable will be shared among different goroutines, and the value will be unpredictable. To make things work, your dynamic variables should be continuation-local.

But golang doesn't provide continuation-local variables, and it seems to go out of its way to keep code from inspecting the current continuation. There is a trick that you can use.

We need a way to allocate synthetic continuation IDs and bind them to the chain of pending calls. We also need a way that a subroutine can inspect the current chain of pending calls and determine the synthetic continuation ID.

Allocating a synthetic continuation ID is easy enough by maintaining a bitmap of free continuation IDs. When we need a continuation ID, we grab a mutex and search the bitmap for an unused continuation ID. We set the bit to allocate the ID and release the mutex. To free the continuation ID, we reset the bit. Any time you want to use dynamic variables, you need to allocate a synthetic continuation ID early on. This will be used as an index into a value table for each dynamic variable.

var (
	continuationIdInUse [MaxContinuations]bool = [MaxContinuations]bool{}
	continuationIdMutex sync.Mutex       = sync.Mutex{}
)

func AllocateContinuationId[T any](receiver func(tid uint8) T) T {
	continuationIdMutex.Lock()

	for tid, inUse := range continuationIdInUse {
		if (tid > 0) && !inUse {
			continuationIdInUse[tid] = true
			continuationIdMutex.Unlock()
			return DeallocatingContinuationId(uint8(tid), receiver)
		}
	}

	panic("No available continuation ids")
}

func DeallocateContinuationId(tid uint8) {
	// Reset the bit under the allocator's lock to avoid racing with its bitmap scan.
	continuationIdMutex.Lock()
	continuationIdInUse[tid] = false
	continuationIdMutex.Unlock()
}

func DeallocatingContinuationId[T any](tid uint8, receiver func(tid uint8) T) T {
	defer DeallocateContinuationId(tid)
	return receiver(tid)
}

Golang doesn’t offer much in the way of stack introspection or manipulation, but you can easily get a backtrace of the names of the pending function calls. So we’ll encode the synthetic continuation ID as a nested set of function calls. When we need to read the current continuation ID, we get a backtrace of the stack and look for the special nested set of function calls and decode them.

func Mark0[T any](count int, value uint8, thunk func() T) T {
	if count == 0 {
		return thunk()
	} else if value%2 == 0 {
		return Mark0(count-1, value/2, thunk)
	} else {
		return Mark1(count-1, (value-1)/2, thunk)
	}
}

func Mark1[T any](count int, value uint8, thunk func() T) T {
	if count == 0 {
		return thunk()
	} else if value%2 == 0 {
		return Mark0(count-1, value/2, thunk)
	} else {
		return Mark1(count-1, (value-1)/2, thunk)
	}
}

func MarkByte[T any](value uint8, thunk func() T) T {
	if value%2 == 0 {
		return Mark0(7, value/2, thunk)
	} else {
		return Mark1(7, (value-1)/2, thunk)
	}
}

When we call MarkByte on a byte, we’ll make a series of eight nested function calls to Mark0 or Mark1 depending on the bits in the byte. We can use the golang backtrace mechanism to find the nested function calls and determine what the byte was:

func ReadMark() int {
	pc := make([]uintptr, 256)
	n := runtime.Callers(0, pc)

	frames := runtime.CallersFrames(pc[:n])
	value := 0
	for {
		frame, more := frames.Next()
		if !more {
			break
		}

		if strings.Contains(frame.Function, "Mark0") {
			value *= 2
		} else if strings.Contains(frame.Function, "Mark1") {
			value = value*2 + 1
		}
	}
	return value
}

Now when we spin off a goroutine that will use dynamic variables, we must be careful to call WithThread some time early on in the goroutine:

func MakeMyGoroutineEntry() func() {
	return func() {
		_ = WithThread(func() int {
			// There are 8 pending calls to Mark0/Mark1 here,
			// so ReadMark() recovers the represented value.

			// ... body of callback ...
			return 0
		})
	}
}

  // ... some code somewhere ...
  myGoroutine := MakeMyGoroutineEntry()
  go myGoroutine()

Now we can use the continuation ID to index into a table of values:

type GlVal[T any] struct {
	values [MaxThreads]T
}

func NewGlVal[T any]() *GlVal[T] {
	return &GlVal[T]{}
}

We read the goroutine local variable by indexing into the table:

func GlValGet[T any](g *GlVal[T]) T {
	return g.values[ReadMark()]
}

We write the goroutine-local variable by writing to the table. We return the old value so that we can restore it later. Note that no mutex is needed because each synthetic continuation has its own slot in the table:

func GlValSet[T any](g *GlVal[T], value T) T {
	mark := ReadMark()
	oldValue := g.values[mark]
	g.values[mark] = value
	return oldValue
}

When we want to dynamically bind the variable, we save the old value to a frame local variable, set the new value, and defer a function to restore the old value:

func GlValBind[T, U any](g *GlVal[T], value T, thunk func() U) U {
	oldValue := GlValSet(g, value)
	defer GlValSet(g, oldValue)
	return thunk()
}

This technique seems reasonably robust despite using the debugging support for its implementation. It will fail if they ever make golang tail-recursive, but I doubt that will happen any time soon.

Joe Marshall: I Don’t Use Monads

· 6 days ago

I consider myself a “mostly functional” programmer. I think about programs in terms of data flow and transformations and mappings from input to output. I avoid side effects and mutable state where it isn’t necessary. But I’m not a purist. I’m happy to setf an instance slot when it makes sense.

Monads are the darling of the functional programming community. To hear it told, they are the key to understanding the universe. They provide a way to encapsulate side effects and mutable state in a pure functional way.

But I don’t often use them.

There are a few reasons for this. I don’t work in Haskell, where monads are a ubiquitous part of the language. If you use Haskell, you’re going to use monads. But what about the languages that I do use? Monads of course “work” in any language that supports higher-order functions, but they are not always idiomatic or the obvious way to accomplish a task.

The primary use of monads is to mimic the look and feel of an imperative language, with its sequential execution, side effects and mutable state, in a language that is purely functional. Monads are a way to implement progn and setf in Haskell. But Lisp already has progn and setf.

And an imperative programming style is largely inferior to a functional programming style. I prefer not to think imperatively or write imperative code. I've seen many Haskell programs that used monads just so they could use do notation and write code that looked like C. That seems like a step backwards.

Monads are complicated. They essentially come down to functional composition of curried functions hidden behind some syntactic sugar. So long as you don’t have to look under the hood, they are reasonably easy to use. But when you do have to look under the hood, you’ll find a maze of twisty, turny, higher-order procedures. In Lisp, a well placed setf can cut through this functional Gordian knot.

vindarel: CLOS tutorial: I published 9 videos (1h 22min) on my course. You'll know enough to read the sources of Hunchentoot or the Kandria game 🎥 ⭐

· 12 days ago

This is a follow-up from yesterday’s post on reddit and an announcement I had wanted to make since this summer: I created 9 videos on CLOS, for a total of 1 hour and 22 minutes, in which you learn what I detail below. You can watch the course and subscribe here (Christmas coupon) and learn more on GitHub. The whole course is made of 51 videos divided into 9 chapters, for a total of 7 hours and 12 minutes. It is rated 4.71 / 5 as of this date (thank you!!).

Yesterday was a great day because I received nice feedback:

It is an amazing tutorial. What is really strange is I thought CLOS was complicated. I guess it can be but [Vincent] is amazing at explaining everything and demystifying it.

   /u/intergalactic_llama

🔥 I appreciate any (constructive ;) ) feedback and positive ones a lot.

Oh hey you made that tutorial. I started it but then got distracted by other stuff, been meaning to restart it and make my way through the whole thing. Really liked what I went through (I was on video 12 about redefining functions locally etc).

   /u/runevault

Look, other recent feedback on my course:

I have done some preliminary Common Lisp exploration prior to this course but had a lot of questions regarding practical use and development workflows. This course was amazing for this! I learned a lot of useful techniques for actually writing the code in Emacs, as well as conversational explanations of concepts that had previously confused me in text-heavy resources. Please keep up the good work and continue with this line of topics, it is well worth the price!

   Preston, October 2024

 

The instructor shows lots of tricks.

   Tom, November 2024

 

Excellent selection of content. The delivery is not always obvious just for watching, but when I do the examples, it's absolutely clear that what I need to be learning has been presented.

   Steven, November 2024 <3


Chapter content

1. defclass, make-instance, slots... aka CLOS crash course, part 1. This one is free to watch 🆓

We see in more details: defclass, make-instance, attributes (aka slots), slot options (initarg, initform, reader, writer, accessor, documentation), slot-value, generic functions, defmethod, dispatching on built-in types, how objects are lazily updated, Slime inspector actions, manipulating Slime presentations, unbound slots and slot-boundp, Slime shortcuts to create objects...

We see a LOT already in this video, in an efficient way (way more efficient than when I learned anyways), so if you’re on a budget you can start with it (it’s free to watch) and complement with the Cookbook, and the other free books. Also if you are a student shoot me an email (and avoid the reddit chat, I don’t see the notifications, sorry about that).

1b. Quiz: CLOS crash test

There is a small quiz. Keep in mind that the Udemy platform doesn’t support any Lisp language so I can’t put any live coding exercises, but we can read code.

2. Inheritance, multimethods, around, before and after methods... aka CLOS crash course, part 2

What we see more precisely: inheritance, multimethods, :around, :before and :after methods (think signals and overriding default methods in other languages, which allow you to control what happens when a method is called, if it is called at all), their order of execution, a Slime shortcut to export all symbols of a class at once...

3. Pretty printing

We see how to change the default printed representation of objects.

What we see: print-object, with print-unreadable-object, the object type, the object identity, classic gotchas.

You know, normally an object is printed unreadably, as

#<ROBOT {1005CEBD03}>

(guess what AOC day I am at)

and we can use the print-object method to print it however we like, such as

#<ROBOT x: 47 y: 14 {1005CEBD03}>

4. defclass review

We give another pass, slower, to defclass, slot options, make-instance, and to the fact that accessors are generic functions.

You can skip this one if the crash course was crystal clear.

5. Custom constructors and custom logic.

What we see: writing our own “make-person” terse constructor. Adding some logic before the object creation, doing side-effects after the object creation: towards initialize-instance.

6. initialize-instance: control if and how any objects are created

What we see: defining a :before and an :after method of initialize-instance for our person class, in order to do the same logic as with our custom constructor, but with a built-in CL Object System mechanism. Note that using INITIALIZE-INSTANCE isn’t a must, only a “can”, that you can use for your own classes, or to control the creation of objects from other systems.

7. Multiple inheritance

What we see: how to inherit from multiple parent classes, and who takes precedence when the parents each define the same slot with a default value. Quick illustration. We use what is known as a mixin class to add functionality to our class.

8. defgeneric vs defmethod: when to use which, which is better?

What we see: the use of defgeneric and defmethod, either separately or together. defgeneric has a couple of advantages with regard to documentation and keeping your code in sync with your image.

9. Class allocation

What we see: the default :allocation :instance VS :allocation :class. How to automatically count how many objects of a class are created.

8b. Quiz: reading code from real-world projects.

Outcome of the chapter

There were a lot of choices to make and advanced topics to ignore for this first chapter on CLOS. What drove my choices was looking at real-world code out there. As a result, by the end of this chapter, you will know enough to read real-world Common Lisp projects such as the Hunchentoot web server or the Kandria game. Bravo!

Closing words

First of all, thank you for your encouragements, and to everyone who took the course or who shared it!

Today I’d like to answer my past me, a newcomer to Lisp on a budget: why create a paid course? First of all, I still contribute to the Cookbook, a collaborative resource. It’s not “free or paid” resources, it’s both. Then, preparing and recording structured videos takes so much time that I wouldn’t keep up this continuous effort if I didn’t have the ambition to make a non-ridiculous hourly rate on them one day. Disclaimer: it isn’t the case yet. Maybe next year, depending on how many videos I release ;) I can pay my rent with them once every few months though, that’s cool. Rest assured I’m not a millionaire. I’m on my own projects and I don’t have a fixed (or big) income. So your contribution or sponsorship counts, if only for the good vibes that push me to spend more and more time on my growing list of projects.

You can sponsor other lispers too.

Thank you and happy lisping.

Lispjobs: Mid/Senior Clojure Developers | Akosweb | Latam

· 13 days ago

Job posting: https://forms.gle/tWSRKLKDJkGXTLTG6

Looking for mid/senior-level Clojure developers who are experienced, self-managing, and ready to hit the ground running.

You will need to work on US Central Time (CST).

What You'll Be Doing:

  • Join an existing team on an active project to boost velocity and help meet goals.
  • Work directly with our client's project team (within one of their departments).
  • Follow their processes and systems—this isn't a project we're managing directly.
  • Collaborate with developers, adapt to their workflows, and bring your expertise to the table.
  • You will need to work US Central Time hours, likely 8am – 5pm (UTC-06:00)
  • You will need to have good English

What We Offer:

  • Project Duration: At least 6 months, likely to extend to 1 year or longer.
  • Full-time role: You will need to track your hours for transparent reporting and payment. We’re developers ourselves, so we understand this can be a pain, but we like to be as transparent as possible with the clients, in favor of a long-term relationship.
  • Support: We'll handle account management with the client and ensure you're paid on time.

What We Expect:

  • Experience Level: Only mid/senior developers—no exceptions.
  • Fluent English: Strong speaking and communication skills for working with the client's team.
  • Interviews: There will be 3 interview rounds for these roles: (1) a screening interview with our team, (2) a 30-minute chat with the Project Director, and (3) a 1-hour interview with the team developers.
  • Ongoing Evaluations: Weekly check-ins at the start, moving to bi-weekly later. This will also be a chance for you to share feedback, ensure you're happy, and confirm the role is a good fit.

If you're a skilled Clojure developer looking for your next role, apply today! 

We need to hire multiple Clojure Developers for this role, please let us know if you have any friends or colleagues who'd like to join the team too.

Joe Marshall: Does CONS take its arguments in the wrong order?

· 15 days ago

Object-oriented languages often come with a “collections” library that provides a variety of data structures for storing collections of objects. Methods on collections usually pass the collection as the first argument for two reasons: (1) the collection is the primary object being operated on, and (2) in many languages, method dispatch only works on the first argument.

In Lisp, however, lists are a collection type that is derived from a more primitive data type, the cons cell. Lists are not fully abstracted from cons cells and many list operations are actually defined as operations on cons cells. Rather than the list argument always coming first, there is little rhyme or reason to the order of arguments in list operations. This becomes evident when you try to use higher-order operations on lists.

cons constructs a cons cell, with two fields, a car and a cdr. The car is traditionally written to the left of the cdr, and cons takes the car as the first argument.

In Lisp, there is no special constructor for lists. The user directly constructs the underlying cons cell and simply “declares in his head” that the result is a list. Hence when we are extending a collection with an item, the collection is the second argument to cons rather than the first. In a “modern” curly-brace, object-oriented language, we would extend mylist by writing mylist.cons(item)

Here’s where it makes a difference:

The fold-left function iterates down a list and calls a combining function on each element of the list. The first argument to the combining function is the accumulated answer, the second is the element being accumulated. If we want to accumulate into a list, the accumulated list is passed first and the new item is passed second. This means that cons is not suitable as the combining function in fold-left because the arguments come in the wrong order:

(defun bad-reverse (list)
  (fold-left #'cons '() list)) ;; does not reverse a list

(defun xcons (cdr car)  ;; takes original list first
  (cons car cdr))

(defun reverse (list)
  (fold-left #'xcons '() list)) ;; correctly reverses a list

Many list operations, remove and adjoin for example, take their arguments in an inconvenient order for fold-left.
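
For instance, to accumulate a set you need the same kind of argument-swapping shim as xcons (a sketch, assuming the same fold-left as above; the names are made up):

(defun xadjoin (set item)      ;; takes the accumulated set first
  (adjoin item set))

(defun list->set (list)
  (fold-left #'xadjoin '() list))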

Joe Marshall: How to Blow an Interview

· 20 days ago

I recently had to interview a candidate. He had passed earlier interviews, had done well, and was in the final running. My only task was to determine his coding ability.

I checked his resume and saw that he listed Python among his computer language skills, so I decided to pitch him a Python program. I wrote this simple skeleton of a Tic-Tac-Toe program:

class TicTacToeBoard:
    def __init__(self):
        pass

    def __str__(self):
        pass

    def make_move(self, row, col, player):
        pass

    def is_winner(self, player) -> bool:
        return False

    def is_full(self) -> bool:
        return False

if __name__ == '__main__':
    board = TicTacToeBoard()
    print(board)
    board.make_move(0, 0, 'X')
    board.make_move(0, 1, 'O')
    board.make_move(1, 1, 'X')
    board.make_move(1, 0, 'O')
    board.make_move(2, 2, 'X')
    print(board)
    print(board.is_winner('X'))
    print(board.is_winner('O'))
    print(board.is_full())

I invited him to “go to town” on the code. I didn’t care what he implemented or how he implemented it, I just wanted to watch him code.

Alas, he didn’t. He studied the code for a few minutes, moved the cursor around, and then admitted that it had been a while since he had written any significant Python code and he was a bit rusty. He was more used to Java.

That’s fine. I asked him to write the code in Java. I apologized for not having a Java skeleton, but I was sure he could write something.

He didn’t get very far. He wrote a class declaration, but was stumped on any method definitions. He admitted that he was pretty rusty in Java, too, and wanted to switch back to Python.

I didn’t care, I just wanted to see some code. He asked if the job was going to require a lot of coding. I said that it would definitely require some coding as part of the job was analyzing legacy code and identifying and fixing security bugs.

After half an hour, he still hadn’t written even a single line of working code in any language. I felt bad for him and asked him about the other experiences on his resume. They looked good, but a little digging showed that his experience was a bit shallow. I thanked him for his time and told him that the recruiter would be in touch.

I gave him a solid “no”. He just would not have been a good fit. The job required coding skills that he simply did not have.

I don’t understand it. I would have thought that the ability to write some code was a basic requirement for any job involving computers. But I guess not.


Nutanix is hiring. It’s a pretty good gig, so if you see something that interests you, apply. But be prepared to write some code.

vindarel: Three web views for Common Lisp: build cross platform GUIs with Electron, WebUI or CLOG Frame

· 22 days ago

Do you dream of building a cross-platform GUI in Common Lisp? It’s now easy with web views.

Honestly GUIs are a difficult topic. Add in “cross platform” and you can spend your life trying out different solutions and hesitating between the best one for Common Lisp. It’s doable: Tk, Gtk3 and Gtk4, Qt4 and Qt5, CAPI (LispWorks), IUP, Nuklear, Cocoa, McCLIM, Garnet, Alloy, Java Swing... what can of worms do you want to open?

The situation improved in the last years thanks to lispers writing new bindings. So it’s possible you’ll find one that works for your needs. That’s great, but now you have to learn the GUI framework :p

If like me you already know the web, are developing a web app, and would like to ship a desktop application, web views make it easy. I know the following ones, listed from my least favourite to my most favourite.


Electron

Electron is heavy, but really cross-platform, and it has many tools around it. It allows you to build releases for the three major OSes from your development machine, its ecosystem has tools to handle updates, etc.

Advice: study it before discarding it.

Ceramic (old but works)

Ceramic is a set of utilities around Electron to help you build an Electron app: download the npm packages, open a browser window, etc.

Here’s its getting started snippet:

;; Start the underlying Electron process
(ceramic:start)
;; ^^^^^ this here downloads ±200MB of node packages under the hood.

;; Create a browser window
(defvar window (ceramic:make-window :url "https://www.google.com/"
                                    :width 800
                                    :height 600))

;; Show it
(ceramic:show window)

When you run (ceramic:bundle :ceramic-hello-world) you get a .tar file with your application, which you can distribute. Awesome!

But what if you don’t want to redirect to google.com but open your own app? You just build your web app in CL, run the webserver (Hunchentoot, Clack...) on a given port, and you’ll open localhost:[PORT] in Ceramic/Electron. That’s it.
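
For example, something along these lines should work (a sketch, assuming a Hunchentoot app on a hypothetical port 4242; adapt it to your own web stack):

;; Serve our own app instead of google.com.
(defvar *acceptor*
  (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 4242)))

(ceramic:start)

(defvar *window*
  (ceramic:make-window :url "http://localhost:4242/"
                       :width 800
                       :height 600))

(ceramic:show *window*)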

Ceramic hasn’t been updated in five years as of this writing, and it downloads an outdated version of Electron by default (see (defparameter *electron-version* "5.0.2")), but you can change the version yourself.

The new Neomacs project, a structural editor and web browser, is a great modern example of how to use Ceramic. Give it a look and give it a try!

What Ceramic actually does is abstracted away in the CL functions, so I don’t think it is the best way to start. We can do without it to understand the full process; here’s how.

Electron from scratch

Here’s our web app embedded in Electron:

Our steps are the following: build the Lisp web app as a self-contained binary, write a main.js that starts this binary and opens a browser window on its port, and run everything with npm (and eventually bundle it as an Electron release).

You can also run the Lisp web app from sources, of course, without building a binary, but then you’ll have to ship all the lisp sources.

main.js

The most important file to an Electron app is the main.js. The one we show below does the following:

  • it starts Electron
  • it starts our web application on the side, as a child process, from a binary name, and a port.
  • it shows our child process’ stdout and stderr
  • it opens a browser window to show our app, running on localhost.
  • it handles the close event.

Here’s our version.

console.log(`Hello from Electron 👋`)

const { app, BrowserWindow } = require('electron')

const { spawn } = require('child_process');

// FIXME Suppose we have our app binary at the current directory.

// FIXME This is our hard-coded binary name.
var binaryPaths = [
    "./openbookstore",
];

// FIXME Define any arg required for the binary.
// This is very specific to the one I built for the example.
var binaryArgs = ["--web"];

const binaryapp = null;

const runLocalApp = () => {
    "Run our binary app locally."
    console.log("running our app locally...");
    const binaryapp = spawn(binaryPaths[0], binaryArgs);
    return binaryapp;
}

// Start an Electron window.

const createWindow = () => {
  const win = new BrowserWindow({
    width: 800,
    height: 600,
  })

  // Open localhost on the app's port.
  // TODO: we should read the port from an environment variable or a config file.
  // FIXME hard-coded PORT number.
  win.loadURL('http://localhost:4242/')
}

// Run our app.
let child = runLocalApp();

// We want to see stdout and stderr of the child process
// (to see our Lisp app output).
child.stdout.on('data', (data) => {
  console.log(`stdout:\n${data}`);
});

child.stderr.on('data', (data) => {
  console.error(`stderr: ${data}`);
});

child.on('error', (error) => {
  console.error(`error: ${error.message}`);
});

// Handle Electron close events.
child.on('close', (code) => {
  console.log(`openbookstore process exited with code ${code}`);
});

// Open it in Electron.
app.whenReady().then(() => {
    createWindow();

    // Open a window if none are open (macOS)
    if (process.platform == 'darwin') {
        app.on('activate', () => {
            if (BrowserWindow.getAllWindows().length === 0) createWindow()
        })
    }
})


// On Linux and Windows, quit the app when all windows are closed.
app.on('window-all-closed', () => {
    if (process.platform !== 'darwin') {
        app.quit();
    }
})

Run it with npm run start (you also have an appropriate package.json), and this gets you the previous screenshot.

JS and Electron experts, please criticize and build on it.

Missing parts

We didn’t fully finish the example: we need to automatically bundle the binary into the Electron release.

Then, if you want to communicate from the Lisp app to the Electron window, and the other way around, you’ll have to use the JavaScript layers. Ceramic might help here.

(this section was first released here: https://dev.to/vindarel/common-lisp-gui-with-electron-how-to-28fj)

What about Tauri?

Bundling an app with Tauri will, AFAIK (I just tried quickly), involve the same steps as with Electron. Tauri might still have fewer tools for it.

WebUI

WebUI is a new kid in town. It is in development, it has bugs. You can view it as a wrapper around a browser window (or webview.h).

However it is lightweight, it is easy to build, and we have Lisp bindings.

A few more words about it:

Use any web browser or WebView as GUI, with your preferred language in the backend and modern web technologies in the frontend, all in a lightweight portable library.

  • written in pure C
  • one header file
  • multi-platform & multi-browser
  • cross-platform webview
  • we can call JS from Common Lisp, and call Common Lisp from JS.

Think of WebUI like a WebView controller, but instead of embedding the WebView controller in your program, which makes the final program big in size, and non-portable as it needs the WebView runtimes. Instead, by using WebUI, you use a tiny static/dynamic library to run any installed web browser and use it as GUI, which makes your program small, fast, and portable. All it needs is a web browser.

your program will always run on all machines, as all it needs is an installed web browser.

Sounds compelling right?

The other good news is that Common Lisp was one of the first languages it got bindings for. How it happened: I was chatting on Discord, mentioned WebUI and BAM! @garlic0x1 developed bindings:

thank you so much! (@garlic0x1 has more cool projects on GitHub you can browse. He’s also a contributor to Lem)

Here’s a simple snippet:

(defpackage :webui/examples/minimal
  (:use :cl :webui)
  (:export :run))
(in-package :webui/examples/minimal)

(defun run ()
  (let ((w (webui-new-window)))
    (webui-show w "<html>Hello, world!</html>")
    (webui-wait)))

I would be the happiest lisper in the world if I didn’t have an annoying issue. See #1. I can run my example just fine, but nothing happens the second time :/ I don’t know if it’s a WebUI thing, the bindings, my system, my build of WebUI... so I’ll give this more time.

Fortunately though, the third solution of this blog post is my favourite! o/

CLOG Frame (webview.h for all)

CLOG Frame is part of the CLOG framework. However, it is not tied to CLOG... nor to Common Lisp!

CLOG Frame is a short C++ program that builds an executable that takes a URL and a PORT as CLI parameters and opens a webview.h window.

It’s easy to build and works just fine.

It’s a great approach. We don’t need to develop CFFI bindings for webview.h. However such bindings would still be nice to have. I did a cursory search and didn’t find a project that seems to work. But please don’t take my word on it. Do you want to try this latest cl-webview, or have a go at the bindings?

Back to our matter.

This is CLOG Frame: 20 lines!

#include <iostream>
#include <sstream>
#include <string>
#include "webview.h"

int main(int argc,char* argv[]) {
  webview::webview w(true, nullptr);
  webview::webview *w2 = &w;
  w.set_title(argv[1]);
  w.set_size(std::stoi(argv[3]), std::stoi(argv[4]), WEBVIEW_HINT_NONE);
  w.bind("clogframe_quit", [w2](std::string s) -> std::string {
    w2->terminate();
    return "";
  });
  std::ostringstream o;
  o << "http://127.0.0.1:" << argv[2];
  w.navigate(o.str());
  w.run();
  return 0;
}

Compile it on GNU/Linux like this and don’t you worry, it takes a fraction of a second:

c++ clogframe.cpp -ldl `pkg-config --cflags --libs gtk+-3.0 webkit2gtk-4.0` -o clogframe

(see its repo for other platforms)

This gives you a clogframe binary. Put it in your $PATH or remember its location. It’s just a short C++ binary, so it weighs 197 KB.

Now, back to your web app that you wrote in Common Lisp and that is waiting to be shipped to users.

Start your web app. Say it is started on port 4284.

From the Lisp side, open a CLOG Frame window like this

(uiop:launch-program (list "./clogframe"
                           "Admin"
                           (format nil "~A/admin/" 4284)
                           ;; window dimensions (strings)
                           "1280" "840"))

and voilà.

A CLOG Frame window showing a WIP Common Lisp web app on top of Emacs.

Now for the cross-platform part, you’ll need to build clogframe and your web app on the target OS (like with any CL app). Webview.h is cross-platform. Leave us a comment when you have a good CI setup for the three main OSes (I am studying 40ants/ci and make-common-lisp-program for now).

CLOG Frame should be a (popular) project on its own IMO! (@dbotton might make it independent eventually)

Please share any experience you might have on the topic 👍

Patrick Stein: Ray Tracing Extra-dimensional CSG Objects

· 32 days ago

This month, I added CSG (Constructive Solid Geometry) operators to the ray-tracer that I mentioned in the previous post. I added intersections, complements, and unions.

You can find the source code in my weekend-raytracer github repo.

Rendering of the union of three 5d-balls.

The post Ray Tracing Extra-dimensional CSG Objects first appeared on nklein software.

vindarel: cl-ansi-term: print tables with style, and other script utilities

· 40 days ago

I am not the original author of cl-ansi-term, but I revived it lately. In particular, I added useful stuff to print data in tables:

  • print list of lists (where the first one is the list of headers)
  • print horizontal or vertical tables
    • the header keys are either the first row or the first column
  • print hash-tables, plists, alists
  • filter keys to display (include, exclude)
  • limit the number of columns
  • they can be styled:
    • with or without borders
    • choose the columns’ width
    • choose the borders’ elements (“-|+”)
    • choose the headers’ and the cells’ style (color, bold...).

For example:

(progn
  (defparameter d (serapeum:dict :a 1.1 :b 2.2 :c 3.3))

  (banner "A single hash-table")
  (table d)

  (banner "A single hash-table, in columns")
  (vtable d)

  (banner "A single hash-table, ignoring column :B")
  (table d :exclude :b)

  (banner "A single hash-table, vertically ignoring column :B")
  (vtable d :exclude :b)

  (banner "A list of hash-tables")
  (table (list d d d))

  (banner "A list of hash-tables, ignoring column :B")
  (table (list d d d) :keys '(:a :c))

  (banner "A list of hash-tables, in columns")
  (vtable (list d d d))

  (banner "same, ignoring the column :b")
  (vtable (list d d d) :exclude :b))

prints:

--------------------------------------------------------------------------------
     A single hash-table
--------------------------------------------------------------------------------


+---------+---------+---------+
|A        |B        |C        |
+---------+---------+---------+
|1.1      |2.2      |3.3      |
+---------+---------+---------+

--------------------------------------------------------------------------------
     A single hash-table, in columns
--------------------------------------------------------------------------------


+---------+---------+
|A        |1.1      |
+---------+---------+
|B        |2.2      |
+---------+---------+
|C        |3.3      |
+---------+---------+

--------------------------------------------------------------------------------
     A single hash-table, ignoring column :B
--------------------------------------------------------------------------------


+---------+---------+
|A        |C        |
+---------+---------+
|1.1      |3.3      |
+---------+---------+

--------------------------------------------------------------------------------
     A single hash-table, vertically ignoring column :B
--------------------------------------------------------------------------------


+---------+---------+
|A        |1.1      |
+---------+---------+
|C        |3.3      |
+---------+---------+

--------------------------------------------------------------------------------
     A list of hash-tables
--------------------------------------------------------------------------------


+---------+---------+---------+
|A        |B        |C        |
+---------+---------+---------+
|1.1      |2.2      |3.3      |
+---------+---------+---------+
|1.1      |2.2      |3.3      |
+---------+---------+---------+
|1.1      |2.2      |3.3      |
+---------+---------+---------+

--------------------------------------------------------------------------------
     A list of hash-tables, ignoring column :B
--------------------------------------------------------------------------------


+---------+---------+
|A        |C        |
+---------+---------+
|1.1      |3.3      |
+---------+---------+
|1.1      |3.3      |
+---------+---------+
|1.1      |3.3      |
+---------+---------+

--------------------------------------------------------------------------------
     A list of hash-tables, in columns
--------------------------------------------------------------------------------


+---------+---------+---------+---------+
|A        |1.1      |1.1      |1.1      |
+---------+---------+---------+---------+
|B        |2.2      |2.2      |2.2      |
+---------+---------+---------+---------+
|C        |3.3      |3.3      |3.3      |
+---------+---------+---------+---------+

--------------------------------------------------------------------------------
     same, ignoring the column :b
--------------------------------------------------------------------------------


+---------+---------+---------+---------+
|A        |1.1      |1.1      |1.1      |
+---------+---------+---------+---------+
|C        |3.3      |3.3      |3.3      |
+---------+---------+---------+---------+

or again

TERM> (table (list d d d) :exclude :b  :border-style nil)
A         C
1.1       3.3
1.1       3.3
1.1       3.3

Real example

Remember the scripts I use in production. I’m usually fine with big data output in the REPL, until:

  • I want a cleaner output in the production script, so I can see more quickly what’s going on.
  • I want to filter and study the data a bit more.

In this case I extract data from my DB and I get a list of plists:

((:|isbn| "3760281971082" :|quantity| -1 :|price| 12.8d0 :|vat| NIL
  :|distributor| NIL :|discount| NIL :|type_name| NIL :|type_vat| NIL
  :|price_bought| NIL :|price_sold| 12.8d0 :|quantity_sold| 1 :|sold_date|
  "2024-04-03 09:27:12")
 (:|isbn| "9791094298169" :|quantity| 4 :|price| 15.0d0 :|vat| NIL
  :|distributor| NIL :|discount| NIL :|type_name| "book" :|type_vat| NIL
  :|price_bought| NIL :|price_sold| 15.0d0 :|quantity_sold| 1 :|sold_date|
  "2024-04-03 10:06:58")
 ...)

With the table and vtable functions, I can explore data in a clearer fashion.

(uiop:add-package-local-nickname :sera :serapeum)
(term:table (sera:take 15 *sells*)
                          :keys '(:|isbn| :|quantity| :|price|)
                          :plist t
                          :column-width '(15 10 10))
+--------------+---------+---------+
|isbn          |quantity |price    |
+--------------+---------+---------+
|3760281971082 |-1       |12.8d0   |
+--------------+---------+---------+
|9791094298169 |4        |15.0d0   |
+--------------+---------+---------+
|3700275724249 |-126     |2.8d0    |
+--------------+---------+---------+
|9782372600842 |1        |10.0d0   |
+--------------+---------+---------+
|9782372600736 |0        |10.0d0   |
+--------------+---------+---------+
|9782221256770 |1        |19.0d0   |
+--------------+---------+---------+
|3700275734392 |171      |3.95d0   |
+--------------+---------+---------+
|3662846007789 |2        |16.95d0  |
+--------------+---------+---------+
|9782368292907 |1        |8.95d0   |
+--------------+---------+---------+
|9782095022679 |1        |12.95d0  |
+--------------+---------+---------+
|3662846007871 |5        |5.9d0    |
+--------------+---------+---------+
|9782092588949 |2        |5.95d0   |
+--------------+---------+---------+
|3700275724249 |-126     |2.8d0    |
+--------------+---------+---------+
|3700275734392 |171      |3.95d0   |
+--------------+---------+---------+
|3770017095135 |0        |29.99d0  |
+--------------+---------+---------+

Yes, this calls for more features: align the numbers, automatically adapt the cells’ width (DONE), style cells individually (DONE), etc.

(I’m sure we could have an explorer window, watching for changes, displaying data in a real table with interactive features... I can feel we’re close... CLOG frame and malleable systems someone?)

Use case and other primitives: title, banner, vspace, o-list

The use case is cleaner output for scripts.

Other libraries exist with other goals.

Here are some of cl-ansi-term’s other utilities:

ordered and un-ordered lists:

(term:o-list '((:one one-a (:one-b :one-b-1 :one-b-2)) :two))
1. ONE
   1. ONE-A
   2. ONE-B
      1. ONE-B-1
      2. ONE-B-2
2. TWO

Horizontal lines

(term:hr :filler "=")
================================================================================

Printing stuff, aligned on screen:

(term:cat-print '(:abc :def :ghi) :align :center)
;; =>

                                   ABCDEFGHI

vspace for vertical space (default: 3 newlines)

banner:

(banner "My title" :space 1)

--------------------------------------------------------------------------------
     My title
--------------------------------------------------------------------------------


Stylesheets and colorized text

The library allows you to use styles.

Start by defining your stylesheet.

(term:update-style-sheet
 '((:header :cyan   :underline)
   (:mark   :red    :reverse)
   (:term   :yellow :bold)))

:header, :mark and :term are now your own vocabulary. Anytime you use functions that accept a style, reference them.

Example:

(term:table (list '(:name :age) '(:me 7)) :header-style :header)
data printed in tables, with colors.

To see colors in a “dumb” terminal like in Emacs Slime, install the package slime-repl-ansi-color, “require” it, and enable it with M-x slime-repl-ansi-color-mode.

You can also disable styles in non-interactive terminals with term::*enable-effects-on-dumb-terminals*.

Happy lisping.

TurtleWare: Dynamic Vars - Return of the Jedi

· 58 days ago

Table of Contents

  1. The protocol
  2. Control operators
  3. Synchronized hash tables with weakness
  4. First-class dynamic variables
    1. STANDARD-DYNAMIC-VARIABLE
    2. SURROGATE-DYNAMIC-VARIABLE
  5. Thread-local variables
    1. The protocol
    2. The implementation
  6. Thread-local slots
  7. What can we use it for?

In the previous two posts I've presented an implementation of first-class dynamic variables using PROGV and a surrogate implementation for SBCL.

Now we will double down on this idea and make the protocol extensible. Finally we'll implement a specialized version of dynamic variables where even the top level value of the variable is thread-local.

The protocol

Previously we've defined operators as either macros or functions. Different implementations were protected by a feature flag, and their symbols collided. Now we will introduce a protocol composed of a common superclass and functions that are specialized by particular implementations.

Most notably we will introduce a new operator CALL-WITH-DYNAMIC-VARIABLE that is responsible for establishing a single binding. Thanks to that it will be possible to mix dynamic variables of different types within a single DLET statement.

(defclass dynamic-variable () ())

(defgeneric dynamic-variable-bindings (dvar))
(defgeneric dynamic-variable-value (dvar))
(defgeneric (setf dynamic-variable-value) (value dvar))
(defgeneric dynamic-variable-bound-p (dvar))
(defgeneric dynamic-variable-makunbound (dvar))
(defgeneric call-with-dynamic-variable (cont dvar &optional value))

Moreover we'll define a constructor that is specializable by a key. This design will allow us to refer to the dynamic variable class by a shorter name. We will also define the standard class to be used and a matching constructor.

(defparameter *default-dynamic-variable-class*
  #-fake-progv-kludge 'standard-dynamic-variable
  #+fake-progv-kludge 'surrogate-dynamic-variable)

(defgeneric make-dynamic-variable-using-key (key &rest initargs)
  (:method (class &rest initargs)
    (apply #'make-instance class initargs))
  (:method ((class (eql t)) &rest initargs)
    (apply #'make-instance *default-dynamic-variable-class* initargs))
  (:method ((class null) &rest initargs)
    (declare (ignore class initargs))
    (error "Making a dynamic variable that is not, huh?")))

(defun make-dynamic-variable (&rest initargs)
  (apply #'make-dynamic-variable-using-key t initargs))

Control operators

Control operators are the same as previously, that is, a set of four macros that consume the protocol specified above. Note that DYNAMIC-VARIABLE-PROGV expands to a recursive call where each binding is processed separately.

(defmacro dlet (bindings &body body)
  (flet ((pred (binding)
           (and (listp binding) (= 2 (length binding)))))
    (unless (every #'pred bindings)
      (error "DLET: bindings must be lists of two values.~%~
              Invalid bindings:~%~{ ~s~%~}" (remove-if #'pred bindings))))
  (loop for (var val) in bindings
        collect var into vars
        collect val into vals
        finally (return `(dynamic-variable-progv (list ,@vars) (list ,@vals)
                           ,@body))))

(defmacro dset (&rest pairs)
  `(setf ,@(loop for (var val) on pairs by #'cddr
                 collect `(dref ,var)
                 collect val)))

(defmacro dref (variable)
  `(dynamic-variable-value ,variable))

(defun call-with-dynamic-variable-progv (cont vars vals)
  (flet ((thunk ()
           (if vals
               (call-with-dynamic-variable cont (car vars) (car vals))
               (call-with-dynamic-variable cont (car vars)))))
    (if vars
        (call-with-dynamic-variable-progv #'thunk (cdr vars) (cdr vals))
        (funcall cont))))

(defmacro dynamic-variable-progv (vars vals &body body)
  (let ((cont (gensym)))
    `(flet ((,cont () ,@body))
       (call-with-dynamic-variable-progv (function ,cont) ,vars ,vals))))

Synchronized hash tables with weakness

Previously we've used SBCL-specific options to define a synchronized hash table with weak keys. This won't do anymore, because we will need a similar object to implement the thread-local storage for top level values.

trivial-garbage is a portability layer that allows us to define hash tables with a specified weakness, but it does not provide an argument that would abstract away synchronization. We will ensure thread safety with locks instead.

(defclass tls-table ()
  ((table :initform (trivial-garbage:make-weak-hash-table
                     :test #'eq :weakness :key))
   (lock :initform (bt:make-lock))))

(defun make-tls-table ()
  (make-instance 'tls-table))

(defmacro with-tls-table ((var self) &body body)
  (let ((obj (gensym)))
    `(let* ((,obj ,self)
            (,var (slot-value ,obj 'table)))
       (bt:with-lock-held ((slot-value ,obj 'lock)) ,@body))))

First-class dynamic variables

STANDARD-DYNAMIC-VARIABLE

Previously in the default implementation we've represented dynamic variables with a symbol. The new implementation is similar except that the symbol is read from a STANDARD-OBJECT that represents the variable. This also enables us to specialize the function CALL-WITH-DYNAMIC-VARIABLE:

(defclass standard-dynamic-variable (dynamic-variable)
  ((symbol :initform (gensym) :accessor dynamic-variable-bindings)))

(defmethod dynamic-variable-value ((dvar standard-dynamic-variable))
  (symbol-value (dynamic-variable-bindings dvar)))

(defmethod (setf dynamic-variable-value) (value (dvar standard-dynamic-variable))
  (setf (symbol-value (dynamic-variable-bindings dvar)) value))

(defmethod dynamic-variable-bound-p ((dvar standard-dynamic-variable))
  (boundp (dynamic-variable-bindings dvar)))

(defmethod dynamic-variable-makunbound ((dvar standard-dynamic-variable))
  (makunbound (dynamic-variable-bindings dvar)))

(defmethod call-with-dynamic-variable (cont (dvar standard-dynamic-variable)
                                       &optional (val nil val-p))
  (progv (list (dynamic-variable-bindings dvar)) (if val-p (list val) ())
    (funcall cont)))

SURROGATE-DYNAMIC-VARIABLE

The implementation of the SURROGATE-DYNAMIC-VARIABLE is almost the same as previously. The only difference is that we use the previously defined indirection to safely work with hash tables. Also note that we do not add the feature condition here - both classes are always defined.

(defvar +fake-unbound+ 'unbound)
(defvar +cell-unbound+ '(no-binding))

(defclass surrogate-dynamic-variable (dynamic-variable)
  ((tls-table
    :initform (make-tls-table)
    :reader dynamic-variable-tls-table)
   (top-value
    :initform +fake-unbound+
    :accessor dynamic-variable-top-value)))

(defmethod dynamic-variable-bindings ((dvar surrogate-dynamic-variable))
  (let ((process (bt:current-thread)))
    (with-tls-table (tls-table (dynamic-variable-tls-table dvar))
      (gethash process tls-table +cell-unbound+))))

(defmethod (setf dynamic-variable-bindings) (value (dvar surrogate-dynamic-variable))
  (let ((process (bt:current-thread)))
    (with-tls-table (tls-table (dynamic-variable-tls-table dvar))
      (setf (gethash process tls-table) value))))

(defun %dynamic-variable-value (dvar)
  (let ((tls-binds (dynamic-variable-bindings dvar)))
    (if (eq tls-binds +cell-unbound+)
        (dynamic-variable-top-value dvar)
        (car tls-binds))))

(defmethod dynamic-variable-value ((dvar surrogate-dynamic-variable))
  (let ((tls-value (%dynamic-variable-value dvar)))
    (when (eq tls-value +fake-unbound+)
      (error 'unbound-variable :name "(unnamed)"))
    tls-value))

(defmethod (setf dynamic-variable-value) (value (dvar surrogate-dynamic-variable))
  (let ((tls-binds (dynamic-variable-bindings dvar)))
    (if (eq tls-binds +cell-unbound+)
        (setf (dynamic-variable-top-value dvar) value)
        (setf (car tls-binds) value))))

(defmethod dynamic-variable-bound-p ((dvar surrogate-dynamic-variable))
  (not (eq +fake-unbound+ (%dynamic-variable-value dvar))))

(defmethod dynamic-variable-makunbound ((dvar surrogate-dynamic-variable))
  (setf (dynamic-variable-value dvar) +fake-unbound+))


;;; Apparently CCL likes to drop^Helide some writes and that corrupts bindings
;;; table. Let's ensure that the value is volatile.
#+ccl (defvar *ccl-ensure-volatile* nil)
(defmethod call-with-dynamic-variable (cont (dvar surrogate-dynamic-variable)
                                       &optional (val +fake-unbound+))
  (push val (dynamic-variable-bindings dvar))
  (let (#+ccl (*ccl-ensure-volatile* (dynamic-variable-bindings dvar)))
    (unwind-protect (funcall cont)
      (pop (dynamic-variable-bindings dvar)))))

Thread-local variables

We've refactored the previous code to be extensible. Now we can use metaobjects from the previous post without change. We can also test both implementations in the same process interchangeably by customizing the default class parameter.

Now it is time to have some fun and extend dynamic variables into variables whose top value is not shared between threads. This will give us the ultimate thread safety. With our new protocol the implementation is trivial!

The protocol

First we will define the protocol class. THREAD-LOCAL-VARIABLE is a variant of a DYNAMIC-VARIABLE with thread-local top values.

We specify initialization arguments :INITVAL and :INITFUN that will be used to assign the top value of a binding. The difference is that INITVAL specifies a single value, while INITFUN can produce a unique object on each invocation. INITVAL takes precedence over INITFUN, and if neither is supplied, then the variable is unbound.

We include the constructor that builds on MAKE-DYNAMIC-VARIABLE-USING-KEY, and macros corresponding to DEFVAR and DEFPARAMETER. Note that they expand to :INITFUN - this ensures that the initialization form is re-evaluated for each new thread where the variable is used.

(defclass thread-local-variable (dynamic-variable) ())

(defmethod initialize-instance :after
    ((self thread-local-variable) &key initfun initval)
  (declare (ignore self initfun initval)))

(defparameter *default-thread-local-variable-class*
  #-fake-progv-kludge 'standard-thread-local-variable
  #+fake-progv-kludge 'surrogate-thread-local-variable)

(defun make-thread-local-variable (&rest initargs)
  (apply #'make-dynamic-variable-using-key
         *default-thread-local-variable-class* initargs))

(defmacro create-tls-variable (&optional (form nil fp) &rest initargs)
  `(make-thread-local-variable 
    ,@(when fp `(:initfun (lambda () ,form)))
    ,@initargs))

(defmacro define-tls-variable (name &rest initform-and-initargs)
  `(defvar ,name (create-tls-variable ,@initform-and-initargs)))

(defmacro define-tls-parameter (name &rest initform-and-initargs)
  `(defparameter ,name (create-tls-variable ,@initform-and-initargs)))

Perhaps it is a good time to introduce a new convention for tls variable names. I think that surrounding names with the minus sign is a nice idea, because it signifies that it is something less than a global value. For example:

DYNAMIC-VARS> (define-tls-variable -context- 
                  (progn
                    (print "Initializing context!")
                    (list :context)))
-CONTEXT-
DYNAMIC-VARS> -context-
#<a EU.TURTLEWARE.DYNAMIC-VARS::STANDARD-THREAD-LOCAL-VARIABLE 0x7f7636c08640>
DYNAMIC-VARS> (dref -context-)

"Initializing context!" 
(:CONTEXT)
DYNAMIC-VARS> (dref -context-)
(:CONTEXT)
DYNAMIC-VARS> (dset -context- :the-new-value)

:THE-NEW-VALUE
DYNAMIC-VARS> (dref -context-)
:THE-NEW-VALUE
DYNAMIC-VARS> (bt:make-thread
               (lambda ()
                 (print "Let's read it!")
                 (print (dref -context-))))
#<process "Anonymous thread" 0x7f7637a26cc0>

"Let's read it!" 
"Initializing context!" 
(:CONTEXT) 
DYNAMIC-VARS> (dref -context-)
:THE-NEW-VALUE

The implementation

You might have noticed the inconspicuous operator DYNAMIC-VARIABLE-BINDINGS that is part of the protocol. It returns an opaque object that represents values of the dynamic variable in the current context:

  • for STANDARD-DYNAMIC-VARIABLE it is a symbol
  • for SURROGATE-DYNAMIC-VARIABLE it is a thread-local list of bindings

In any case all other operators first take this object and then use it to read, write or bind the value. The gist of the tls variables implementation is to always return an object that is local to the thread. To store these objects we will use the tls-table we've defined earlier.
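
For illustration, this is roughly what that opaque object looks like for the two classes defined above (a sketch; the gensym's name will of course differ):

(dynamic-variable-bindings
 (make-dynamic-variable-using-key 'standard-dynamic-variable))
;; => #:G42                ; a symbol used as the value cell
(dynamic-variable-bindings
 (make-dynamic-variable-using-key 'surrogate-dynamic-variable))
;; => (NO-BINDING)         ; +CELL-UNBOUND+, no thread-local bindings yet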

(defclass thread-local-variable-mixin (dynamic-variable)
  ((tls-table
    :initform (make-tls-table)
    :reader dynamic-variable-tls-table)
   (tls-initfun
    :initarg :initfun
    :initform nil
    :accessor thread-local-variable-initfun)
   (tls-initval
    :initarg :initval
    :initform +fake-unbound+
    :accessor thread-local-variable-initval)))

For the class STANDARD-THREAD-LOCAL-VARIABLE we will simply return a different symbol depending on the thread:

(defclass standard-thread-local-variable (thread-local-variable-mixin
                                         thread-local-variable
                                         standard-dynamic-variable)
  ())

(defmethod dynamic-variable-bindings ((tvar standard-thread-local-variable))
  (flet ((make-new-tls-bindings ()
           (let ((symbol (gensym))
                 (initval (thread-local-variable-initval tvar))
                 (initfun (thread-local-variable-initfun tvar)))
             (cond
               ((not (eq +fake-unbound+ initval))
                (setf (symbol-value symbol) initval))
               ((not (null initfun))
                (setf (symbol-value symbol) (funcall initfun))))
             symbol)))
    (let ((key (bt:current-thread)))
      (with-tls-table (tls-table (dynamic-variable-tls-table tvar))
        (or (gethash key tls-table)
            (setf (gethash key tls-table)
                  (make-new-tls-bindings)))))))

And for the class SURROGATE-THREAD-LOCAL-VARIABLE the only difference from the SURROGATE-DYNAMIC-VARIABLE implementation is to cons a new list as the initial value (even when it is unbound) to ensure it is not EQ to +CELL-UNBOUND+.

(defclass surrogate-thread-local-variable (thread-local-variable-mixin
                                          thread-local-variable
                                          surrogate-dynamic-variable)
  ())

(defmethod dynamic-variable-bindings ((tvar surrogate-thread-local-variable))
  (flet ((make-new-tls-bindings ()
           (let ((initval (thread-local-variable-initval tvar))
                 (initfun (thread-local-variable-initfun tvar)))
             (cond
               ((not (eq +fake-unbound+ initval))
                (list initval))
               ((not (null initfun))
                (list (funcall initfun)))
               (t
                (list +fake-unbound+))))))
    (let ((key (bt:current-thread)))
      (with-tls-table (tls-table (dynamic-variable-tls-table tvar))
        (or (gethash key tls-table)
            (setf (gethash key tls-table)
                  (make-new-tls-bindings)))))))

That's all, now we have two implementations of thread-local variables. The ramifications are similar to those for "ordinary" dynamic variables - the standard implementation is not advised for SBCL, because it will crash into LDB.

Thread-local slots

First we are going to allow defining dynamic variable types with abbreviated names. This will enable us to specify that type in the slot definition, for example (MY-SLOT :DYNAMIC :TLS :INITFORM 34).

;;; Examples how to add shorthand type names for the dynamic slots:

(defmethod make-dynamic-variable-using-key ((key (eql :tls)) &rest initargs)
  (apply #'make-dynamic-variable-using-key
         *default-thread-local-variable-class* initargs))

(defmethod make-dynamic-variable-using-key ((key (eql :normal-tls)) &rest initargs)
  (apply #'make-dynamic-variable-using-key
         'standard-thread-local-variable initargs))

(defmethod make-dynamic-variable-using-key ((key (eql :kludge-tls)) &rest initargs)
  (apply #'make-dynamic-variable-using-key
         'surrogate-thread-local-variable initargs))

;;; For *DEFAULT-DYNAMIC-VARIABLE* specify :DYNAMIC T.

(defmethod make-dynamic-variable-using-key ((key (eql :normal-dyn)) &rest initargs)
  (apply #'make-dynamic-variable-using-key
         'standard-dynamic-variable initargs))

(defmethod make-dynamic-variable-using-key ((key (eql :kludge-dyn)) &rest initargs)
  (apply #'make-dynamic-variable-using-key
         'surrogate-dynamic-variable initargs))

In order to do that, we need to remember the value of the argument :DYNAMIC. We will read it with DYNAMIC-VARIABLE-TYPE and that value will be available in both the direct and the effective slot:

;;; Slot definitions
;;; There is a considerable boilerplate involving customizing slots.
;;;
;;; - direct slot definition: local to a single defclass form
;;;
;;; - effective slot definition: combination of all direct slots with the same
;;;   name in the class and its superclasses
;;;
(defclass dynamic-direct-slot (mop:standard-direct-slot-definition)
  ((dynamic :initform nil :initarg :dynamic :reader dynamic-variable-type)))

;;; The metaobject protocol did not specify an elegant way to communicate
;;; between the direct slot definition and the effective slot definition.
;;; Luckily we have dynamic bindings! :-)
(defvar *kludge/mop-deficiency/dynamic-variable-type* nil)

;;; DYNAMIC-EFFECTIVE-SLOT is implemented so that SLOT-VALUE returns the value
;;; of the dynamic variable that is stored with the instance.
;;;
;;; It would be nice if we could specify :ALLOCATION :DYNAMIC for the slot, but
;;; then STANDARD-INSTANCE-ACCESS would go belly up. We could make a clever
;;; workaround, but who cares?
(defclass dynamic-effective-slot (mop:standard-effective-slot-definition)
  ((dynamic :initform *kludge/mop-deficiency/dynamic-variable-type*
            :reader dynamic-variable-type)))

Moreover we specialize the function MAKE-DYNAMIC-VARIABLE-USING-KEY on the effective slot class. The initargs in this method are meant for the instance. When the dynamic variable is created, we check whether it is a thread-local variable and initialize its INITVAL from the instance INITARGS (filtered through MOP:SLOT-DEFINITION-INITARGS) and its INITFUN from MOP:SLOT-DEFINITION-INITFUNCTION:

(defmethod make-dynamic-variable-using-key
    ((key dynamic-effective-slot) &rest initargs)
  (let* ((dvar-type (dynamic-variable-type key))
         (dvar (make-dynamic-variable-using-key dvar-type)))
    (when (typep dvar 'thread-local-variable)
      (loop with slot-initargs = (mop:slot-definition-initargs key)
            for (key val) on initargs by #'cddr
            when (member key slot-initargs) do
              (setf (thread-local-variable-initval dvar) val))
      (setf (thread-local-variable-initfun dvar)
            (mop:slot-definition-initfunction key)))
    dvar))

The rest of the implementation of DYNAMIC-EFFECTIVE-SLOT is unchanged:

(defmethod mop:slot-value-using-class
    ((class standard-class)
     object
     (slotd dynamic-effective-slot))
  (dref (slot-dvar object slotd)))

(defmethod (setf mop:slot-value-using-class)
    (new-value
     (class standard-class)
     object
     (slotd dynamic-effective-slot))
  (dset (slot-dvar object slotd) new-value))

(defmethod mop:slot-boundp-using-class
  ((class standard-class)
   object
   (slotd dynamic-effective-slot))
  (dynamic-variable-bound-p (slot-dvar object slotd)))

(defmethod mop:slot-makunbound-using-class
  ((class standard-class)
   object
   (slotd dynamic-effective-slot))
  (dynamic-variable-makunbound (slot-dvar object slotd)))

The implementation of CLASS-WITH-DYNAMIC-SLOTS is also very similar. The first difference is that ALLOCATE-INSTANCE calls MAKE-DYNAMIC-VARIABLE-USING-KEY instead of MAKE-DYNAMIC-VARIABLE and supplies the effective slot definition as the key, and the instance initargs as the remaining arguments. Note that at this point the initargs are already validated by MAKE-INSTANCE. The second difference is that MOP:COMPUTE-EFFECTIVE-SLOT-DEFINITION binds the flag *KLUDGE/MOP-DEFICIENCY/DYNAMIC-VARIABLE-TYPE* to the slot's DYNAMIC-VARIABLE-TYPE.

;;; This is a metaclass that allows defining dynamic slots that are bound with
;;; the operator SLOT-DLET, and, depending on the type, may have thread-local
;;; top value.
;;;
;;; The metaclass CLASS-WITH-DYNAMIC-SLOTS specifies alternative effective slot
;;; definitions for slots with an initarg :dynamic.
(defclass class-with-dynamic-slots (standard-class) ())

;;; Class with dynamic slots may be subclasses of the standard class.
(defmethod mop:validate-superclass ((class class-with-dynamic-slots)
                                    (super standard-class))
  t)

;;; When allocating the instance we initialize each dynamic slot to a fresh
;;; dynamic variable object that represents it.
(defmethod allocate-instance ((class class-with-dynamic-slots) &rest initargs)
  (let ((object (call-next-method)))
    (loop for slotd in (mop:class-slots class)
          when (typep slotd 'dynamic-effective-slot) do
            (setf (mop:standard-instance-access
                   object
                   (mop:slot-definition-location slotd))
                  (apply #'make-dynamic-variable-using-key slotd initargs)))
    object))

;;; To improve potential composability of CLASS-WITH-DYNAMIC-SLOTS with other
;;; metaclasses we treat specially only slots that have :DYNAMIC in initargs,
;;; otherwise we call the next method.
(defmethod mop:direct-slot-definition-class
    ((class class-with-dynamic-slots) &rest initargs)
  (loop for (key) on initargs by #'cddr
        when (eq key :dynamic)
          do (return-from mop:direct-slot-definition-class
               (find-class 'dynamic-direct-slot)))
  (call-next-method))

(defmethod mop:compute-effective-slot-definition
    ((class class-with-dynamic-slots)
     name
     direct-slotds)
  (declare (ignore name))
  (let ((latest-slotd (first direct-slotds)))
    (if (typep latest-slotd 'dynamic-direct-slot)
        (let ((*kludge/mop-deficiency/dynamic-variable-type*
                (dynamic-variable-type latest-slotd)))
          (call-next-method))
        (call-next-method))))

(defmethod mop:effective-slot-definition-class
    ((class class-with-dynamic-slots) &rest initargs)
  (declare (ignore initargs))
  (if *kludge/mop-deficiency/dynamic-variable-type*
      (find-class 'dynamic-effective-slot)
      (call-next-method)))

Finally the implementation of SLOT-DLET does not change:

;;; Accessing and binding symbols behind the slot. We don't use SLOT-VALUE,
;;; because it will return the _value_ of the dynamic variable, and not the
;;; variable itself.
(defun slot-dvar (object slotd)
  (check-type slotd dynamic-effective-slot)
  (mop:standard-instance-access
   object (mop:slot-definition-location slotd)))

(defun slot-dvar* (object slot-name)
  (let* ((class (class-of object))
         (slotd (find slot-name (mop:class-slots class)
                      :key #'mop:slot-definition-name)))
    (slot-dvar object slotd)))

(defmacro slot-dlet (bindings &body body)
  `(dlet ,(loop for ((object slot-name) val) in bindings
                collect `((slot-dvar* ,object ,slot-name) ,val))
     ,@body))

Finally we can define a class with slots that do not share the top value:

DYNAMIC-VARS> (defclass c1 ()
                  ((slot1 :initarg :slot1 :dynamic nil :accessor slot1)
                   (slot2 :initarg :slot2 :dynamic t   :accessor slot2)
                   (slot3 :initarg :slot3 :dynamic :tls :accessor slot3))
                  (:metaclass class-with-dynamic-slots))
#<The EU.TURTLEWARE.DYNAMIC-VARS::CLASS-WITH-DYNAMIC-SLOTS EU.TURTLEWARE.DYNAMIC-VARS::C1>
DYNAMIC-VARS> (with-slots (slot1 slot2 slot3) *object*
                (setf slot1 :x slot2 :y slot3 :z)
                (list slot1 slot2 slot3))
(:X :Y :Z)
DYNAMIC-VARS> (bt:make-thread
               (lambda ()
                 (with-slots (slot1 slot2 slot3) *object*
                   (setf slot1 :i slot2 :j slot3 :k)
                   (print (list slot1 slot2 slot3)))))

#<process "Anonymous thread" 0x7f76424c0240>

(:I :J :K) 
DYNAMIC-VARS> (with-slots (slot1 slot2 slot3) *object*
                (list slot1 slot2 slot3))
(:I :J :Z)

What can we use it for?

Now that we know how to define thread-local variables, we are left with the question of what we can use them for. Consider a line-buffering stream. One possible implementation could be sketched as:

(defclass line-buffering-stream (fancy-stream)
  ((current-line :initform (make-adjustable-string)
                 :accessor current-line)
   (current-ink :initform +black+
                :accessor current-ink)))

(defmethod stream-write-char ((stream line-buffering-stream) char)
  (if (char= char #\newline)
      (terpri stream)
      (vector-push-extend char (current-line stream))))

(defmethod stream-terpri ((stream line-buffering-stream))
  (%put-line-on-screen (current-line stream) (current-ink stream))
  (setf (fill-pointer (current-line stream)) 0))

If this stream is shared between multiple threads, then even if individual operations and %PUT-LINE-ON-SCREEN are thread-safe, we have a problem. For example FORMAT writes are usually not atomic, so individual lines are easily corrupted. If we use custom colors, these are also subject to race conditions. The solution is as easy as making both slots thread-local. In that case the buffered line is private to each thread and it is put on the screen atomically:

(defclass line-buffering-stream (fancy-stream)
  ((current-line
    :initform (make-adjustable-string)
    :accessor current-line
    :dynamic :tls)
   (current-ink
    :initform +black+
    :accessor current-ink
    :dynamic :tls))
  (:metaclass class-with-dynamic-slots))
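
A usage sketch under the same assumptions as above (FANCY-STREAM is a Gray character output stream, +BLACK+ and +DARK-RED+ are stand-in inks, %PUT-LINE-ON-SCREEN is the hypothetical backend call):

(defvar *output* (make-instance 'line-buffering-stream))

(defun report (text ink)
  (setf (current-ink *output*) ink)  ; only this thread's ink changes
  (format *output* "~a~%" text))     ; buffered in this thread's private line

(bt:make-thread (lambda () (report "thread A reporting" +dark-red+)))
(report "main thread reporting" +black+)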

This technique is not limited to streams. It may benefit thread-safe drawing, request processing, resource management and more. By subclassing DYNAMIC-VARIABLE we could also create variables that are local to objects other than processes.

I hope that you've enjoyed reading this post as much as I enjoyed writing it. If you are interested in a full standalone implementation, with tests and system definitions, you may get it here. Cheers!

Joe MarshallDon't Try to Program in Lisp

· 60 days ago

A comment on my previous post said,

The most difficult thing when coming to a different language is to leave the other language behind. The kind of friction experienced here is common when transliterating ideas from one language to another. Go (in this case) is telling you it just doesn't like to work like this.
Try writing simple Go, instead of reaching for Lisp idioms. Then find the ways that work for Go to express the concepts you find.

That's not at all how I approach programming.

A friend of mine once paid me a high compliment. He said, “Even your C code looks like Lisp.”

When I write code, I don't think in terms of the language I'm using, I think in terms of the problem I'm solving. I'm a mostly functional programmer, so I like to think in terms of functions and abstractions. I mostly reason about my code informally, but I draw upon the formal framework of Lambda Calculus. Lambda Calculus is a simple, but powerful (and universal) model of computation.

Programming therefore becomes a matter of expressing the solution to a problem with the syntax and idioms of the language I'm using. Lisp was inspired by Lambda Calculus, so there is little friction in expressing computations in Lisp. Lisp is extensible and customizable, so I can add new syntax and idioms as desired.

Other languages are less accommodating. Some computations are not easily expressible in the syntax of the language, or the semantics of the language are quirky and inconsistent. Essentially, every general purpose fourth generation programming language can be viewed as a poorly-specified, half-assed, incomplete, bug-ridden implementation of half of Common Lisp. The friction comes from working around the limitations of the language.

TurtleWareDynamic Vars - The Empire Strikes Back

· 65 days ago

Table of Contents

  1. Thread Local storage exhausted
  2. The layer of indirection
  3. I can fix her
  4. Let's write some tests!
  5. Summary

Thread Local storage exhausted

In the last post I've described a technique for using dynamic variables by value instead of by name, by utilizing the operator PROGV. Apparently it works fine on all Common Lisp implementations I've tried except for SBCL, where the number of thread-local variables is by default limited to something below 4000. To add insult to injury, these variables are not garbage collected.

Try the following code to crash into LDB:

(defun foo ()
  (loop for i from 0 below 4096 do
    (when (zerop (mod i 100))
      (print i))
    (progv (list (gensym)) (list 42)
      (values))))
(foo)

This renders our new technique not very practical given SBCL's popularity. We need to either abandon the idea or come up with a workaround.

The layer of indirection

Luckily for us we've already introduced a layer of indirection. The operators to access dynamic variables are called DLET, DSET and DREF. This means that it is enough to provide a kludge implementation for SBCL with minimal changes to the remaining code.

The old code works the same as previously, except that instead of SYMBOL-VALUE we use the accessor DYNAMIC-VARIABLE-VALUE, and the old call to PROGV is now DYNAMIC-VARIABLE-PROGV. Moreover DYNAMIC-EFFECTIVE-SLOT used the functions BOUNDP and MAKUNBOUND, so we replace these with DYNAMIC-VARIABLE-BOUND-P and DYNAMIC-VARIABLE-MAKUNBOUND. To abstract things away further we also introduce the constructor MAKE-DYNAMIC-VARIABLE.

(defpackage "EU.TURTLEWARE.BLOG/DLET"
  (:local-nicknames ("MOP" #+closer-mop "C2MOP"
                           #+(and (not closer-mop) ecl) "MOP"
                           #+(and (not closer-mop) ccl) "CCL"
                           #+(and (not closer-mop) sbcl) "SB-MOP"))
  (:use "CL"))
(in-package "EU.TURTLEWARE.BLOG/DLET")

(eval-when (:compile-toplevel :execute :load-toplevel)
  (unless (member :bordeaux-threads *features*)
    (error "Please load BORDEAUX-THREADS."))
  (when (member :sbcl *features*)
    (unless (member :fake-progv-kludge *features*)
      (format t "~&;; Using FAKE-PROGV-KLUDGE for SBCL.~%")
      (push :fake-progv-kludge *features*))))

(defmacro dlet (bindings &body body)
  (flet ((pred (binding)
           (and (listp binding) (= 2 (length binding)))))
    (unless (every #'pred bindings)
      (error "DLET: bindings must be lists of two values.~%~
                Invalid bindings:~%~{ ~s~%~}" (remove-if #'pred bindings))))
  (loop for (var val) in bindings
        collect var into vars
        collect val into vals
        finally (return `(dynamic-variable-progv (list ,@vars) (list ,@vals)
                           ,@body))))

(defmacro dset (&rest pairs)
  `(setf ,@(loop for (var val) on pairs by #'cddr
                 collect `(dref ,var)
                 collect val)))

(defmacro dref (variable)
  `(dynamic-variable-value ,variable))

;;; ...

(defmethod mop:slot-boundp-using-class
    ((class standard-class)
     object
     (slotd dynamic-effective-slot))
  (dynamic-variable-bound-p (slot-dvar object slotd)))

(defmethod mop:slot-makunbound-using-class
    ((class standard-class)
     object
     (slotd dynamic-effective-slot))
  (dynamic-variable-makunbound (slot-dvar object slotd)))

With these in place we can change the portable implementation to conform.

#-fake-progv-kludge
(progn
  (defun make-dynamic-variable ()
    (gensym))

  (defun dynamic-variable-value (variable)
    (symbol-value variable))

  (defun (setf dynamic-variable-value) (value variable)
    (setf (symbol-value variable) value))

  (defun dynamic-variable-bound-p (variable)
    (boundp variable))

  (defun dynamic-variable-makunbound (variable)
    (makunbound variable))

  (defmacro dynamic-variable-progv (vars vals &body body)
    `(progv ,vars ,vals ,@body)))

I can fix her

The implementation for SBCL will mediate access to the dynamic variable's value with a synchronized hash table with weak keys. The current process is the hash table key and the list of bindings is the value. For compatibility between implementations the top-level value of the variable will be shared between threads.

The variable +FAKE-UNBOUND+ is the marker that signifies that the variable has no value. When the list of bindings is EQ to +CELL-UNBOUND+, it means that we should use the global value. We add new bindings by pushing onto that list.

#+fake-progv-kludge
(progn
  (defvar +fake-unbound+ 'unbound)
  (defvar +cell-unbound+ '(no-binding))

  (defclass dynamic-variable ()
    ((tls-table
      :initform (make-hash-table :synchronized t :weakness :key)
      :reader dynamic-variable-tls-table)
     (top-value
      :initform +fake-unbound+
      :accessor dynamic-variable-top-value)))

  (defun make-dynamic-variable ()
    (make-instance 'dynamic-variable))

  (defun dynamic-variable-bindings (dvar)
    (let ((process (bt:current-thread))
          (tls-table (dynamic-variable-tls-table dvar)))
      (gethash process tls-table +cell-unbound+)))

  (defun (setf dynamic-variable-bindings) (value dvar)
    (let ((process (bt:current-thread))
          (tls-table (dynamic-variable-tls-table dvar)))
      (setf (gethash process tls-table +cell-unbound+) value))))

We define two readers for the variable value - one that simply reads the value, and the other that signals an error if the variable is unbound. Writer for its value either replaces the current binding, or if the value cell is unbound, then we modify the top-level symbol value. We use the value +FAKE-UNBOUND+ to check whether the variable is bound and to make it unbound.

#+fake-progv-kludge
(progn
  (defun %dynamic-variable-value (dvar)
    (let ((tls-binds (dynamic-variable-bindings dvar)))
      (if (eq tls-binds +cell-unbound+)
          (dynamic-variable-top-value dvar)
          (car tls-binds))))

  (defun dynamic-variable-value (dvar)
    (let ((tls-value (%dynamic-variable-value dvar)))
      (when (eq tls-value +fake-unbound+)
        (error 'unbound-variable :name "(unnamed)"))
      tls-value))

  (defun (setf dynamic-variable-value) (value dvar)
    (let ((tls-binds (dynamic-variable-bindings dvar)))
      (if (eq tls-binds +cell-unbound+)
          (setf (dynamic-variable-top-value dvar) value)
          (setf (car tls-binds) value))))

  (defun dynamic-variable-bound-p (dvar)
    (not (eq +fake-unbound+ (%dynamic-variable-value dvar))))

  (defun dynamic-variable-makunbound (dvar)
    (setf (dynamic-variable-value dvar) +fake-unbound+)))

Finally we define the operator to dynamically bind variables that behaves similar to PROGV. Note that we PUSH and POP from the thread-local hash table DYNAMIC-VARIABLE-BINDINGS, so no synchronization is necessary.

#+fake-progv-kludge
(defmacro dynamic-variable-progv (vars vals &body body)
  (let ((svars (gensym))
        (svals (gensym))
        (var (gensym))
        (val (gensym)))
    `(let ((,svars ,vars))
       (loop for ,svals = ,vals then (rest ,svals)
             for ,var in ,svars
             for ,val = (if ,svals (car ,svals) +fake-unbound+)
             do (push ,val (dynamic-variable-bindings ,var)))
       (unwind-protect (progn ,@body)
         (loop for ,var in ,svars
               do (pop (dynamic-variable-bindings ,var)))))))

Let's write some tests!

But of course, we are also going to write a test framework. It's short, I promise. As a bonus the API is compatible with fiveam, so it is possible to drop these tests as-is into the appropriate test suite.

(defvar *all-tests* '())

(defun run-tests ()
  (dolist (test (reverse *all-tests*))
    (format *debug-io* "Test ~a... " test)
    (handler-case (funcall test)
      (serious-condition (c)
        (format *debug-io* "Failed: ~a~%" c))
      (:no-error (&rest args)
        (declare (ignore args))
        (format *debug-io* "Passed.~%")))))

(defmacro test (name &body body)
  `(progn
     (pushnew ',name *all-tests*)
     (defun ,name () ,@body)))

(defmacro is (form)
  `(assert ,form))

(defmacro pass ())

(defmacro signals (condition form)
  `(is (block nil
         (handler-case ,form
           (,condition () (return t)))
         nil)))

(defmacro finishes (form)
  `(is (handler-case ,form
         (serious-condition (c)
           (declare (ignore c))
           nil)
         (:no-error (&rest args)
           (declare (ignore args))
           t))))

Now let's get to tests. First we'll test our metaclass:

(defclass dynamic-let.test-class ()
  ((slot1 :initarg :slot1 :dynamic nil :accessor slot1)
   (slot2 :initarg :slot2 :dynamic t   :accessor slot2)
   (slot3 :initarg :slot3              :accessor slot3))
  (:metaclass class-with-dynamic-slots))

(defparameter *dynamic-let.test-instance-1*
  (make-instance 'dynamic-let.test-class
                 :slot1 :a :slot2 :b :slot3 :c))

(defparameter *dynamic-let.test-instance-2*
  (make-instance 'dynamic-let.test-class
                 :slot1 :x :slot2 :y :slot3 :z))

(test dynamic-let.1
  (let ((o1 *dynamic-let.test-instance-1*)
        (o2 *dynamic-let.test-instance-2*))
    (with-slots (slot1 slot2 slot3) o1
      (is (eq :a slot1))
      (is (eq :b slot2))
      (is (eq :c slot3)))
    (with-slots (slot1 slot2 slot3) o2
      (is (eq :x slot1))
      (is (eq :y slot2))
      (is (eq :z slot3)))))

(test dynamic-let.2
  (let ((o1 *dynamic-let.test-instance-1*)
        (o2 *dynamic-let.test-instance-2*))
    (signals error (slot-dlet (((o1 'slot1) 1)) nil))
    (slot-dlet (((o1 'slot2) :k))
      (is (eq :k (slot-value o1 'slot2)))
      (is (eq :y (slot-value o2 'slot2))))))

(test dynamic-let.3
  (let ((o1 *dynamic-let.test-instance-1*)
        (exit nil)
        (fail nil))
    (flet ((make-runner (values)
             (lambda ()
               (slot-dlet (((o1 'slot2) :start))
                 (let ((value (slot2 o1)))
                   (unless (eq value :start)
                     (setf fail value)))
                 (loop until (eq exit t) do
                   (setf (slot2 o1) (elt values (random (length values))))
                   (let ((value (slot2 o1)))
                     (unless (member value values)
                       (setf fail value)
                       (setf exit t))))))))
      (let ((r1 (bt:make-thread (make-runner '(:k1 :k2))))
            (r2 (bt:make-thread (make-runner '(:k3 :k4))))
            (r3 (bt:make-thread (make-runner '(:k5 :k6)))))
        (sleep .1)
        (setf exit t)
        (map nil #'bt:join-thread (list r1 r2 r3))
        (is (eq (slot2 o1) :b))
        (is (null fail))))))

Then let's test the dynamic variable itself:

(test dynamic-let.4
  "Test basic dvar operators."
  (let ((dvar (make-dynamic-variable)))
    (is (eql 42 (dset dvar 42)))
    (is (eql 42 (dref dvar)))
    (ignore-errors
     (dlet ((dvar :x))
       (is (eql :x (dref dvar)))
       (error "foo")))
    (is (eql 42 (dref dvar)))))

(test dynamic-let.5
  "Test bound-p operator."
  (let ((dvar (make-dynamic-variable)))
    (is (not (dynamic-variable-bound-p dvar)))
    (dset dvar 15)
    (is (dynamic-variable-bound-p dvar))
    (dynamic-variable-makunbound dvar)
    (is (not (dynamic-variable-bound-p dvar)))))

(test dynamic-let.6
  "Test makunbound operator."
  (let ((dvar (make-dynamic-variable)))
    (dset dvar t)
    (is (dynamic-variable-bound-p dvar))
    (finishes (dynamic-variable-makunbound dvar))
    (is (not (dynamic-variable-bound-p dvar)))))

(test dynamic-let.7
  "Test locally bound-p operator."
  (let ((dvar (make-dynamic-variable)))
    (is (not (dynamic-variable-bound-p dvar)))
    (dlet ((dvar 15))
      (is (dynamic-variable-bound-p dvar)))
    (is (not (dynamic-variable-bound-p dvar)))))

(test dynamic-let.8
  "Test locally unbound-p operator."
  (let ((dvar (make-dynamic-variable)))
    (dset dvar t)
    (is (dynamic-variable-bound-p dvar))
    (dlet ((dvar nil))
      (is (dynamic-variable-bound-p dvar))
      (finishes (dynamic-variable-makunbound dvar))
      (is (not (dynamic-variable-bound-p dvar))))
    (is (dynamic-variable-bound-p dvar))))

(test dynamic-let.9
  "Stress test the implementation (see :FAKE-PROGV-KLUDGE)."
  (finishes                              ; at the same time
    (let ((dvars (loop repeat 4096 collect (make-dynamic-variable))))
      ;; ensure tls variable
      (loop for v in dvars do
        (dlet ((v 1))))
      (loop for i from 0 below 4096
            for r = (random 4096)
            for v1 in dvars
            for v2 = (elt dvars r) do
              (when (zerop (mod i 64))
                (pass))
              (dlet ((v1 42)
                     (v2 43))
                (values))))))

(test dynamic-let.0
  "Stress test the implementation (see :FAKE-PROGV-KLUDGE)."
  (finishes                             ; can be gc-ed
    (loop for i from 0 below 4096 do
      (when (zerop (mod i 64))
        (pass))
      (dlet (((make-dynamic-variable) 42))
        (values)))))

All that is left is to test both dynamic variable implementations:

BLOG/DLET> (lisp-implementation-type)
"ECL"
BLOG/DLET> (run-tests)
Test DYNAMIC-LET.1... Passed.
Test DYNAMIC-LET.2... Passed.
Test DYNAMIC-LET.3... Passed.
Test DYNAMIC-LET.4... Passed.
Test DYNAMIC-LET.5... Passed.
Test DYNAMIC-LET.6... Passed.
Test DYNAMIC-LET.7... Passed.
Test DYNAMIC-LET.8... Passed.
Test DYNAMIC-LET.9... Passed.
Test DYNAMIC-LET.0... Passed.
NIL

And with the kludge:

BLOG/DLET> (lisp-implementation-type)
"SBCL"
BLOG/DLET> (run-tests)
Test DYNAMIC-LET.1... Passed.
Test DYNAMIC-LET.2... Passed.
Test DYNAMIC-LET.3... Passed.
Test DYNAMIC-LET.4... Passed.
Test DYNAMIC-LET.5... Passed.
Test DYNAMIC-LET.6... Passed.
Test DYNAMIC-LET.7... Passed.
Test DYNAMIC-LET.8... Passed.
Test DYNAMIC-LET.9... Passed.
Test DYNAMIC-LET.0... Passed.
NIL

Summary

In this post we've made our implementation work on SBCL even when there are more than a few thousand dynamic variables. We've also added a simple test suite that checks the basic behavior.

As it often happens, after achieving some goal we get greedy and want to achieve more. That's the case here as well. In the next (and last) post in this series I'll explore the idea of adding truly thread-local variables without a shared global value. This will be useful for lazily creating context on threads that are outside of our control. We'll also generalize the implementation so it is possible to subclass and implement one's own flavor of a dynamic variable.

vindarelRunning my 4th Common Lisp script in production© - you can do it too

· 71 days ago

Last week I finished a new service written in Common Lisp. It now runs in production© every morning, and it expands the set of services I offer to clients.

It’s the 4th service of this kind that I developed:

  • they are not big - but have to be done nonetheless, and the quicker the better (they each amount to 1k to 2k lines of Lisp code),
  • they are not part of a super advanced domain that requires Common Lisp superpowers - I am the one who benefits from CL during development,
  • I could have written them in Python - and conversely nothing prevented me from writing them in Common Lisp.

So here lies the goal of this post: illustrate that you don’t need to need a super difficult problem to use Common Lisp. This has been asked many times, directly to me or on social media :)

At the same time, I want to encourage you to write a little something about how you use Common Lisp in the real world. Sharing creates emulation. Do it! If you don’t have a blog you can simply write in a new GitHub repository or in a Gist and come share on /r/lisp. We don’t care. Thanks <3

We’ll briefly see what my scripts do, what libraries I use, how I deploy them, what I did along the way.

Needless to say that I dogfooded my CIEL (beta) meta-library and scripting tool for all those projects.

Table of Contents

Scripts n°4 and 2 - shaping and sending data - when you can write Lisp on the side

My latest script needs to read data from a DB, format what’s necessary according to specifications, and send the result by SFTP.

In this case I read a DB that I own, created by software that I develop and host. So I could have developed this script in the software itself, right? I could have, but I would have been tied to the main project’s versioning scheme, quirks, and deployment. I preferred to write this script on the side. And since it can be done on the side, it can be done in Common Lisp.

I have to extract products and their data (price, VAT...), aggregate the numbers for each day, write this to a file, according to a specification.

Extract of my specification document.

To read the DB, I used cl-dbi. I didn’t format the SQL with SxQL this time like in my web apps (where I use the Mito light ORM), but I wrote SQL directly. I’m spoiled by the Django ORM (which has its idiosyncrasies and shortcomings), so I double checked the different kinds of JOINs and all went well.
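
For the curious, the cl-dbi side can be as small as this (a sketch with made-up table and column names; the SQLite driver is just an example):

(dbi:with-connection (conn :sqlite3 :database-name "shop.db")
  (let* ((query (dbi:prepare conn "SELECT product_id, price, vat, sold_at
                                   FROM sale WHERE sold_at >= ?"))
         (rows  (dbi:fetch-all (dbi:execute query (list "2024-01-01")))))
    ;; each row comes back as a plist keyed by column name
    rows))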

I had to group rows by some properties, so it was a great time to use serapeum:assort. I left you an example here: https://dev.to/vindarel/common-lisps-group-by-is-serapeumassort-32ma
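
In case you haven't met it, ASSORT groups the elements of a sequence into subsequences of "like" elements; for instance (the ROWS and :|day| part is only an illustration):

(serapeum:assort '(1 2 3 4 5 6) :key #'evenp)
;; => ((1 3 5) (2 4 6))

;; grouping DB rows (plists) by day:
(serapeum:assort rows :key (lambda (row) (getf row :|day|)) :test #'equal)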

Dates have to be handled in different formats. I used local-time of course, and I still greatly appreciate its lispy formatter syntax:

(defun date-yymmddhhnnss (&optional date stream)
  (local-time:format-timestring stream
                                (or date (local-time:now))
                                :format
                                '((:year 4)
                                  (:month 2)
                                  (:day 2)
                                  (:hour 2)
                                  (:min 2)
                                  (:sec 2)
                                  )))

the 2 in (:month 2) is to ensure the month is written with 2 digits.

Once the file is written, I have to send it to a SFTP server, with the client’s codes.

I wrote a profile class to encapsulate the client’s data as well as some functions to read the credentials from either environment variables, the file system, or a lisp variable. I had a top-level profile object for ease of testing, but I made sure that my functions formatting or sending data required a profile parameter.

(defun send-stock (profile &key date) ...)
(defun write-stock (profile filename) ...)

Still nothing surprising, but it’s tempting to only use global parameters for a one-off script. Except the program grows and you pay for the mess later.

SFTP

To send the result through SFTP, I had to make a choice. The SFTP command line doesn’t make it possible to give a password as an argument (or via an environment variable, etc). So I use lftp (in the Debian repositories), which allows doing that. In the end, we format a command like this:

lftp sftp://user:****@host  -e "CD I/; put local-file.name; bye"

You can format the command string and run it with uiop:run-program: no problem, but I took the opportunity to release another utility:
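
If you go the uiop:run-program route, it can look like this (a sketch; the user, password, host and file names are placeholders):

(uiop:run-program
 (format nil "lftp sftp://~a:~a@~a -e ~s"
         "user" "password" "sftp.example.org"
         "cd I/; put data.csv; bye")
 :output t :error-output t)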

First, you create a profile object. This one-liner reads the credentials from a lispy file:

(defvar profile (make-profile-from-plist (uiop:read-file-form "CREDS.lisp-expr")))

then you define the commands you’ll want to run:

(defvar command (put :cd "I/" :local-filename "data.csv"))
;; #<PUT cd: "I/", filename: "data.csv" {1007153883}>

and finally you call the run method on a profile and a command. Tada.

Deploying

Build a binary the classic way (it’s all on the Cookbook), send it to your server, run it.
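
For instance, on SBCL "the classic way" boils down to something like this (MY-APP and MY-APP:MAIN are placeholders for your system and entry point):

(ql:quickload :my-app)
(sb-ext:save-lisp-and-die "my-app"
                          :toplevel #'my-app:main
                          :executable t)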

(during a testing phase I have deployed “as a script”, from sources, which is a bit quicker to pull changes and try again on the server)

Set up a CRON job.

No Python virtual env to activate in the CRON environment...

Add command line arguments the easy way or with the library of your choice (I like Clingon).

Script n°2 and simple FTP

My script #2 at the time was similar and simpler. I extract the same products but only take their quantities, and I assemble lines like

EXTRACTION STOCK DU 11/04/2008
....978202019116600010000001387
....978270730656200040000000991

For this service, we have to send the file to a simple FTP server.

We have a pure Lisp library for FTP (and not SFTP) which works very well, cl-ftp.

It’s a typical example of an old library that hasn’t received any update in years and so looks abandoned, that has scant documentation but whose usage is easy to infer, and that does its job as requested.

For example we do this to send a file:

(ftp:with-ftp-connection (conn :hostname hostname
                               :username username
                               :password password
                               :passive-ftp-p t)
  (ftp:store-file conn local-filename filename))

I left you notes about cl-ftp and my SFTP wrapper here:

Scripts n°3 and n°1 - specialized web apps

A recent web app that I’m testing with a couple clients extends an existing stock management system.

This one also was done in order to avoid a Python monolith. I still needed additions in the Python main software, but this little app can be independent and grow on its own. The app maintains its state and communicates it with a REST API.

Searching books in my little web app.

 

Another web app example, used by clients who are strangers to Lisp.

It gives a web interface to their clients (so my clients’ clients, but not all of them, only the institutional ones) so that they can:

  • search for products
  • add them in shopping carts
  • validate the cart, which sends the data to the main software and notifies the owner, who will work on them.

The peculiarities of this app are that:

  • there is no user login, we use unique URLs with UUIDs in the form: http://command.client.com/admin-E9DFOO82-R2D2-007/list?id=1
  • I need a bit of file persistence but I didn’t want the rigidity of a database, so I am using the clache library. Here also, not much development activity, but it works©. I persist lists and hash-tables. Now that the needs grow and the original scope doesn’t cut it any more, I wonder how long I’ll survive without a DB, if only for its short SQL queries versus Lisp code to filter data.

I deploy a self-contained binary: code + html templates in the same binary (+ the implementation, the web server, the debugger...), with Systemd.

I wrote more on how to ship a standalone binary with templates and static assets with Djula templates here:

I can connect to the running app with a Swank server to check and set parameters, which is super helpful and harmless.
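
Concretely, that is just a Swank server started from the app (the port is arbitrary; reach it through an SSH tunnel rather than exposing it publicly):

(swank:create-server :port 4005 :dont-close t)
;; locally: ssh -L 4005:localhost:4005 my-server, then M-x slime-connect to localhost 4005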

It is possible to reload the whole app from within itself and I did it with no hiccups for a couple of years, but it isn’t necessarily the most reliable, easiest to set up or fastest method. You can do it, but nobody forces you to do this just because you are running CL in production. You can use the industry’s boring best practices too. Common Lisp doesn’t enforce a “big ball of mud” approach. Develop locally, use Git, use a CI, deploy a binary...

Everything that I learned, I documented along the way in the Cookbook ;)

Another app that I’ll mention but about which I also wrote earlier is my first web app. This one is open-source. It still runs :)

 

In this project I had my friend and colleague contribute five lines of Lisp code to add a theme switcher in the backend that would help him do the frontend. He had never written a line of Lisp before. Of course, he did so by looking at my existing code to learn the existing functions at hand, and he could do it because the project was easy to install and run.

(defun get-template (template &optional (theme *theme*))
  "Loads template from the base templates directory or from the given theme templates directory if it exists."
  (if (and (str:non-blank-string-p theme)
           (probe-file (asdf:system-relative-pathname "abstock" (str:concat "src/templates/themes/" theme "/" template))))
      ;; then
      (str:concat "themes/" theme "/" template)
      ;; else :D
      template))

He had to annotate the if branches :] This passed the code review.

Lasting words

The 5th script/app is already on the way, and the next ones are waiting for me to open their .docx specification files. This one was a bit harder, but the Lisp side was done successfully with the efficient collaboration of another freelance lisper (Kevin, to not name him).

All those tasks (read a DB, transform data...) are very mundane.

They are everywhere. They don’t always need a supercharged web framework or integrations.

You have plenty of opportunities to do yourself a favor and use Common Lisp in the wild. Not counting the super-advanced domains where Lisp excels ;)


Links

I have done some preliminary Common Lisp exploration prior to this course but had a lot of questions regarding practical use and development workflows. This course was amazing for this! I learned a lot of useful techniques for actually writing the code in Emacs, as well as conversational explanations of concepts that had previously confused me in text-heavy resources. Please keep up the good work and continue with this line of topics, it is well worth the price! [Preston, October of 2024]

TurtleWareDynamic Vars - A New Hope

· 71 days ago

Table of Contents

  1. Dynamic Bindings
  2. The problem
  3. The solution
  4. Dynamic slots
  5. The context
  6. Summary

Dynamic Bindings

Common Lisp has an important language feature called dynamic binding. It is possible to rebind a dynamic variable somewhere on the call stack and downstream functions will see that new value, and when the stack is unwound, the old value is brought back.

While Common Lisp does not specify multi-threading, it seems to be a consensus among various implementations that dynamic bindings are thread-local, allowing for controlling the computing context in a safe way.

Before we start experiments, let's define a package to isolate our namespace:

(defpackage "EU.TURTLEWARE.BLOG/DLET"
  (:local-nicknames ("MOP" #+closer-mop "C2MOP"
                           #+(and (not closer-mop) ecl) "MOP"
                           #+(and (not closer-mop) ccl) "CCL"
                           #+(and (not closer-mop) sbcl) "SB-MOP"))
  (:use "CL"))
(in-package "EU.TURTLEWARE.BLOG/DLET")

Dynamic binding of variables is transparent to the programmer, because the operator LET is used for both lexical and dynamic bindings. For example:

(defvar *dynamic-variable* 42)

(defun test ()
  (let ((*dynamic-variable* 15)
        (lexical-variable 12))
    (lambda ()
      (print (cons *dynamic-variable* lexical-variable)))))

(funcall (test))
;;; (42 . 12)

(let ((*dynamic-variable* 'xx))
  (funcall (test)))
;;; (xx . 12)
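
Dynamic bindings being thread-local (as mentioned above) can be illustrated as well; this sketch assumes bordeaux-threads is loaded and relies on the common behavior that a new thread starts from the variable's global value:

(let ((*dynamic-variable* :parent-binding))
  (bt:make-thread (lambda () (print *dynamic-variable*))))
;;; the new thread prints 42, not :parent-binding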

Additionally the language specifies a special operator PROGV that gives the programmer control over the dynamic binding mechanism by allowing the dynamic variable to be passed by value instead of by name. Dynamic variables are represented by symbols:

(progv (list '*dynamic-variable*) (list 'zz)
  (funcall (test)))
;;; (zz . 12)

The problem

Nowadays it is common to encapsulate state in the instance of a class. Sometimes that state is dynamic. It would be nice if we could use dynamic binding to control it. That said, slots are not variables, and if there are many objects of the same class with different states, then using dynamic variables defined with DEFVAR is not feasible.

Consider the following classes which we want to be thread-safe:

(defgeneric call-with-ink (cont window ink))

(defclass window-1 ()
  ((ink :initform 'red :accessor ink)))

(defmethod call-with-ink (cont (win window-1) ink)
  (let ((old-ink (ink win)))
    (setf (ink win) ink)
    (unwind-protect (funcall cont)
      (setf (ink win) old-ink))))

(defclass window-2 ()
  ())

(defvar *ink* 'blue)
(defmethod ink ((window window-2)) *ink*)

(defmethod call-with-ink (cont (win window-2) ink)
  (let ((*ink* ink))
    (funcall cont)))

The first example is clearly not thread safe. If we access the WINDOW-1 instance from multiple threads, then they will overwrite a value of the slot INK.

The second example is not good either, because when we have many instances of WINDOW-2 then they share the binding. Nesting CALL-WITH-INK will overwrite the binding of another window.

The solution

The solution is to use PROGV:

(defclass window-3 ()
  ((ink :initform (gensym))))

(defmethod initialize-instance :after ((win window-3) &key)
  (setf (symbol-value (slot-value win 'ink)) 'red))

(defmethod call-with-ink (cont (win window-3) ink)
  (progv (list (slot-value win 'ink)) (list ink)
    (funcall cont)))

This way each instance has its own dynamic variable that may be rebound with the designated operator CALL-WITH-INK. It is thread-safe and private. We may add some syntactic sugar so it is more similar to LET:

(defmacro dlet (bindings &body body)
  (loop for (var val) in bindings
        collect var into vars
        collect val into vals
        finally (return `(progv (list ,@vars) (list ,@vals)
                           ,@body))))

(defmacro dset (&rest pairs)
  `(setf ,@(loop for (var val) on pairs by #'cddr
                 collect `(symbol-value ,var)
                 collect val)))

(defmacro dref (variable)
  `(symbol-value ,variable))
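
A quick check that the sugar behaves as expected with WINDOW-3 defined above (a sketch):

(let ((win (make-instance 'window-3)))
  (dlet (((slot-value win 'ink) 'green))
    (print (dref (slot-value win 'ink))))  ; prints GREEN
  (print (dref (slot-value win 'ink))))    ; prints RED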

Dynamic slots

While meta-classes are not easily composable, it is worth noting that we can mold this better into the language by specifying that the slot itself has a dynamic value. This way CLOS aficionados will have a new tool in their arsenal.

The approach we'll take is that a fresh symbol is stored as the value of each instance-allocated slot, and then accessors for the slot value will use these symbols as a dynamic variable. Here are low-level accessors:

;;; Accessing and binding symbols behind the slot. We don't use SLOT-VALUE,
;;; because it will return the _value_ of the dynamic variable, and not the
;;; variable itself.
(defun slot-dvar (object slotd)
  (mop:standard-instance-access
   object (mop:slot-definition-location slotd)))

(defun slot-dvar* (object slot-name)
  (let* ((class (class-of object))
         (slotd (find slot-name (mop:class-slots class)
                      :key #'mop:slot-definition-name)))
    (slot-dvar object slotd)))

(defmacro slot-dlet (bindings &body body)
  `(dlet ,(loop for ((object slot-name) val) in bindings
                 collect `((slot-dvar* ,object ,slot-name) ,val))
     ,@body))

Now we'll define the meta-class. We need it to specialize the functions responsible for processing slot definitions and for the instance allocation. Notice that we make use of a kludge to communicate between COMPUTE-EFFECTIVE-SLOT-DEFINITION and EFFECTIVE-SLOT-DEFINITION-CLASS: this is because the latter has no access to the direct slot definitions.

;;; The metaclass CLASS-WITH-DYNAMIC-SLOTS specifies alternative effective slot
;;; definitions for slots with an initarg :dynamic.
(defclass class-with-dynamic-slots (standard-class) ())

;;; Class with dynamic slots may be subclasses of the standard class.
(defmethod mop:validate-superclass ((class class-with-dynamic-slots)
                                    (super standard-class))
  t)

;;; When allocating the instance we initialize all slots to a fresh symbol that
;;; represents the dynamic variable.
(defmethod allocate-instance ((class class-with-dynamic-slots) &rest initargs)
  (declare (ignore initargs))
  (let ((object (call-next-method)))
    (loop for slotd in (mop:class-slots class)
          when (typep slotd 'dynamic-effective-slot) do
            (setf (mop:standard-instance-access
                   object
                   (mop:slot-definition-location slotd))
                  (gensym (string (mop:slot-definition-name slotd)))))
    object))

;;; To improve potential composability of CLASS-WITH-DYNAMIC-SLOTS with other
;;; metaclasses we treat specially only slots that has :DYNAMIC in initargs,
;;; otherwise we call the next method.
(defmethod mop:direct-slot-definition-class
    ((class class-with-dynamic-slots) &rest initargs)
  (loop for (key val) on initargs by #'cddr
        when (eq key :dynamic)
          do (return-from mop:direct-slot-definition-class
               (find-class 'dynamic-direct-slot)))
  (call-next-method))

;;; The metaobject protocol did not specify an elegant way to communicate
;;; between the direct slot definition and the effective slot definition.
;;; Luckily we have dynamic bindings! :-)
(defvar *kludge/mop-deficiency/dynamic-slot-p* nil)
(defmethod mop:compute-effective-slot-definition
    ((class class-with-dynamic-slots)
     name
     direct-slotds)
  (if (typep (first direct-slotds) 'dynamic-direct-slot)
      (let* ((*kludge/mop-deficiency/dynamic-slot-p* t))
        (call-next-method))
      (call-next-method)))

(defmethod mop:effective-slot-definition-class
    ((class class-with-dynamic-slots) &rest initargs)
  (declare (ignore initargs))
  (if *kludge/mop-deficiency/dynamic-slot-p*
      (find-class 'dynamic-effective-slot)
      (call-next-method)))

Finally we define a direct and an effective slot classes, and specialize slot accessors that are invoked by the instance accessors.

;;; There is a considerable boilerplate involving customizing slots.
;;;
;;; - direct slot definition: local to a single defclass form
;;;
;;; - effective slot definition: combination of all direct slots with the same
;;;   name in the class and its superclasses
;;;
(defclass dynamic-direct-slot (mop:standard-direct-slot-definition)
  ((dynamic :initform nil :initarg :dynamic :reader dynamic-slot-p)))

;;; DYNAMIC-EFFECTIVE-SLOT is implemented so that SLOT-VALUE returns the value
;;; of the dynamic variable that is stored with the instance.
;;;
;;; It would be nice if we could specify :ALLOCATION :DYNAMIC for the slot, but
;;; then STANDARD-INSTANCE-ACCESS would go belly up. We could make a clever
;;; workaround, but who cares?
(defclass dynamic-effective-slot (mop:standard-effective-slot-definition)
  ())

(defmethod mop:slot-value-using-class
    ((class class-with-dynamic-slots)
     object
     (slotd dynamic-effective-slot))
  (dref (slot-dvar object slotd)))

(defmethod (setf mop:slot-value-using-class)
    (new-value
     (class class-with-dynamic-slots)
     object
     (slotd dynamic-effective-slot))
  (dset (slot-dvar object slotd) new-value))

(defmethod mop:slot-boundp-using-class
  ((class class-with-dynamic-slots)
   object
   (slotd dynamic-effective-slot))
  (boundp (slot-dvar object slotd)))

(defmethod mop:slot-makunbound-using-class
  ((class class-with-dynamic-slots)
   object
   (slotd dynamic-effective-slot))
  (makunbound (slot-dvar object slotd)))

With this, we can finally define a class with slots that have dynamic values. What's more, we may bind them like dynamic variables.

;;; Let there be light.
(defclass window-4 ()
  ((ink :initform 'red :dynamic t :accessor ink)
   (normal :initform 'normal :accessor normal))
  (:metaclass class-with-dynamic-slots))

(let ((object (make-instance 'window-4)))
  (slot-dlet (((object 'ink) 15))
    (print (ink object)))
  (print (ink object)))

ContextL provides a similar solution with dynamic slots, although it provides much more, like layered classes. This example is much more self-contained.

The context

Lately I've been working on the repaint queue for McCLIM. While doing so I've decided to make stream operations thread-safe, so that it is possible to draw on the stream and write to it asynchronously from arbitrary threads. Access to the output record history clearly needs to be locked, and that can be solved with a mutex. The graphics state is another story; consider the following functions running from separate threads:

(defun team-red ()
  (with-drawing-options (stream :ink +dark-red+)
    (loop for i from 0 below 50000 do
      (write-string (format nil "XXX: ~5d~%" i) stream))))

(defun team-blue ()
  (with-drawing-options (stream :ink +dark-blue+)
    (loop for i from 0 below 50000 do
      (write-string (format nil "YYY: ~5d~%" i) stream))))

(defun team-pink ()
  (with-drawing-options (stream :ink +deep-pink+)
    (loop for i from 0 below 25000 do
      (case (random 2)
        (0 (draw-rectangle* stream 200 (* i 100) 250 (+ (* i 100) 50)))
        (1 (draw-circle* stream 225 (+ (* i 100) 25) 25))))))

(defun gonow (stream)
  (window-clear stream)
  (time (let ((a (clim-sys:make-process #'team-red))
              (b (clim-sys:make-process #'team-blue))
              (c (clim-sys:make-process #'team-pink)))
          (bt:join-thread a)
          (bt:join-thread b)
          (bt:join-thread c)
          (format stream "done!~%"))))

Operations like WRITE-STRING and DRAW-RECTANGLE can be implemented by holding a lock over the shared resource without much disruption. The drawing color, on the other hand, is set outside of the loop, so if we had protected the graphics state with a lock, these functions would be serialized despite being called from different processes. The solution to this problem is to make the graphics context a dynamic slot that is accessed with WITH-DRAWING-OPTIONS.
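
As a rough sketch of that idea (this is not McCLIM's actual medium class; SKETCH-MEDIUM, MEDIUM-INK and WITH-INK are made-up names), the metaclass and SLOT-DLET defined above are enough to give each thread its own binding of the ink:

;;; Hypothetical sketch only -- not the real McCLIM implementation.
(defclass sketch-medium ()
  ((ink :initform 'black :dynamic t :accessor medium-ink))
  (:metaclass class-with-dynamic-slots))

(defmacro with-ink ((medium ink) &body body)
  ;; Rebind the INK slot dynamically; the binding is per-thread and unwinds
  ;; when BODY exits, so concurrent drawing loops don't clobber each other.
  `(slot-dlet (((,medium 'ink) ,ink))
     ,@body))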

Summary

I hope that I've convinced you that dynamic variables are cool (I'm sure that the majority of readers here are already convinced), and that dynamic slots are even cooler :-). Look forward to the upcoming McCLIM release!

If you like technical writeups like this, please consider supporting me on Patreon.

Scott L. BursonComparison: FSet vs. Sycamore

· 75 days ago

[BULLETIN: Quicklisp now has the latest version of FSet.]

Sycamore, primarily by Neil Dantam, is a functional collections library that is built around the same weight-balanced binary tree data structure (with leaf vectors) that FSet uses.  While the README on that page comments briefly on the differences between Sycamore and FSet, I don't feel that it does FSet justice.  Here is my analysis.

Dantam claims that his library is 30% to 50% faster than FSet on common operations.  While I haven't done comprehensive micro-benchmarking, a couple of quick tests indicate that this claim is plausible.  A look through the internals of the implementation confirms that it is clean and tight, and I must commend him.  There may be some techniques in here that I could usefully borrow.

Most of the performance difference is necessitated by two design choices that were made differently in the two libraries.  One of these Dantam mentions in his comparison: FSet's use of a single, global ordering relation implemented as a CLOS generic function, vs. Sycamore's more standard choice of requiring a comparison function to be supplied when a collection is created.  The other one he doesn't mention: the fact that FSet supports a notion of equivalent-but-unequal values, which are values that are incomparable — there's no way, or at least no obvious way, to say which is less than the other, and yet we want to treat them as unequal.  The simplest example is the integer 1 and the single-float 1.0, which have equal numerical values (and cl:= returns true on them), but which are nonetheless not eql.  (I have a previous blog post that goes into a lot more detail about equality and comparison.)  Since Sycamore expects the user-supplied comparison function to return an integer that is negative, zero, or positive to indicate the ordering of its arguments, there's no encoding for the equivalent-but-unequal case, nor is there any of the code that would be required to handle that case.
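
To make the contrast concrete, here is a small illustration of my own (using FSet's comparison keywords as I understand them): FSET:COMPARE signals the extra case with a fourth result, :UNEQUAL, alongside :LESS, :EQUAL, and :GREATER.

 ;; 1 and 1.0 are numerically equal but not EQL, so FSet keeps them distinct.
 (fset:compare 1 2)    ; => :less
 (fset:compare 1 1)    ; => :equal
 (fset:compare 1 1.0)  ; => :unequal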

Both of these decisions were driven by my goal for the FSet project.  I didn't just want to provide a functional collections library that could be called occasionally when one had a specific need for such a data structure.  My ambition was much grander: to make functional collections into a reasonable default choice for the vast majority of programming situations.  I wanted FSet users (including, of course, myself) to be able to use functional collections freely, with very little extra effort or thought.  While Lisp by itself reaches a little bit in this direction — lists can certainly be used functionally — lists used as functional collections run into severe time complexity problems as those collections get large.  I wanted the FSet collections to be as convenient and well-supported as lists, but without the time complexity issues.

— Or rather, I wanted them to be even more convenient than lists.  Before writing FSet, I had spent years working in a little-known proprietary language called Refine, which happened to be implemented on top of Common Lisp, so it was not unusual to switch between the two languages.  And I had noticed something.  In contrast to CL, with its several different predefined equality predicates and with its functions that take :test arguments to specify which one to use, Refine has a single notion of equality.  The value space is cleanly divided between immutable types, which are compared by value — along with numbers, these include strings, sets, maps, and seqs — and mutable objects, which are always compared by identity.  And it worked!  I found I did not miss the ability to specify an equality predicate when performing an operation such as "union".  It was just never needed.  Get equality right at the language level, and the problem goes away.

Although FSet's compare generic function isn't just for equality — it also defines an ordering that is used by the binary trees — I thought it would probably turn out to be the case that a single global ordering, implemented as a generic function and therefore extensible, would be fine the vast majority of the time.  I think experience has borne this out.  And just as you can mix types in Lisp lists — say, numbers and symbols — without further thought, so you can have any combination of types in an FSet set, effortlessly.  (A project I'm currently working on actually takes considerable advantage of this capability.)
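
For example (a small sketch of mine), nothing special is needed to build a heterogeneous set:

 ;; An integer, a symbol, a string, and a float in one set; the global
 ;; ordering defined by FSET:COMPARE covers all of them.
 (fset:set 1 'foo "bar" 3.5)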

As for supporting equivalent-but-unequal values, this desideratum flows directly from the principle of least astonishment.  While it might not be too surprising for a set or map implementation to fail to distinguish the integer 1 from the float 1.0, it certainly would be very surprising, and almost certainly a source of bugs in a compiler that used it, for it to fail to distinguish two uninterned symbols with the same name.  (I saw a macro expansion recently that contained two distinct symbols that both printed as #:NEW.  It happens.)  A compiler using Sycamore for a map on symbols would have to supply a comparison function that accounted for this; it couldn't just compare the package name and symbol name.  (You'd have to do something like keep a weak hash table mapping symbols to integers, assigned in the order in which the comparison function encountered them.  It's doable, but FSet protects you from this madness.)

Along with those deep semantic design choices, I've spent a lot of time on developing a wide and featureful API for FSet (an effort that's ongoing).  FSet has many features that Sycamore lacks, including:

  • seqs, a binary-tree sequence implementation that holds arbitrary Lisp objects (Sycamore ropes hold only characters, which is certainly an important special case, but why restrict ourselves?)
  • default values for maps and seqs (the value to return for an invalid key is associated with the collection, not supplied at the call site; this turns out to be a significant convenience)
  • generic functions that operate on both lists and FSet collections, to shadow the CL builtins
  • the powerful map-union and map-intersection operations (I'll blog about these in the future)
  • more ways to iterate over the collections (the FSet tutorial has a good summary, about 3/4 of the way down)
  • speaking of the tutorial, FSet has lots more documentation

Let me digress slightly to give an example of how FSet makes programming more elegant and convenient.  Joe Marshall just put up a blog post comparing Go(lang) with Common Lisp, which is worth a read on its own; I'm just going to grab a code snippet from there to show a little bit of what programming with FSet is like.  Here's Joe's code:

 (defun collate (items &key (key #'identity) (test #'eql) (merger (merge-adjoin #'eql)) (default nil))
   (let ((table (make-hash-table :test test)))
     (dolist (item items table)
       (let ((k (funcall key item)))
         (setf (gethash k table) (funcall merger (gethash k table default) item))))))

 (defun merge-adjoin (test)
   (lambda (collection item)
     (adjoin item collection :test test)))

And here's what I would write using FSet:

 (defun collate (items &key (key #'identity))
   (let ((result (map :default (set))))
     (dolist (item items result)
       (includef (@ result (funcall key item)) item))))

(Well, I would probably move result outside the dolist form to make it clearer what the return value is, but let's go with Joe's stylistic choice here.)

For those who haven't used FSet: the form (map :default (set)) creates a map whose default is the empty set, meaning that lookups on that map will return the empty set if the key is not in the map.  This saves the includef form from having to handle that possibility.
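
Here is a quick usage sketch of my own (the printed result is approximate), collating a few strings by their first character:

 ;; Each key maps to an FSet set of the matching strings.
 (collate '("apple" "avocado" "banana" "cherry")
          :key (lambda (s) (char s 0)))
 ;; => a map from #\a to { "apple" "avocado" }, #\b to { "banana" }, etc.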

My version makes assumptions, it's true, about how you want to collect the items with a given key; it doesn't give you other choices.  It could, but what would be the point?  It's already using a general set with better time complexity than lists, and saving you from having to write anything like merge-adjoin.  The extensible global equivalence relation means you're not going to need to supply a :test either.

I think the FSet-enhanced code is cleaner, more elegant, and therefore clearer than the plain-CL version.  Don't you agree?  Maybe you wouldn't say it's a huge improvement, okay, but it's a small example; in a larger codebase, I would argue, these small improvements add up.

* * * * *

To summarize: if you just want a library you can call in a few places for specific purposes, Sycamore might work better for you (but think hard if you're writing a comparator for symbols).  FSet can certainly be used that way, but it can be much more.  If you want to see one way in which Common Lisp can be made into a better language, without giving up anything that we love about it, I urge you to give FSet a try.

FSet has changed the way I write Lisp programs.  — an FSet user

(UPDATE: the magnitude of the performance difference between FSet and Sycamore surprised me, and inspired me to do some profiling of FSet.  It turned out that I could get a 20% speedup on one micro-benchmark simply by adding some inline declarations.  Mea culpa, mea culpa, mea maxima culpa; I should have done this years ago.   With that change, the generic function overhead appears to be the only significant cause of the remaining ~20% performance difference.  I tried creating a Sycamore set using a thin wrapper around fset:compare, and the resulting performance was very similar to that of FSet with its new inlines.)

Joe MarshallLisp vs. golang

· 76 days ago

It's no secret that I'm an aficionado of Lisp. It's my go to language, especially when I don't know what I'm doing. I call it research and prototyping, but it's really just playing around until something works.

We had a need for some auditing of some of our databases at work. They ought to agree with each other and with what GitHub and CircleCI think. It took a couple of weeks part time to prototype a solution in Common Lisp. It showed that the databases were in 99% agreement and found the few points of disagreement and anomalies that we ought to fix or look out for.

I want to integrate this information into a dashboard on one of our tools. I prototyped this by spinning up a Common Lisp microservice that returns the information in JSON format.

But management prefers that new services are written in golang. It would be easier for me to rewrite the service in golang than to try to persuade others to use Common Lisp. It also gives me the opportunity to compare the two languages head to head on a real world problem.

No, this is not a fair comparison. When I wrote the Lisp code I was exploring the problem space and prototyping. I'm much more experienced with Lisp than with golang. The golang version has the advantage that I know what I want to do and how to do it. In theory, I can just translate the Common Lisp code into golang. But then again, this is a “second system” which is not a prototype and has slightly larger scope and fuller requirements. So this cannot be a true head to head comparison.

The first point of comparison is macros (or the lack thereof). I generally don't use a lot of macros in Common Lisp, but they come in handy when I do use them. One macro I wrote is called audit-step, which you can wrap around any expression; it prints out a message before and after the expression is evaluated. The steps are numbered in sequence, and nested steps get nested numbers (like step 2.3.1). If you wrap the major function bodies with this macro, you get a nice trace of the call sequence in the log.
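
To make that concrete, here is a minimal sketch of such a macro (my reconstruction, not the actual audit-step; the variable names are made up). It keeps the nested step path in a special variable and prints a message before and after the wrapped body:

(defvar *audit-path* '()
  "Current nested step path; (2 3 1) prints as step 2.3.1.")
(defvar *audit-counter* 0
  "Number of steps already started at the current nesting level.")

(defmacro audit-step (name &body body)
  ;; Number this step relative to its parent, print a begin message, run
  ;; BODY (children restart their numbering at 1), then print an end message
  ;; while passing the body's values through.
  `(let ((*audit-path* (append *audit-path* (list (incf *audit-counter*))))
         (*audit-counter* 0))
     (format t "~&step ~{~d~^.~}: begin ~a~%" *audit-path* ,name)
     (multiple-value-prog1 (progn ,@body)
       (format t "~&step ~{~d~^.~}: end ~a~%" *audit-path* ,name))))

Wrapping nested calls then prints step 1, step 1.1, step 1.2, and so on.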

Golang doesn't have macros, but it has first class functions. It's easy enough to write a function that takes a function as an argument and wraps it to output the trace messages. In fact, the macro version in Common Lisp just rewrites the form into such a function call. But the macro version hides a level of indentation and a lambda. In golang, my major functions all start with

func MajorFunction (args) int {
        return AuditStep("MajorFunction", "aux message", func() int {
                // body of MajorFunction
                // Actual code goes here.
        })    
}

The bodies of all my major functions are indented by 16 spaces, which is a little much.

I like higher order functions. I can write one higher order function and parameterize it with functions that handle the specific cases. In my auditing code, one such workhorse function is called collate. It takes a list of objects and creates a table that maps values to all objects in the list that contain that value. To give an example, imagine you have a list of objects that all have a field called foo. The foo field is a string. The collate function can return a table that maps strings to all objects that have that string in the foo field.

collate is very general. It takes a list of objects and four keyword arguments. The :key argument is a function that extracts the value to collate on. The :test argument is a function that compares two keys (it defaults to eql if not specified). The :merger argument is a function to add the mapped object to its appropriate collection in the table (it defaults to adjoin). The :default argument specifies the initial value of a collection in the table (it defaults to nil).

The :merger function is the most interesting. It takes the current value of the table at the object's key and the object itself, and it returns the new value to store at that key. The default merger function is adjoin, which adds the object to the collection at the key if it is not already there. But you can specify a different merger function. For example, if you want to count the number of objects at each key, you can specify a merger function that increments a counter.
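
Such a counting merger might look like this (a sketch of mine; ITEMS and FOO are placeholders, and :TEST is EQUAL on the assumption that the keys are strings):

(collate items
         :key #'foo
         :test #'equal
         :default 0
         :merger (lambda (count item)
                   ;; Ignore the item itself; just bump the count for its key.
                   (declare (ignore item))
                   (1+ count)))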

The functional arguments to the collate function are often the results of other higher order functions. For example, the :key argument is often the result of composing selector functions. The :merger argument is often the result of composing a binary merge function with a unary transformer function. The transformer function is often the result of composing a number of primitive selectors and transformers.

In Common Lisp, it is quite easy to write these higher order functions. We can compose two unary functions with the compose2 function:

(defun compose2 (f g)
  (lambda (x) (funcall f (funcall g x))))

and then compose as many functions as we like by fold-left of compose2 starting with the identity function:

(defun compose (&rest fs)
  (fold-left #'compose2 #'identity fs))
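
Note that fold-left is not standard Common Lisp; a minimal stand-in (my assumption about its argument order, based on the call above) can be written in terms of reduce:

(defun fold-left (function initial list)
  ;; Left fold: (f (f (f initial x1) x2) ... xn).
  (reduce function list :initial-value initial))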

We can compose a binary function with a unary function in three ways: we can pipe the output of the binary function into the unary function, or we can pipe the output of the unary function into one or the other of the inputs of the binary function.

(defun binary-compose-output (f g)
  (lambda (x y) (funcall f (funcall g x y))))

(defun binary-compose-left (f g)
  (lambda (x y) (funcall f (funcall g x) y)))

(defun binary-compose-right (f g)
  (lambda (x y) (funcall f x (funcall g y))))

The collate function can now assume that a lot of the work is done by the :key and :merger functions that are passed in. It simply builds a hash table and fills it:

(defun collate (items &key (key #'identity) (test #'eql) (merger (merge-adjoin #'eql)) (default nil))
  (let ((table (make-hash-table :test test)))
    (dolist (item items table)
      (let ((k (funcall key item)))
        (setf (gethash k table) (funcall merger (gethash k table default) item))))))

(defun merge-adjoin (test)
  (lambda (collection item)
    (adjoin item collection :test test)))

So suppose, for example, that we have a list of records. Each record is a three element list. The third element is a struct that contains a string. We want a table mapping strings to the two element lists you get when you strip out the struct. This is easily done with collate:

(collate records
  :key (compose #'get-string #'third)
  :test #'equal      ; or #'string= if you prefer
  :merger (binary-compose-right (merge-adjoin #'equal) #'butlast))

The audit code reads lists of records from the database and from GitHub and from CircleCI and uses collate to build hash tables we can use to quickly walk and validate the data.

Translating this into golang isn't quite so easy. Golang has first class functions, true, but golang is a statically typed language. This causes two problems. First, the signature of the higher order functions includes the types of the arguments and the return value. This means you cannot just slap on the lambda symbol; you have to annotate each argument and the return value. This is far more verbose. Second, higher order functions map onto parameterized (generic) types. Generic type systems come with their own little constraint language so that the computer can figure out what concrete types can correctly match the generic types. This makes higher order functions fairly unwieldy.

Consider compose2. The functions f and g each have an input and output type, but the output type of g is the input type of f, so only three types are involved:

func Compose2[T any, U any, V any](f func(U) V, g func(T) U) func(T) V {
	return func(x T) V {
		return f(g(x))
	}
}

If we want to compose three functions, we can write this:

func Compose3[T any, U any, V any, W any](f func(V) W, g func(U) V, h func(T) U) func(T) W {
	return func(x T) W {
		return f(g(h(x)))
	}
}

The generic type specifiers take up as much space as the code itself.

I don't see a way to write an n-ary compose function. It would have to be dynamically parameterized by the intermediate types of all the functions it was composing.

For the collate function, we can write this:

func Collate[R any, K comparable, V any](
	list *Cons[R],
	keyfunc func(R) K,
	merger func(V, R) V,
	defaultValue V) map[K]V {
	answer := make(map[K]V)
	for list != nil {
		key := keyfunc(list.Car)
		probe, ok := answer[key]
		if !ok {
			probe = defaultValue
		}
		answer[key] = merger(probe, list.Car)
		list = list.Cdr
	}
	return answer
}

We have three types to parameterize over: the type of the list elements (i.e. the record type) R, the type of the key K, and the type of the value V. The key type needs to be constrained to be a valid key in a map, so we use the comparable constraint. Now that we have the types, we can annotate the arguments and return value. The list we are collating is a list of R elements. The key function takes an R and returns a K. The merger takes an existing value of type V and the record of type R and returns a new value of type V.

The magic of type inference means that I do not have to annotate all the variables in the body of the function, but the compiler cannot read my mind and infer the types of the arguments and return value. Golang forces you to think about the types of arguments and return values at every step of the way. Yes, one should be aware of what types are being passed around, but it is a burden to have to formally specify them at every step. I could write the Common Lisp code without worrying too much about types. Of course the types would have to be consistent at runtime, but I could write the code just by considering what was connected to what. In golang, the types are in your face at every function definition. You not only have to think about what is connected to what, you have to think about what sort of thing is passed through the connection.

I'm sure that many would argue that type safety is worth the trouble of annotation. I don't want to argue that it isn't. But the type system is cumbersome, awkward, and unwieldy, especially when you are trying to write higher order functions.

It is taking me longer to write the golang version of the audit service than it did to write the Common Lisp version. There are several reasons. First, I am more experienced with Common Lisp than golang, so the right Common Lisp idioms just come to mind. I have to look up many of the golang idioms. Second, the golang code is trying to do more than the Common Lisp code. But third, golang itself introduces more friction than Common Lisp. Programs have to do more than express the algorithm, they have to satisfy the type system.

There are more points of comparison between the two languages. When I get frustrated enough, I'll probably write another post.

Quicklisp newsOctober 2024 Quicklisp dist update now available

· 77 days ago

 New projects: 

  • adp-github — ADP extension to generate github markdown files. — MIT
  • adp-plain — Add Documentation, Please... using plain text. An extension of ADP to generate files with barely additional features. — MIT
  • allioli — Alliolification — MIT
  • alternate-asdf-system-connections — Allows for ASDF system to be connected so that auto-loading may occur. This is a fork of asdf-system-connections and incorporates a load-system-driven mechanism for loading dependencies and also loads the dependencies of the connections. — MIT
  • cbor — CBOR encoder/decoder — MIT
  • charje.documentation — Documentation is an opinionated yet customizable docstring parsing library. — AGPL V3 or any later version
  • chipi — House automation bus in Common Lisp — Apache-2
  • cl-aseprite — Aseprite file format parser — GPLv3
  • cl-astar — A heavily optimized yet flexible A* pathfinding algorithm implementation — MIT
  • cl-ceigen-lite — A Common Lisp wrapper around CEIGEN-LITE - which is itself a C wrapper around the C++ Eigen library. — MIT
  • cl-cf — Computations using continued fractions — GPL-3
  • cl-concord — CONCORD implementation based on Common Lisp — LGPL
  • cl-duckdb — CFFI wrapper around the DuckDB C API — MIT License
  • cl-fastcgi — FastCGI wrapper for Common Lisp — BSD License
  • cl-flx — Rewrite emacs-flx in Common Lisp — MIT
  • cl-frugal-uuid — Common Lisp UUID library with zero dependencies — MIT License
  • cl-gog-galaxy — A wrapper for the GOG Galaxy SDK — zlib
  • cl-lc — List comprehensions — MIT
  • cl-naive-ptrees — Functions to make it easier to work with plist(s) and plist trees. Works with plist(s) pairs as units and not as individual list items. — MIT
  • cl-qoa — An implementation of the Quite Okay Audio format. — zlib
  • cl-reddit — Reddit client api library — BSD
  • cl-resvg — An up-to-date bindings library for the resvg SVG rendering library — zlib
  • cl-trivial-clock — Common Lisp library to get accurate wall-clock times on multiple platforms — MIT License
  • clack-cors — A Clack middleware to set CORS related HTTP headers. — Unlicense
  • clack-prometheus — Clack middleware to serve stats in Prometheus format. — Unlicense
  • clith — Common Lisp wITH macro. A general WITH macro. — MIT
  • clj-arrows — Implements Clojure-styled threading/transformation macros. — MIT
  • clos-encounters — A collection of OOP patterns benefiting from the CLOS MOP. — Unlicense
  • coalton — An efficient, statically typed functional programming language that supercharges Common Lisp. — MIT
  • cocoas — A toolkit library to help deal with CoreFoundation, Cocoa, and objc — zlib
  • com.danielkeogh.graph — A fast an reliable graph library. — MIT
  • fast-mpsc-queue — Multi-Producer Single-Consumer queue implementation. — MIT
  • file-finder — File finder. Enable rapid file search, inspection and manipulation. — GPL3+
  • golden-utils — A utility library. — MIT
  • hiccl — HTML generator for Common Lisp — MIT
  • hsx — Hypertext S-expression — MIT
  • hunchentoot-stuck-connection-monitor — Monitors hunchentoot connections and logs the connections stuck in the same state for a long time (due to slow or inactive clients and network stream timeouts that hunchentoot tries to utilize not working properly). Offers an option to shutdown the stuck connections sockets manually or automatically, thus unblocking the connection threads and preventing thread and socket leak. See https://github.com/edicl/hunchentoot/issues/189 — BSD-2-Clause
  • incless — A portable and extensible Common Lisp printer implementation (core) — BSD
  • inravina — A portable and extensible Common Lisp pretty printer. — MIT
  • invistra — A portable and extensible Common Lisp FORMAT implementation — BSD
  • knx-conn — KNXnet/IP implementation in Common Lisp — GNU GPL, version 3
  • machine-state — Retrieve machine state information about CPU time, memory usage, etc. — zlib
  • myweb — simple web server written in common lisp for educational reasons — LGPLv3
  • noisy — Perlin noise for arbitrary numbers of dimensions. — MIT
  • nontrivial-gray-streams — A compatibility layer for Gray streams including extensions — MIT
  • open-with — Open a file in a suitable external program — zlib
  • openai-openapi-client — Openai API client — AGPLv3+
  • openrpc — CI for Common Lisp OpenRPC library. — BSD
  • parse-number-range — Parses LOOP's convenient "for-as-arithmetic" syntax into 5 simple values: from, to, limit-kind (:inclusive, :exclusive or nil if unbounded), by (step) and direction (+ or -)). Further related utilities are provided. Intended for easy implementation of analogous functionality in other constructs. — Public Domain
  • precise-time — Precise time measurements — zlib
  • pregexp — Portable regular expressions for Common Lisp — MIT-like
  • progressons — Display a progress bar on one line. — MIT
  • quaviver — A portable and extensible floating point string library — MIT
  • quilc — A CLI front-end for the Quil compiler — Apache License 2.0 (See LICENSE.txt)
  • qvm — An implementation of the Quantum Abstract Machine. — Apache License 2.0 (See LICENSE.txt)
  • random-sampling — Functions to generate random samples with various distributions — zlib
  • rs-dlx — Knuth's Algorithm X with dancing links. — Modified BSD License
  • scrapycl — The web scraping framework for writing crawlers in Common Lisp. — Unlicense
  • smoothers — Statistical methods to create approximating functions that attempt to capture important patterns in the data, while leaving out noise or other fine-scale structures/rapid phenomena. — MS-PL
  • trivial-adjust-simple-array — A tiny utility to change array size ensuring it is simple. — MIT
  • trivial-system-loader — A system installation/loading abstraction for Common Lisp — MIT
  • trivial-toplevel-commands — Trivial Toplevel Commands allows to define toplevel commands available on most implementations in a portable fashion. — BSD-3 Clause
  • trivial-toplevel-prompt — Portability library to customize REPL prompts. — BSD-3 Clause
  • utf8-input-stream — A UTF-8 string input stream over a binary stream for Common Lisp — MIT
  • whereiseveryone.command-line-args — Automatically create a command-line-argument parser for a given Common Lisp function definition. — AGPL v3 or any later version

Updated projects: 3b-bmfont, 3bgl-shader, 3bmd, 3d-math, 3d-spaces, 40ants-asdf-system, 40ants-slynk, access, acclimation, action-list, adhoc, adopt, adp, agnostic-lizard, alexandria, alexandria-plus, anatevka, anypool, april, arc-compat, architecture.builder-protocol, array-utils, arrow-macros, assoc-utils, async-process, atomics, auto-restart, aws-sdk-lisp, babel, bdef, bike, binary-structures, binding-arrows, birch, blackbird, bordeaux-threads, calm, carrier, caveman, ccldoc, cephes.cl, cepl, cerberus, cffi, cffi-object, cffi-ops, chanl, chunga, ci, ci-utils, ciao, cl-6502, cl-algebraic-data-type, cl-all, cl-ansi-term, cl-async, cl-atelier, cl-autowrap, cl-base32, cl-bmas, cl-bmp, cl-bnf, cl-brewer, cl-buchberger, cl-cmark, cl-collider, cl-colors2, cl-confidence, cl-containers, cl-cookie, cl-csv, cl-custom-hash-table, cl-cxx-jit, cl-data-structures, cl-dbi, cl-digraph, cl-dot, cl-enchant, cl-environments, cl-fast-ecs, cl-fbx, cl-fluent-logger, cl-form-types, cl-forms, cl-freetype2, cl-gamepad, cl-github-v3, cl-gltf, cl-gobject-introspection, cl-graph, cl-grip, cl-gserver, cl-hamcrest, cl-hash-util, cl-html-readme, cl-i18n, cl-info, cl-ini, cl-ipfs-api2, cl-kanren, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-log, cl-markless, cl-marshal, cl-migratum, cl-mixed, cl-modio, cl-mount-info, cl-mpg123, cl-mssql, cl-mustache, cl-mysql, cl-neovim, cl-netpbm, cl-oju, cl-opengl, cl-opensearch-query-builder, cl-opus, cl-patterns, cl-plus-ssl-osx-fix, cl-ppcre, cl-project, cl-protobufs, cl-pslib, cl-pslib-barcode, cl-rashell, cl-readline, cl-sat.minisat, cl-sdl2-image, cl-sdl2-mixer, cl-sdl2-ttf, cl-sendgrid, cl-sentry-client, cl-skkserv, cl-smtp, cl-ssh-keys, cl-steamworks, cl-str, cl-svg, cl-telegram-bot, cl-threadpool, cl-tiled, cl-torrents, cl-tqdm, cl-transducers, cl-transit, cl-unicode, cl-unification, cl-unix-sockets, cl-utils, cl-vectors, cl-vorbis, cl-wavefront, cl-webdriver-client, cl-webkit, cl-webmachine, cl-who, clack, clack-pretend, clad, classimp, clast, clath, clavier, clazy, clerk, clgplot, climacs, clingon, clip, clj-con, clj-re, clobber, clog, clog-ace, clog-collection, clog-plotly, clog-terminal, clohost, closer-mop, clss, cluffer, clunit2, clx, cmd, codata-recommended-values, codex, coleslaw, collectors, colored, com-on, common-lisp-jupyter, commondoc-markdown, compiler-macro-notes, conduit-packages, consfigurator, contextl, croatoan, ctype, cytoscape-clj, damn-fast-priority-queue, dartscluuid, data-frame, data-lens, datafly, dbus, decompress, defenum, definer, definitions, deflate, defmain, deploy, depot, deptree, dexador, dissect, djula, dns-client, doc, docs-builder, dsm, dufy, easter-gauss, easy-audio, easy-macros, easy-routes, eclector, equals, erjoalgo-webutil, erudite, esrap, event-emitter, external-program, external-symbol-not-found, fare-csv, fare-scripts, fast-http, fast-websocket, file-attributes, file-notify, file-select, filesystem-utils, fiveam, fiveam-matchers, flexi-streams, float-features, flow, fn, fset, functional-trees, fuzzy-dates, gadgets, generic-cl, github-api-cl, glfw, glsl-toolkit, harmony, hashtrie, helambdap, http2, hunchentoot, imago, in-nomine, inferior-shell, introspect-environment, ironclad, jose, js, json-mop, jsonrpc, jzon, khazern, lack, lass, lemmy-api, letv, lichat-protocol, lichat-tcp-client, linear-programming, lisp-binary, lisp-chat, lisp-critic, lisp-pay, lisp-stat, lispcord, lla, local-time, log4cl-extras, logging, lru-cache, magicl, maiden, maidenhead, manifolds, math, mcclim, memory-regions, messagebox, 
method-combination-utilities, mgl-pax, misc-extensions, mito, mk-defsystem, mmap, mnas-package, mnas-string, moira, multiposter, mutility, mutils, named-closure, ndebug, neural-classifier, new-op, nibbles, nibbles-streams, ningle, nodgui, north, numerical-utilities, nytpu.lisp-utils, omglib, ook, open-location-code, openapi-generator, orizuru-orm, overlord, papyrus, parachute, parse-number, pathname-utils, petalisp, phos, picl, plot, plump, plump-sexp, pngload, policy-cond, polymorphic-functions, postmodern, ppath, prometheus-gc, psychiq, purgatory, py4cl, py4cl2, py4cl2-cffi, qlot, qoi, query-fs, quick-patch, quickhull, quri, random-state, reblocks, reblocks-auth, reblocks-file-server, reblocks-lass, reblocks-navigation-widget, reblocks-parenscript, reblocks-prometheus, reblocks-typeahead, reblocks-ui, reblocks-websocket, rove, s-dot2, sandalphon.lambda-list, sb-fastcgi, sc-extensions, sel, select, serapeum, shasht, shop3, si-kanren, sketch, slime, slite, sly, snooze, spinneret, staple, static-vectors, statistics, stepster, stmx, stripe, swank-crew, swank-protocol, sxql, symath, system-locale, taglib, teddy, ten, testiere, tfeb-lisp-hax, tfm, tiny-routes, tooter, trivia, trivial-arguments, trivial-clipboard, trivial-file-size, trivial-gray-streams, trivial-main-thread, trivial-octet-streams, trivial-package-locks, trivial-package-manager, trivial-sanitize, trivial-shell, type-templates, typo, uax-15, uiop, usocket, vellum, vellum-binary, vellum-csv, vellum-postmodern, verbose, vernacular, vom, websocket-driver, winhttp, with-branching, with-contexts, woo, xhtmlambda, xml-emitter, yason, zippy, zpb-ttf.

Removed projects: abstract-arrays, ahungry-fleece, cl-cheshire-cat, cl-darksky, cl-epoch, cl-naive-store, convolution-kernel, dense-arrays, extensible-compound-types, extensible-optimizing-coerce, fast-generic-functions, flac-metadata, freebsd-ffi, listoflist, luckless, one-more-re-nightmare, postmodern-localtime, stumpwm-dynamic-float, stumpwm-sndioctl, unicly.

To get this update, use:

 (ql:update-dist "quicklisp")

Sorry this update took so long. My goal is to resume monthly releases.

Enjoy!

Patrick SteinRay Tracing In One Weekend (in Lisp, and n-dimensions)

· 96 days ago

Earlier this year, I started working through the online book Ray Tracing In One Weekend (Book 1). I have been following along with it in Common Lisp, and I have been extending it all from 3-dimensional to n-dimensional.

I reproduced 4-dimensional versions of all of the book images which you can see on my weekend-raytracer github page.

Here is the final image. This is a 250-samples-per-pixel, 640x360x10 image plane of three large hyperspheres (one mirrored, one diffuse, one glass) atop a very large, diffuse hypersphere. Also atop this very large hypersphere are a bunch of smaller hyperspheres of varying colors and materials. The image is rendered with some defocus-blur.

[Image: Final image of the 4-dimensional scene, described in detail in the post.]

Caveat: This depends on a patched version of the policy-cond library that is not in the current Quicklisp distribution but should be in the next.

The post Ray Tracing In One Weekend (in Lisp, and n-dimensions) first appeared on nklein software.

Eugene ZaikonnikovBreaking the Kernighan's Law

· 108 days ago

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.." — Brian W. Kernighan.

I'm a sucker for sage advice as much as anyone else, and Kernighan is certainly right on the money in the epigraph. Alas, there comes a time in a programmer's career when you just end up there despite the warning. It could be that you were indeed too clever for your own good, or maybe the code isn't quite yours anymore after each of your colleagues has taken a turn on it over the years. Or sometimes the problem is simply so hard that it strains your capacity as a coder.

It would usually start with a reasonable idea made into a first iteration of code. The solution looks fundamentally sound, but as you explore the problem space further it begins to seep nuance, either as a manifestation of some real-world complexity or of your own lack of foresight. When I run into this, my first instinct is to instrument the code. If the problem is formidable you have to respect it: flailing around blindly modifying things or, ugh, doing a rewrite at this stage is almost guaranteed to be a waste of time. It helps to find a promising spot, chisel at it, gain a foothold in the problem, and repeat until you crack it. Comfortable debugging tools here can really help to erode the original Kernighan coefficient from 2 to maybe 1.6 or 1.4, where you still have a chance.

Lisp users are fortunate with the options for interactive debugging, and one facility I often reach for is the plain BREAK. It's easy enough to wrap it in a conditional for the particular matches you want to debug. However, sometimes you want it to trigger only after a particular sequence of events across different positions in the code has taken place. While still doable, this quickly becomes cumbersome, and the state machine starts to occupy too much mental space, which is already scarce. So one day, partly as a displacement activity from being intimidated by a Really Hard Problem, I wrote down my debugging patterns as a handful of macros.

Enter BRAKE. Its features reflect my personal preferences, so they are not necessarily your cup of tea, but it could be a starting point to explore in this direction. Things it can do:

  • act as a simple BREAK with no arguments (duh)
  • wrap an s-expression, passing through its values upon continuing
  • trigger sequentially based on the specified position for a common tag
  • allow for marks that don't trigger the break but mark the position as reached
  • provide conditional versions for the expressions above
  • print traces of tagged breakpoints/marks

If you compile functions with debug on, you should hopefully be able to see the wrapped sexpr's result values.

(use-package '(brake))

(defun fizzbuzz ()
  (loop for n from 100 downto 0
	for fizz = (zerop (mod n 3))
	for buzz = (zerop (mod n 5)) do
	(format t "~a "
		(if (not (or fizz buzz))
		    (format nil "~d" n)
		  (brake-when (= n 0)
			      (concatenate 'string
					   (if fizz "Fizz" "")
					   (if buzz "Buzz" "")))))))

These macros try to detect common cases for tagged sequences being either aborted via break or completed to the last step, resetting them after to the initial state. However it is possible for a sequence to end up "abandoned", which can be cleaned up by a manual command.

Say in the example below we want to break when the first two branches have been triggered in a specific order. The sequence 1, 3, 4 will reinitialize once state 4 is reached, allowing it to trigger continuously. At the same time, if we blow our stack it should reset to the initial state when aborting.

(defun ack (m n)
  (cond ((zerop m) (mark :ack 3 (1+ n)))
        ((zerop n) (mark :ack 1 (ack (1- m) 1)))
        (t (brake :ack 4 (ack (1- m) (ack m (1- n)))))))

In addition there are a few utility functions to report on the state of brakepoints, enable or disable brakes based on tags, and turn tracing on or off. Tracing isn't meant to replace the semantics of TRACE but to provide a souped-up version of the debug-by-print-statements everyone loves.

CL-USER> (report-brakes)
Tag :M is DISABLED, traced, with 3 defined steps, current state is initial
Tag :F is DISABLED with 2 defined steps, current state is 0
Tag :ACK is ENABLED with 3 defined steps, current state is initial

Disabling breakpoints without recompilation is really handy and something I find myself using all the time. The ability to wrap a sexpr was often sorely missed when using BREAK in constructs without an implicit body.
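
For example (my own illustration; HEAVY-PREDICATE and friends are placeholders), an untagged BRAKE can sit in a value position and pass the wrapped expression's value through, where a bare BREAK would need an extra PROGN:

(if (brake (heavy-predicate x))   ; break here, then continue with the value
    (handle-true)
    (handle-false))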

Sequencing across threads is sketchy, as the code isn't guarded, but in many cases it can work, and its appeal in debugging races is clear. One of these days I hope to make it more robust while avoiding potential deadlocks, but it isn't there yet. Where it already shines, though, is in debugging complex iterations, mutually recursive functions, and state machines.

Patrick SteinI Broke It

· 110 days ago

Unfortunately, I managed to delete my WordPress database at a time when the most recent backup I had was from 11 years ago.

So… I will hopefully get some newer information uploaded again sometime.

But, most of my content is gone. 🙁

Scott Burson pointed out that The Wayback Machine has my content: http://web.archive.org/web/20240901104727/https://nklein.com/

And, Daniel Brooks pointed out that NewsBlur has the stuff from my RSS feeds: https://www.newsblur.com/site/101509/nklein-software

I will eventually get to moving some of that back here properly.

The post I Broke It first appeared on nklein software.


For older items, see the Planet Lisp Archives.


Last updated: 2025-01-01 02:37