The Picube team is a joint project-team of INRIA, Université Paris-Cité and CNRS, within IRIF's Proofs, Programs and Systems pole. It covers five main research themes, described below.
The Picube team wishes to take advantage of recent advances in several fields in order to reduce the gap between the vernacular language currently used by mathematicians in their daily practice and the formal language used today in proof assistants such as Coq, Agda or Lean.
The research project builds on the knowledge and expertise accumulated in the Pi.R2 team and integrates new ingredients in the direction of certified mathematics, differential and probabilistic programming and learning, with a view to tackling the above themes.
We describe the contributions in each of the five research directions of the team.
Proof theory is the branch of logic devoted to the study of the structure of proofs. An essential contributor to this field is Gentzen 54, who developed in 1935 two logical formalisms that are now central to the study of proofs: the so-called “natural deduction”, a syntax particularly well-suited to simulating the intuitive notion of reasoning, and the so-called “sequent calculus”, a syntax with deep geometric properties that is particularly well-suited for proof automation.
Proof theory gained remarkable importance in computer science when it became clear, after observations first by Curry in 1958 46, then by Howard and de Bruijn at the end of the 60's 65, 33, that proofs have the very same structure as programs: for instance, natural deduction proofs can be identified with typed programs of the ideal programming language known as the λ-calculus.
To explain the Curry-Howard correspondence, it is important to distinguish between intuitionistic and classical logic: classical logic accepts reasoning by contradiction, while intuitionistic logic, following Brouwer at the beginning of the 20th century, rejects it. Howard's observation is then that the proofs of the intuitionistic natural deduction formalism coincide exactly with the programs of the (simply typed) λ-calculus.
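To make the correspondence concrete, here is a small Coq example (given purely for illustration): the proof of the intuitionistic tautology A → (A → B) → B is literally a functional program, and the same statement can equivalently be proved with tactics, yielding the same kind of proof term.

    (* Curry-Howard in action: a proof of the intuitionistic tautology
       A -> (A -> B) -> B is nothing but a (simply typed) lambda-term
       applying its second argument to its first. *)
    Definition modus_ponens (A B : Prop) : A -> (A -> B) -> B :=
      fun (a : A) (f : A -> B) => f a.

    (* The same proof built interactively with tactics: Coq checks that
       both definitions inhabit the same type, i.e. prove the same formula. *)
    Lemma modus_ponens' (A B : Prop) : A -> (A -> B) -> B.
    Proof. intros a f. exact (f a). Qed.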
A major achievement was accomplished by Martin-Löf, who designed in 1971 a formalism, referred to as modern type theory, that was both a logical system and a (typed) programming language 68.
In 1985, Coquand and Huet 45, 43, in the Formel team of INRIA-Rocquencourt, explored an alternative approach based on Girard-Reynolds' system F, leading to the Calculus of Constructions (CoC).
The first public release of CoC dates back to 1989. The same project-team developed the programming language Caml (nowadays called OCaml and coordinated by the Gallium team), which provided the expressive and powerful concept of algebraic data types (a paradigmatic example being the type of lists). In CoC, it was possible to simulate algebraic data types, but only through a not-so-natural and not-so-convenient encoding.
In 1989, Coquand and Paulin 44 designed an extension of the Calculus of Constructions with a generalisation of algebraic types called inductive types, leading to the Calculus of Inductive Constructions (CIC) that started to serve as a new foundation for the Coq system. This new system, which got its current definitive name Coq, was released in 1991.
In practice, the Calculus of Inductive Constructions derives its strength from being both a logic powerful enough to formalise all common mathematics (as set theory is) and an expressive richly-typed functional programming language (like ML but with a richer type system, no effects and no non-terminating functions).
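As a small illustration of this double nature (a self-contained sketch using only standard Coq), one can define an inductive data type, program structurally recursive functions over it, and prove a property of these programs, all within the same formalism:

    Require Import Arith.

    (* An algebraic (inductive) data type of binary trees of naturals. *)
    Inductive tree : Type :=
    | Leaf : tree
    | Node : tree -> nat -> tree -> tree.

    (* Two structurally recursive, hence terminating, programs. *)
    Fixpoint size (t : tree) : nat :=
      match t with
      | Leaf => 0
      | Node l _ r => S (size l + size r)
      end.

    Fixpoint mirror (t : tree) : tree :=
      match t with
      | Leaf => Leaf
      | Node l x r => Node (mirror r) x (mirror l)
      end.

    (* A machine-checked proof about these programs. *)
    Lemma size_mirror : forall t, size (mirror t) = size t.
    Proof.
      induction t as [| l IHl x r IHr]; simpl.
      - reflexivity.
      - rewrite IHl, IHr. apply f_equal. apply Nat.add_comm.
    Qed.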
In this research topic, we want to design and implement
an incremental and probabilistic notion of mathematical document
amenable to statistical learning methods.
We will rely on differential, probabilistic and metric extensions
of Martin-Löf dependent type theory, the formal system
on which the Coq proof assistant is implemented.
Our first objective will be to develop a type-theoretic and compositional framework for data and probabilistic programs, taking into account independence, distance and expectation for probability distributions of data and/or programs. We will build on two recent advances in the field. First, the introduction of probabilistic programming languages for differential privacy, based on ideas of linear logic and equipped with compositional and typed metrics measuring distances between programs as well as between large-scale data 52, 34, 22. Second, the development of differential extensions of functional programming languages, which make it natural to implement optimisation algorithms based on gradient backpropagation 20, 35, and whose possible connections with differential linear logic 50 we wish to explore.
We shall develop a metric point of view on the homotopical framework of Martin-Löf Type Theory 71 so as to be able to define behavioural and observational distances between proofs in types, and between types in universes. We shall use the recent fibrational characterisation of the Kantorovich-Wasserstein distance on probability distributions 31 in order to lift these metrics to distributions within Type Theory. Our goal is to obtain in that way distances, which we shall be able to evaluate and optimise, between distributions of proofs/elements in a type, or between distributions of types in a Martin-Löf universe.
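For reference (a standard definition recalled here, not a new contribution), the Kantorovich-Wasserstein construction lifts a ground metric d on a space X to a distance between probability distributions μ and ν on X by optimising over couplings:

    W(\mu, \nu) \;=\; \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{X \times X} d(x, y) \, \mathrm{d}\gamma(x, y)

where Γ(μ, ν) denotes the set of couplings of μ and ν, that is, probability measures on X × X whose marginals are μ and ν. It is this kind of metric lifting that the fibrational characterisation mentioned above should allow us to transport into Type Theory.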
One of our objectives will then be to understand how to calibrate the distances so defined within Type Theory, using mathematical information retrieval (MIR) algorithms based on distances obtained by machine learning methods 21. We will also try to articulate concurrent separation logic with Type Theory in order to give a compositional account of the dependence and independence of the various components of the probability distributions on data, proofs and concepts. We shall work in the spirit of 28, 25, building on the formal correspondence observed by Alex Simpson 76 between separation of memory states of a machine and independence of random variables.
We consider possible interactions with François Pottier (Cambium) and Arthur Charguéraud (Camus) on these questions of concurrent separation logic and its correct integration within Martin-Löf Type Theory.
One of the most innovative and federating aspects of the project will be to conceive and implement a formal notion of mathematical document and its connections with underlying logical theories, in the line of the recent advances by Makarius Wenzel on Isabelle/PIDE 80.
Specifically, this framework will include projection-based mathematical content extraction tools allowing one to build, out of a mathematical document, the libraries relevant to a given theorem or proof. In that way, users will be able to know in which logical fragment they are working at each step of their mathematical activity, as well as the relations between the various components of a library.
We will devote special care to the notion of transformation paths, which connect mathematical documents by successive applications of “patches”, in the spirit of git, darcs or Pijul 69, 37, 23. We shall formalise and implement this notion of transformation path in such a way that paths can be composed efficiently, while endowing them with a distance compatible with the probabilistic and metric approach to Type Theory explained in the previous paragraphs, as sketched below.
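The following Coq fragment is a minimal and purely hypothetical sketch of the intended design (all names are invented here for illustration): transformation paths are represented as composable sequences of patches, composition is compatible with application, and a naive length-based distance stands in for the metric discussed above.

    Require Import List.

    (* Abstract interface: documents, atomic patches, and their action
       (hypothetical names, declared as parameters for the sketch). *)
    Parameter document : Type.
    Parameter patch : Type.
    Parameter apply_patch : patch -> document -> document.

    (* A transformation path is a sequence of patches. *)
    Definition path := list patch.

    Definition apply_path (p : path) (d : document) : document :=
      fold_left (fun d' q => apply_patch q d') p d.

    (* Composition: run p, then q. *)
    Definition compose (p q : path) : path := p ++ q.

    (* Composition of paths agrees with sequential application. *)
    Lemma apply_compose : forall p q d,
      apply_path (compose p q) d = apply_path q (apply_path p d).
    Proof. intros p q d. unfold compose, apply_path. apply fold_left_app. Qed.

    (* A toy distance: the number of elementary patches in a path. *)
    Definition dist (p : path) : nat := length p.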
This incremental point of view on mathematical proof
construction will allow us to set up learning tools based on statistical
analysis of the behaviour of users in the way they build
proofs 73 rather than on the
form of the proofs themselves.
A key principle of our mathematical document format is to serve as a
good basis for understanding sets of concepts, theorems, and proofs
and their evolution as a data set that could be used by state of the
art learning methods 53 to
help the document and proof writers. In this direction, there are many
possibilities, including improving search, type-checking and
conversion, and suggestions on proof structure and tactics.
We consider working in cooperation with Vincent Silès (Facebook Paris)
on the use of learning tools and the automated and interactive
guidance of users.
During the 1984-2012 period, about 40 people contributed to the development of Coq, among whom 7 brought the system to the place it was six years ago. First Thierry Coquand, through his foundational theoretical ideas; then Gérard Huet, who developed the first prototypes with Thierry Coquand and who headed the Coq group until 1998; then Christine Paulin, who was the main actor of the system based on the CIC and who headed the development group from 1998 to 2006. On the programming side, important steps were made by Chet Murthy, who raised Coq from the prototypical state to a reasonably scalable system; Jean-Christophe Filliâtre, who made concrete the concept of a small trusted certification kernel on which an arbitrarily large system can be built; and Bruno Barras and Hugo Herbelin who, among other extensions, reorganised Coq on a new, smoother and more uniform basis able to support a new round of extensions for the next decade.
The development started in the Formel team at Rocquencourt but, after Christine Paulin got a position in Lyon, it spread to École Normale Supérieure de Lyon. The task force there then moved as a whole to the University of Orsay when Christine Paulin got a new position there. On the Rocquencourt side, the part of Formel involved in ML moved to the Cristal team (now Gallium) and Formel was renamed Coq. Gérard Huet left the team and Christine Paulin started to head a Coq team located at both Rocquencourt and Orsay. Gilles Dowek then became the head of the team, which was renamed LogiCal. Following Gilles Dowek, who got a position at École Polytechnique, LogiCal moved to the new INRIA Saclay research center. It then split again, giving birth to ProVal. At the same time, the Marelle team (formerly Lemme, formerly Croap), which had long been a partner of the Formel team, invested more and more energy in the formalisation of mathematics in Coq, while contributing significantly to the development of Coq, in particular with regard to user interfaces.
After various further dispersals, following where the wind pushed former PhD students, the development of Coq became multi-site, now carried out mainly by employees of INRIA, the CNAM, and Paris Diderot.
In the last seven years, Hugo Herbelin and Matthieu Sozeau coordinated the development of the system, the official coordinator hat passing from Hugo to Matthieu in August 2016. The ecosystem and development model changed greatly during this period, with a move towards an entirely distributed development model integrating contributions from all over the world. While the system had always been open source, its development team was relatively small and well-knit, gathered regularly at Coq working groups, and many developments on Coq were still discussed only by the few interested experts.
The last years saw a big increase in opening the development to external scrutiny and contributions. This was supported by the “core” team, which moved development to the open GitHub platform (including, since 2017, its bug-tracker 81 and wiki), made its development process public, started using public pull requests to track the work of developers, organised yearly hackathons/coding sprints for the dissemination of expertise as well as developer and user meetings like the Coq Workshop and CoqPL, and, perhaps more anecdotally, retransmitted Coq working groups on a public YouTube channel.
This move was also supported by the hiring of Maxime Dénès in 2016 as an INRIA research engineer (in Sophia-Antipolis), and by the work of Matej Košík (a 2-year research engineer). Their work involved making the development process more predictable and streamlined, and providing a higher level of quality for the whole system. In 2018, a second engineer, Vincent Laporte, was hired. Yves Bertot, Maxime Dénès and Vincent Laporte are developing the Coq consortium, which aims to become the incarnation of the global Coq community and to offer support to our users.
Today, the development of Coq involves participants from the INRIA project-teams Picube (Paris), Stamp (Sophia-Antipolis), Toccata (Saclay), Gallinette (Nantes), Gallium (Paris), and Camus (Strasbourg), the LIX at École Polytechnique and the CRI Mines-ParisTech. Apart from those, active collaborators include members from MPI Saarbrücken (D. Dreyer's group), KU Leuven (B. Jacobs' group), MIT CSAIL (A. Chlipala's group, which hosted an INRIA/MIT engineer, and N. Zeldovich's group), the Institute for Advanced Study in Princeton (from S. Awodey, T. Coquand and V. Voevodsky's Univalent Foundations program) and Apple (M. Soegtrop). The latest released versions typically have a few dozen contributors (e.g. 40 for 8.8, 54 for 8.9, ...).
On top of the developer community, there is a much wider user community, as Coq is being used in many different fields. The Software Foundations series, authored by academics from the USA, along with the reference Coq'Art book by Bertot and Castéran 30, the more advanced Certified Programming with Dependent Types book by Chlipala 40 and the recent book on the Mathematical Components library by Mahboubi, Tassi et al. provide resources for gradually learning the tool.
In the programming languages community, Coq is being taught in two summer schools, OPLSS and the DeepSpec summer school. For more mathematically inclined users, there are regular Winter Schools in Nice and in 2017 there was a school on the use of the Univalent Foundations library in Birmingham.
Since 2016, Coq also provides a central repository for Coq packages,
the Coq opam archive, relying on the OCaml opam package manager and including
around 250 packages contributed by users. It would be too long
to make a detailed list of the uses of Coq in the wild. We only highlight
four research projects relying heavily on Coq. The Mathematical Components library has its origins in the formal
proof of the Four Colour Theorem and has grown to cover many areas of mathematics in Coq
using the now integrated (since Coq 8.7) SSReflect proof language.
The DeepSpec project is an NSF Expedition project led by
A. Appel whose aim is full-stack verification
of a software system, from machine-checked proofs of circuits to an operating system to a
web-browser, entirely written in Coq and integrating many large projects into one. The ERC CoqHoTT project led by N. Tabareau
aims to use logical tools to extend the expressive power of Coq, dealing with the univalence axiom and
effects. The ERC RustBelt project led by D. Dreyer concerns the development of rigorous formal foundations for the Rust programming language, using the Iris Higher-Order Concurrent Separation Logic Framework in Coq.
We next briefly describe the main components of Coq.
The architecture adopts the so-called de Bruijn principle: the well-delimited kernel
of Coq ensures the correctness
of the proofs validated by the system. The kernel is rather stable
with modifications tied to the evolution of the underlying Calculus of
Inductive Constructions formalism. The kernel includes an
interpreter of the programs expressible in the CIC and this
interpreter exists in two flavours: a customisable lazy evaluation
machine written in OCaml and a call-by-value bytecode interpreter
written in C dedicated to efficient computations. The kernel also
provides a module system.
The concrete user language of Coq, called Gallina, is a
high-level language built on top of the CIC. It includes a type
inference algorithm, definitions by complex pattern-matching, implicit
arguments, mathematical notations and various other high-level
language features. This high-level language serves both for the
development of programs and for the formalisation of mathematical
theories. Coq also provides a large set of commands. Gallina and
the commands together form the Vernacular language of Coq.
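The following Gallina snippet (given for illustration) exercises several of these features at once: implicit type arguments inferred at call sites, definition by pattern matching on two lists simultaneously, a user-defined notation, and a statement checked by computation.

    Require Import List.
    Import ListNotations.

    (* Implicit arguments {A B}: the element types are inferred at call sites. *)
    Fixpoint zip {A B : Type} (l : list A) (l' : list B) : list (A * B) :=
      match l, l' with
      | x :: xs, y :: ys => (x, y) :: zip xs ys
      | _, _ => []
      end.

    (* A user-defined infix notation for the function just defined. *)
    Infix "⋈" := zip (at level 60).

    Example zip_example :
      [1; 2; 3] ⋈ [true; false; true] = [(1, true); (2, false); (3, true)].
    Proof. reflexivity. Qed.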
The standard library is written in the vernacular language of Coq.
There are libraries for various arithmetical structures and various implementations of numbers (Peano natural numbers, binary representations of natural and integer numbers, rational and real numbers), as well as libraries on lists, finite sets and finite maps, among others.
The tactics are the methods available to conduct proofs. This includes the basic inference rules of the CIC, various advanced higher level inference rules and all the automation tactics. Regarding automation, there are tactics for solving systems of equations, for simplifying ring or field expressions, for arbitrary proof search, for semi-decidability of first-order logic and so on. There is also a powerful and popular untyped scripting language for combining tactics into more complex tactics.
Note that all tactics of Coq produce proof certificates that are checked by the kernel of Coq. As a consequence, possible bugs in proof methods do not undermine confidence in the correctness of the Coq checker. Note also that, the CIC being a programming language, tactics can have their core written (and certified) in Coq's own language if needed.
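For illustration, here is the same elementary statement proved twice: once fully automatically by the linear-arithmetic decision procedure, and once more manually with a standard-library lemma; in both cases the tactics produce an ordinary proof term that the kernel re-checks.

    Require Import Arith Lia.

    (* Fully automatic proof, via the lia decision procedure. *)
    Lemma sum_le_double : forall n m : nat, n <= m -> n + n <= m + m.
    Proof. intros n m H. lia. Qed.

    (* A more manual proof using a lemma from the standard library. *)
    Lemma sum_le_double' : forall n m : nat, n <= m -> n + n <= m + m.
    Proof. intros n m H. apply Nat.add_le_mono; assumption. Qed.

    (* The underlying certificate is an ordinary CIC term, checked by the kernel. *)
    Print sum_le_double'.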
Extraction is a component of Coq that maps programs (or even computational proofs) of the CIC to functional programs (in OCaml, Scheme or Haskell). In particular, a program certified by Coq can be extracted to a program in a full-fledged programming language, thereby benefiting from the efficient compilation, linking tools, profiling tools, etc. of the target language.
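A minimal example of the extraction mechanism (the commands below are standard; Coq prints the resulting OCaml code):

    Require Import List Extraction.
    Import ListNotations.

    (* A function on lists of naturals, written and checked in Coq. *)
    Fixpoint sum (l : list nat) : nat :=
      match l with
      | [] => 0
      | x :: xs => x + sum xs
      end.

    Extraction Language OCaml.
    (* Prints the OCaml code for sum together with the datatypes it uses. *)
    Recursive Extraction sum.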
Coq is a feature-rich system and requires extensive training in order to be used proficiently; the current documentation includes the reference manual, the reference for the standard library, as well as tutorials and related tooling (Sphinx plugins, coqdoc). The jsCoq tool allows writing interactive web pages where Coq programs can be embedded and executed.
Coq is used in large-scale proof developments and provides users with miscellaneous tooling to help with them: the coq_makefile and Dune build systems help with incremental proof-checking; the Coq opam repository contains a package index for most Coq developments; the CoqIDE, Proof General, jsCoq, and VSCoq user interfaces are environments for proof writing; and Coq's API allows users to extend the system in many important ways. Among the current extensions are QuickChick, a tool for property-based testing; SMTCoq and CoqHammer, integrating Coq with automated solvers; ParamCoq, providing automatic derivation of parametricity principles; MetaCoq for metaprogramming; Equations for dependently-typed programming; SerAPI for data-centric applications; etc. This also includes the main open Coq repository living at GitHub.
In this research topic, we aim at contributing to the formalisation of mathematics by developing direct interactions with the mathematical community. The current period is more than ever propitious for involving colleagues from mathematics in a formalisation process. Indeed, more and more mathematicians express a strong interest in formalisation and growing expectations about the benefits it could have for their research. We view this as a direct consequence of the maturity gradually acquired by proof assistants, together with the impressive work of conviction carried out by the late Vladimir Voevodsky around HoTT, and with striking results such as the formalisation by Georges Gonthier of the four-colour theorem 58 and the Feit-Thompson theorem 57, 56.
However, actually formalising a large body of contemporary mathematics remains truly a research issue, as it requires improving the design of proof assistants. Most notably, there is a need for genuine linguistic work in order to fill the gap between vernacular proof languages and formal proofs, which is a requirement for fostering a dynamic and sustainable community ranging from computer science to pure mathematics. In addition to the skills and energies gathered in the team itself, we benefit from our unique position in the Sophie Germain building of the Université de Paris, within IRIF and in the immediate neighbourhood of IMJ-PRG, a laboratory in pure mathematics. The activity of the team in this topic will focus on lowering the cost of starting and pursuing formalisation by mathematicians.
We will work on this proactively, in close collaboration with IRIF and IMJ-PRG, gathering motivated mathematicians and computer scientists willing to formalise the mathematics they teach at the University, or on which they conduct their research. Indeed, we aim at formalising classical mathematics curricula, pieces of contemporary mathematics, as well as mathematical tools implemented in theoretical computer science. We will draw inspiration from the usually heuristic practice of mathematics, aiming to make the writing and reading of Coq documents altogether more intuitive, more straightforward and more flexible. This software development and mechanisation work will be combined with a study of the linguistic structure of mathematical documents, in collaboration with Philippe de Groote (Sémagramme): we think that there is a need to better understand the linguistic structures at work in the daily life of a mathematician and their formal nature.
More specifically, among the subjects of formalisation of mathematics we plan to carry out in the team, three in particular should be mentioned:
This research topic will therefore rely both on a formalisation activity internal to the team and on a long-term effort of animation and construction of a scientific community involved in formalisation. We plan to contribute to the Coq training of our maths and CS colleagues (from PhD students to post-docs and those holding a permanent position), and not only graduate students as is more commonly the case, in particular through the organisation of regular working-group sessions dedicated to helping colleagues in their formalisation tasks and by considering the opportunity to set up thematic schools in collaboration with the other teams of the institute contributing to Coq. A medium-term objective could be for some pure mathematics modules to be based directly on formalised content and for some mathematics tutorials to take the form of Coq exercise sessions, building on existing work with Coq 29, 70, 41. This will require developing close collaborations with the communities of other proof assistants, especially those designed to be well-adapted to the formalisation of mathematics (Lean, Isabelle and Agda in particular). Understanding the linguistic aspects of mathematical proofs will be a key to the success of our project.
Formalising inevitably leads to a shift of perspective on what a mathematical proof is. From a mathematical standpoint, it is conventional, even if developments are emerging, to be satisfied with the mere possibility of a formalisation (which is never even sketched out), typically in the set-theoretical formalism, and to focus on the construction of a natural-language discourse that is precise enough to convince the reader. From the proof-assistant standpoint, the seminal vision, which initially aimed only at making this virtuality effective, tends to evolve. New lines of communication and convergence are emerging: a proof is no longer strictly made up of logical inference rules, such as introducing or eliminating a connective, but of more abstract entities such as the use of a lemma, the replacement of equals by equals, reasoning by induction, simplifying a polynomial expression, decomposing a formula into atoms, etc. With the arrival of a new generation of interactive proof engines 77, 27, a proof is no longer seen strictly as a tree derivation, but as a graph whose nodes can be refined with a reasonable degree of freedom; moreover, the order in which these transformations are applied interactively appears in the “incremental” format of the machine maths document. This is in line with the historical evolution of proof methods towards escaping from the “low level” of logic and getting closer to the more abstract conceptual levels used by humans. Our investigations will follow this line as we plan to analyse the linguistic structure of mathematical texts, with the ambition to develop the levels of abstraction that would eventually allow a direct formal understanding of a mathematical text at the level of the mathematical discourse in which it is expressed. Examples of the sorts of linguistic structures we plan to analyse are “reasoning by analogy”, reasoning modulo isomorphism, or even modulo inclusion, and of course reasoning modulo general equational theories.
On this natural (mathematical) language processing part, we plan a collaboration with Philippe de Groote of the Sémagramme team (Inria Nancy & Loria) to identify the structures necessary for a flexible formalisation of vernacular mathematical proofs, in order to implement within Coq this linguistic structure, procedural and discursive in nature rather than tree-like as described above. One of the objectives will be to be able to formalise in a more transparent and direct way mathematical texts such as Bourbaki's Éléments de mathématique.
Regarding the design and development of a general mathematical library, it is undoubtedly too early
to describe the directions we will take, but we have some ongoing discussions with Assia Mahboubi (Gallinette), Yves Bertot and Cyril Cohen (Scalp) around these essential questions.
We also wish to develop the team's scientific interactions with Michael Soegtrop (Apple) and we closely follow his projects on proving algorithms of symbolic computing, around constructive reals and the formal integrator Rubi.
We are also in contact with mathematicians from IMJ-PRG,
and in particular with Antoine Chambert-Loir, who expressed a keen interest in formalisation.
We want to develop fruitful discussions with the communities of other proof assistants, such as Agda, Isabelle or Lean.
Like ordinary categories, higher-dimensional categorical structures originate in algebraic topology. Indeed, the fundamental ∞-groupoid of a topological space, which records its points, the paths between them, the homotopies between paths, and so on in higher dimensions, is a prototypical example of such a structure.
In the last decades, the importance of higher-dimensional categories has grown fast, mainly with the new trend of categorification that currently touches algebra and the surrounding fields of mathematics. Categorification is an informal process that consists in the study of higher-dimensional versions of known algebraic objects (such as higher Lie algebras in mathematical physics 24) and/or of “weakened” versions of those objects, where equations hold only up to suitable equivalences (such as weak actions of monoids and groups in representation theory 49).
The categorification process has also reached logic, with the introduction of homotopy type theory. After a preliminary result that had identified categorical structures in type theory 64, it has been observed recently that the so-called “identity types” are naturally equipped with the structure of a (weak) ∞-groupoid.
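A first layer of this structure can already be exhibited in plain Coq (an illustrative sketch): identity proofs can be inverted and composed, and composition is associative up to a higher identity proof.

    (* Inversion and composition of identity proofs. *)
    Definition inv {A : Type} {x y : A} (p : x = y) : y = x :=
      match p with eq_refl => eq_refl end.

    Definition comp {A : Type} {x y z : A} (p : x = y) (q : y = z) : x = z :=
      match q with eq_refl => p end.

    (* Associativity holds "up to an identity proof", the beginning of the
       higher-dimensional (groupoid) structure. *)
    Lemma comp_assoc {A : Type} {x y z w : A}
      (p : x = y) (q : y = z) (r : z = w) :
      comp (comp p q) r = comp p (comp q r).
    Proof. destruct r. reflexivity. Qed.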
Higher-dimensional categories are algebraic structures that contain, in essence, computational aspects. This was recognised by Street 78, and independently by Burroni 36, when they introduced the concepts of computad and polygraph as combinatorial descriptions of higher categories. These are directed presentations of higher-dimensional categories, generalising word and term rewriting systems.
In recent years, the algebraic structure of polygraph has led to a new theory of rewriting, called higher-dimensional rewriting, as a unifying point of view on the usual rewriting paradigms, namely abstract, word and term rewriting 66, 67, 59, 60, and beyond: Petri nets 62 and formal proofs of classical and linear logic have been expressed in this framework 61. Higher-dimensional rewriting has developed its own methods to analyse computational properties of polygraphs, using in particular algebraic tools such as derivations to prove termination, which in turn led to new tools for complexity analysis 32.
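As a standard toy example (recalled here only for illustration), the free commutative monoid on two generators is presented by a 2-polygraph with one 0-cell, two generating 1-cells and a single generating 2-cell, read as a word rewriting rule:

    \Sigma_0 = \{ \ast \}, \qquad \Sigma_1 = \{ a, b \}, \qquad \Sigma_2 = \{ \gamma : ba \Rightarrow ab \}

The 2-cell γ orients the commutation equation ba = ab; the induced rewriting system is terminating and confluent, so every word has a unique normal form of the shape a^m b^n.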
The application domains of the Picube team researchers range from the formalization of mathematical theories and computational systems using the Coq proof assistant to the design of programming languages with rich type systems and effects (stateful, concurrent, probabilistic) and the design and analysis of certified program transformations.
The environmental impact of the team is mainly of two sorts:
Members of the Picube team are committed to decreasing the environmental impact of our research. In the IRIF lab environment, a working group investigates the footprint of our scientific community and its practices (notably the numerous international conferences) and the potential medium- and long-term evolutions that could be made. Several members of the team are active contributors to, or interested followers of, this working group. As an achievement of this working group, recommendations have been made at the IRIF level to encourage every lab member to travel by train rather than by plane when the travel duration is not significantly longer by train.
In continuation of the work of the Pi.R2 team, the Picube research team wishes to contribute to the education of new generations of students taking the lead in proof assistant technology and in the formalisation of mathematics and computer science. We benefit from the fact that Picube is a joint project-team with the IRIF lab of Université de Paris, within the Fondation des Sciences Mathématiques de Paris (FSMP), and with active participation of its members in the Master Logique Mathématique et Fondements de l'Informatique (LMFI) and the Master Parisien de Recherche en Informatique (MPRI), both taught in the Bâtiment Sophie Germain where the IRIF lab is located. We believe that the development of a formal corpus of mathematics is a foundational challenge potentially as important as the Bourbaki enterprise initiated in the late 1930s.
Web site: http://www.coq.inria.fr.
Self-assessment:
Free Description: Coq is an interactive proof assistant based on the CIC (Calculus of (Co-)Inductive Constructions), extended with universe polymorphism. This type theory features inductive and co-inductive families, an impredicative sort and a hierarchy of predicative universes, making it a very expressive logic. The calculus makes it possible to formalise both general mathematics and computer programs, ranging from theories of finite structures to abstract algebra and categories, to programming language metatheory and compiler verification. Coq is organised as a (relatively small) kernel including efficient conversion tests, on which is built a set of higher-level layers: a powerful proof engine and unification algorithm, various tactics/decision procedures, a transactional document model and, at the very top, an integrated development environment (IDE).
Coq provides both a dependently-typed functional programming language and a logical formalism, which, altogether, support the formalisation of mathematical theories and the specification and certification of properties of programs. Coq also provides a large and extensible set of automatic or semi-automatic proof methods. Coq's programs are extractable to OCaml, Haskell, Scheme, ...
Contact: Matthieu Sozeau
Participants: Yves Bertot, Frederic Besson, Tej Chajed, Cyril Cohen, Pierre Corbineau, Pierre Courtieu, Maxime Denes, Jim Fehrle, Julien Forest, Emilio Jesús Gallego Arias, Gaetan Gilbert, Georges Gonthier, Benjamin Grégoire, Jason Gross, Hugo Herbelin, Vincent Laporte, Olivier Laurent, Assia Mahboubi, Kenji Maillard, Érik Martin-Dorel, Guillaume Melquiond, Pierre-Marie Pédrot, Clément Pit-Claudel, Kazuhiko Sakaguchi, Vincent Semeria, Michael Soegtrop, Arnaud Spiwack, Matthieu Sozeau, Enrico Tassi, Laurent Théry, Anton Trunov, Li-Yao Xia, Theo Zimmermann.
Web site: https://coq.vercel.app.
Self-assessment:
Web site: https://github.com/ejgallego/coq-serapi.
Self-assessment:
Free Description: SerAPI is a library for machine-to-machine interaction with the Coq proof assistant, with particular emphasis on applications in IDEs, code analysis tools, and machine learning. SerAPI provides automatic serialization of Coq’s internal OCaml datatypes from/to JSON or S-expressions (sexps).
Contact: Emilio Jesus Gallego Arias
Participants: Emilio Jesus Gallego Arias, Thierry Martinez
Free Description: This software is a bot to help and automate the development of the Coq proof assistant on the GitHub platform. It is written in OCaml and provides numerous features: synchronisation between GitHub and GitLab to allow the use of GitLab for automatic testing (continuous integration), management of milestones on issues, management of the backporting process, merging of pull requests upon request by maintainers, etc.
Most of the features are used only for the development of Coq, but the synchronization with GitLab feature is also used in dozens of independent projects.
Contact: Theo Zimmermann
Web site: https://github.com/ejgallego/coq-lsp.
Self-assessment:
Free Description: coq-lsp is a Language Server and Visual Studio Code extension for the Coq Proof Assistant. Experimental support for Vim and Neovim is also available in their own projects.
Contact: Emilio Jesus Gallego Arias
Participants: Ali Caglayan, Emilio J. Gallego Arias, Shachar Itzhaky
Web site: https://github.com/ejgallego/pycoq.
Self-assessment:
Free Description: PyCoq is a set of bindings and libraries allowing interaction with the Coq interactive proof assistant from inside Python 3.
Contact: Emilio Jesus Gallego Arias
Participants: Emilio Jesus Gallego Arias, Thierry Martinez
Saurin has completed the line of work aiming at building a full-fledged circular proof theory, with two new results:
The cut-elimination result for
Together with Bauer, Saurin finally completed the above result, providing the first syntactic cut-elimination result for a (circular) proof system containing the full modal μ-calculus.
Chardonnet, Saurin and Valiron developed a term calculus for a class of type isomorphisms for circular
Following up on their previous work with Das on the decision problems for various fragments of
In a collaboration with Aurore Alcolei and Luc Pellissier, published at MFPS 2023 5, Alexis Saurin internalized the notion of jumps of linear logic proof-nets (which can be used as an alternative to boxes) in a slight extension of MLL (multiplicative linear logic). Jumps, which have been extensively studied by Faggian and di Giamberardino (building on prior work by Curien and Faggian on L-nets), can express intermediate degrees of sequentialization between a sequent calculus proof and a fully desequentialized proof-net. The logical strength of jumps is analyzed by internalizing them in an extension of MLL where axioms on a specific formula introduce constraints on the possible sequentializations. The jumping formula needs to be treated non-linearly, which is done either axiomatically, or by embedding it in a very controlled fragment of multiplicative-exponential linear logic, uncovering the exponential logic of sequentialization.
Melliès and Zeilberger develop in 10 a categorical framework
based on non-symmetric operads (= multicategories) to describe context-free grammars
and establish the Chomsky-Schützenberger representation theorem for context-free languages.
In this approach, a context-free grammar is defined as a functor from an operad
freely generated by a species of production rules, to an operad of spliced words
introduced for the first time in this paper.
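To give the flavour of the construction (a simplified illustration with notation chosen here, not the paper's full definitions): a spliced word is a word interleaved with gaps, written □, and operadic composition plugs spliced words into the gaps. A context-free grammar is then a functor sending each production rule, seen as a generating operation, to a spliced word; for instance the grammar generating the language { aⁿbⁿ } is described by

    S \to a\, S\, b \;\longmapsto\; a \,\square\, b,
    \qquad
    S \to \varepsilon \;\longmapsto\; \varepsilon

and the language generated is recovered as the set of gap-free words obtained by composing these operations.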
A notion of automaton on a category
Differential Linear Logic (DiLL) was introduced by Ehrhard and Regnier in the mid 2000's. This extension of linear logic provides a new interpretation of the exponentials, turning them into a modality related to communication rather than solely to resource replicability and erasure. DiLL also provided a new understanding of resource calculi and of intersection types, making it possible to apply to programs and proofs an approximation operation which is a syntactic version of the standard Taylor expansion of functions. Until 2021, it seemed that DiLL was doomed to feature a strong form of nondeterminism due to the interaction between differentiation and the structural rule of contraction (Leibniz rule), making it incompatible with stable or sequential denotational interpretations of programs. The article 3 presents Coherent Differentiation, a new denotational setting discovered in 2021, featuring differentiation operations while being compatible with stable and sequential interpretations. A syntactic account of this new approach to differentiation has also been proposed by the author in a subsequent paper.
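For context, the analytic formula that the syntactic expansion mirrors: a sufficiently regular function is recovered from its iterated derivatives at the origin,

    f(x) \;=\; \sum_{n \ge 0} \frac{f^{(n)}(0)}{n!} \, x^{n}

and, in the syntactic counterpart, a program applied to its argument is likewise approximated by a sum of n-linear applications to n copies of approximants of that argument.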
The question of describing the causal structure of symbolic rewriting systems
such as the lambda-calculus is at the heart of an old and vibrant connection
established in the 1980s between proof theory and concurrency theory.
The idea is that the process of reducing beta-redexes generates
causal structures similar to what one finds in Petri nets
and process calculi such as CCS or the π-calculus.
In joint work with van Gool, Melliès and Moreau introduced in 4 a notion of profinite λ-term.
The goal of the article 6 is to better understand Coq users and the Coq community, and to make informed decisions about the research and development challenges around a system such as Coq. A key point of the paper is its multidisciplinary research team, bringing together researchers from several areas of computer and social sciences. Inspired by previous surveys in the Coq, OCaml, and open-source worlds, a survey of 109 questions was designed and run for the Coq community, obtaining 466 answers, to this date the largest survey done on users of interactive theorem provers. The data were analysed using rigorous regression methods common in the social sciences.
The team has not undertaken any sustained transfer activities; however, we have informal but regular scientific contacts with industrial users in several companies, such as Apple (Michael Soegtrop, who works on the safety of cyber-physical systems, is a regular Coq contributor), Tweag I/O (Arnaud Spiwack is a close collaborator), Nomadic Labs and Tezos (in particular with Yann Regis-Gianas), and OpenAI (with discussions and visits by Stanislas Polu, now working at Dust, a startup company which he co-founded).
Thomas Ehrhard chairs the French-Italian GDRI on Linear Logic, which is finishing this year. Paul-André Melliès and Alexis Saurin are also members of the GDRI.
Thomas Ehrhard and Paul-André Melliès are international members of the EPSRC project "Resources in Computation" chaired by Samson Abramsky (UCL) and Anuj Dawar (Cambridge).
Pierre-Louis Curien, Thomas Ehrhard, Emilio J. Gallego Arias, Hugo Herbelin, Paul-André Melliès and Alexis Saurin are members of the GDR Informatique Mathématique, in the LHC (Logique, Homotopie, Catégories) and Scalp (Structures formelles pour le calcul et les preuves) working groups. Alexis Saurin is the coordinator of the Scalp working group.
Pierre-Louis Curien and Paul-André Melliès
are members of the GDR Homotopie, federating French researchers working on classical topics of algebraic topology and homological algebra, such as homotopy theory, group homology, K-theory, deformation theory, and on more recent interactions of topology with other themes, such as higher categories and theoretical computer science.
Kostia Chardonnet, Abhishek De, Thomas Ehrhard, Farzad Jafarrahmani, Hugo Herbelin, Paul-André Melliès, Daniela Petrisan and Alexis Saurin (coordinator) are members of the four-year RECIPROG project. RECIPROG is an ANR collaborative project (a.k.a. PRC) which started in the fall of the 2021-2022 academic year and runs until the end of 2025. ReCiProg aims at extending the proofs-as-programs correspondence to recursive programs and circular proofs, for logic and type systems using induction and coinduction. The project aims at contributing both to the necessary theoretical foundations of circular proofs and to software developments enhancing the use of coinductive types and coinductive reasoning in the Coq proof assistant: coinductive types as available in the current state of the art present serious defects that the project will aim at solving.
The project is coordinated by Alexis Saurin and has four sites: IRIF in Paris where the team Picube is located, LIP in Lyon, LIS in Marseille and LS2N in Nantes.
Hugo Herbelin participates in the Inria Challenge LiberAbaci. LiberAbaci
is a collaborative project aimed at improving the accessibility of the Coq
interactive proof system for an audience of mathematics students in the early
academic years. The lead is Yves Bertot and the involved teams are: Cambium (Paris),
Camus (Strasbourg), Gallinette (Nantes), PiCube (Paris), Spades
(Grenoble), Stamp (Sophia Antipolis), Toccata (Saclay), LIPN (Laboratoire d'Informatique de Paris Nord).
In collaboration with Riccardo Brasca (coordinator) and Antoine Chambert-Loir, two mathematicians specialising in number theory at the Institut de Mathématiques de Jussieu Paris Rive Gauche (IMJ-PRG), Hugo Herbelin (coordinator), Pierre Letouzey, Paul-André Melliès and Alexis Saurin initiated and launched an Emergence Recherche project of the Université Paris Cité, APRAPRAM. The aim of the project is to contribute to a formalisation of Fermat's last theorem in the special case of regular primes, targeting a cross-fertilisation between the Lean and the Coq/Rocq communities.
Alexis Saurin is a member of the organizing committee of the annual Scalp meeting, to be held at CIRM in February 2023.
Emilio Gallego, Hugo Herbelin, Paul-André Melliès and Alexis Saurin, together with Chantal Keller and Marie Kerjean, are members of the organizing committee of the thematic day on proof assistants to be held at JNIM 2024 (Journées nationales du GDR-IM) at the end of March 2024.
Pierre-Louis Curien is the editor-in-chief of the journal Mathematical Structures in Computer Science, and Thomas Ehrhard is an editor of this journal. Paul-André Melliès is a member of the editorial board of the journal Theoretical Computer Science.
Paul-André Melliès gave an invited lecture in Nice as part of the "Journées Homotopiques" on the occasion of Clemens Berger's 60th birthday.
Paul-André Melliès gave an invited lecture at the "Resources and Co-Resources" workshop at the University of Cambridge from 17 to 19 July 2023.
Alexis Saurin gave an invited tutorial at TLLA 2023, the 7th International Workshop on Trends in Linear Logic and Applications, Rome, 1-2 July 2023.
Alexis Saurin is co-chair of the Scalp working group in GDR-IM (GT Scalp).
Pierre-Louis Curien taught a course on homotopical algebra and higher categories in LMFI (Logique mathématiques et fondements de l'informatique) second-year Master, Université Paris Cité.
Pierre Letouzey taught a course on Coq in LMFI (Logique mathématiques et fondements de l'informatique) second-year Master, Université Paris Cité.
Alexis Saurin taught a lecture on Second-order quantification and fixed-points in logic in LMFI (Logique mathématiques et fondements de l'informatique) second-year master, Université Paris Cité.
Hugo Herbelin and Paul-André Melliès taught a course on homotopy type theory in LMFI (Logique mathématiques et fondements de l'informatique) second-year Master, Université Paris Cité.
Together with Michele Pagani (IRIF), Paul-André Melliès and Thomas Ehrhard taught a course on denotational semantics and linear logic at MPRI (Master Parisien de Recherche en Informatique) second-year Master, Université Paris Cité.
Paul-André Melliès taught a course on lambda-calculus and categories at MPRI (Master Parisien de Recherche en Informatique) first-year Master, at ENS Paris (Ecole Normale Supérieure).