The general objective of the Toccata project is to promote formal
specification and computer-assisted proof in the development of
software that requires high assurance in terms of safety and
correctness with respect to its intended behavior. Such
safety-critical software appears in many application domains like
transportation (e.g. aviation, aerospace, railway, automotive),
communication (e.g. internet, smartphones), health devices, data
management in the cloud (confidentiality issues), etc. The number of tasks performed by software
is increasing quickly, together with the number of lines of code
involved. Given the high level of assurance required for the functional
behavior of such applications, providing automated (in the sense of
computer-assisted) methods and techniques that guarantee safety
has become a major challenge. To this day, the most widely
used approach to checking software safety is to run heavy test
campaigns, which account for a large part of the cost of software
development. Yet such campaigns cannot ensure that all bugs are caught, and
the remaining bugs may have catastrophic consequences.
Generally speaking, software verification approaches pursue three goals: (1) verification should be sound, in the sense that no bugs should be missed; (2) verification should not produce false alarms, or as few as possible; (3) it should be as automatic as possible. Reaching all three goals at the same time is a challenge. A large class of approaches emphasizes goals (2) and (3): testing, run-time verification, symbolic execution, model checking, etc. Static analysis, such as abstract interpretation, emphasizes goals (1) and (3). Deductive verification emphasizes (1) and (2). The Toccata project is mainly interested in exploring the deductive verification approach, although we occasionally combine it with the other techniques.
In the past decade, significant progress has been made in the
domain of deductive program verification, as evidenced by
success stories applying these techniques to industrial-scale
software. For example, the Atelier B system was used to develop
part of the embedded software of the Paris metro line
14 [41] and other railway-related systems; a
formally proved C compiler was developed using the Coq proof
assistant 61; the L4.verified project developed a
formally verified micro-kernel with high security guarantees, using
analysis tools on top of the Isabelle/HOL proof
assistant 59. A bug in the JDK implementation of
TimSort was discovered using the KeY
environment 58 and a fixed version was
proved sound. Another sign of recent progress is the emergence of
deductive verification competitions (e.g.
VerifyThis 42). Finally, a recent trend in
industrial practice for the development of critical software is to require
more and more guarantees of safety: e.g. the DO-178C standard for
developing avionics software adds to the former DO-178B the use of
formal models and formal methods. It also emphasizes the need for
certification of the analysis tools involved in the process.
There are two main families of approaches for deductive
verification. Methods in the first family build on top of mathematical
proof assistants (e.g. Coq, Isabelle) in which both the model and the
program are encoded; the proof that the program meets its
specification is typically conducted in an interactive way using the
underlying proof construction engine. Methods from the second family
rely on standalone tools that take as input a program in
a particular programming language (e.g. C, Java) specified with a
dedicated annotation language (e.g. ACSL 40,
JML 49) and automatically produce a set of
mathematical formulas (the verification conditions), which are
typically proved using automatic provers (e.g. Z3 63,
Alt-Ergo 50, CVC5 39).
The first family of approaches usually offers a smaller Trusted Code Base
(TCB) than the second, but demands more work to perform the proofs (because
of their interactive nature), which makes them harder for industry to
adopt. Moreover, they generally do not support the direct analysis of a program
written in a mainstream programming language like Java or C. The second family of
approaches has benefited in the past years from the tremendous progress made in
SAT and SMT solving techniques, allowing a larger impact on industrial practices,
but suffers from a lower level of trust: errors may appear in any part of the
proof chain (the model of the input programming language, the VC generator, the
back-end automatic prover), compromising the guarantees offered.
Moreover, while these approaches are applied to mainstream languages,
they usually support only a subset of their features.
One of our original strengths is the ability to conduct proofs using automatic provers and proof assistants at the same time, depending on the difficulty of the program, and specifically the difficulty of each particular verification condition. We thus believe that we are in a good position to propose a bridge between the two families of approaches to deductive verification presented above. Establishing this bridge is one of the goals of the Toccata project: we want to provide methods and tools for deductive program verification that offer both a high degree of proof automation and a high guarantee of validity.
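As a small illustration of the workflow we target, consider the following WhyML sketch (a hypothetical example, not taken from our gallery). Why3 generates one verification condition per postcondition; easy conditions like these are discharged instantly by automatic provers such as Alt-Ergo, Z3 or CVC5, while the harder ones arising in real developments can be delegated to a proof assistant such as Coq.

    module Max
      use int.Int

      (* returns the larger of its two arguments; each of the two
         postconditions below yields one verification condition *)
      let max (x y: int) : int
        ensures { result >= x /\ result >= y }
        ensures { result = x \/ result = y }
      = if x >= y then x else y
    end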
In industrial applications, numerical computations are very common (e.g. control software in transportation). Typically they involve floating-point numbers. Some members of Toccata have internationally recognized expertise on deductive program verification involving floating-point computations. Our past work includes a new approach for proving behavioral properties of numerical C programs using Frama-C/Jessie 35, various examples of applications of that approach 47, the use of the Gappa solver for proving numerical algorithms 54, and an approach to take architectures and compilers into account when dealing with floating-point programs 48, 65. We also contributed to the Handbook of Floating-Point Arithmetic 64. A representative case study is the analysis and the proof of both the method error and the rounding error of a numerical analysis program solving the one-dimensional acoustic wave equation [45, 44]. Our experience led us to the conclusion that the verification of numerical programs can benefit a lot from combining automatic and interactive theorem proving 46, 47, 56, 57. The verification of numerical programs is another main axis of Toccata.
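As background for this axis (a textbook fact rather than a result of ours): in round-to-nearest binary64 arithmetic, and in the absence of underflow and overflow, every basic operation satisfies the standard model

    \circ(x \mathbin{\mathrm{op}} y) = (x \mathbin{\mathrm{op}} y)\,(1 + \varepsilon), \qquad |\varepsilon| \le 2^{-53},

and the difficulty of the proofs cited above lies in bounding how such rounding terms accumulate, on top of the method error of the numerical scheme itself.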
Let us conclude with more general considerations: we want to continue our general-audience outreach (see Section 11.3) and our industrial transfer through sustained long-term collaboration with industrial partners (Section 4). Our scientific programme, detailed below, is structured into the following four axes.
This axis covers the foundational studies we pursue regarding deductive verification. A non-exhaustive list of subjects we want to address is as follows.
A significant part of the work achieved in this axis is related to the
Why3 toolbox and its ecosystem, shown in Figure 1.
The red background boxes represent tools that we develop ourselves,
whereas the blue background ones are developed by others. SPARK2014 is
developed by AdaCore. Frama-C and Wp are developed by CEA-List and
directly produce logical formulas to be passed to
provers. TIS-Analyzer is developed by TrustInSoft, and its deductive
verification plug-in J-cube builds on Why3.
This axis specifically concerns techniques for reasoning about programs where memory aliasing is the central issue. It covers methods based on type-based alias analysis and related memory models, on specific program logics such as separation logics, and on extended model-checking. It concerns applications to the analysis of C or C++ code, of Ada code involving pointers, and also of concurrent programs in general. The main topics are:
This axis, which bridges the domains of computer arithmetic and formal verification, is a major original feature of Toccata. The main topics are as follows.
Boldo and Melquiond are authors of a reference book 4 on the formal verification of numerical programs.
The general goal of this axis, newly proposed in 2019, is to encourage the spread of deductive verification through actions showing how our methods and tools can be used on programs that we develop ourselves. Since this axis is dedicated to applications in a general manner, positioning barely makes sense: the vast majority of research groups in computer science worldwide would claim to conduct case studies and large-scale applications.
Representative of these significant case studies are the automated analysis of Debian package installation 1 and the automated analysis of Ladder programs 2.
The application domains we target involve safety-critical software, that is, software for which a high-level guarantee of sound functional behavior is wanted. Currently, our industrial collaborations and impact mainly belong to the domain of transportation: aerospace, aviation, railway, automotive.
Generally speaking, we believe that our increasing industrial impact represents a success for our general goal of spreading deductive verification methods to a larger audience, and we are firmly committed to continuing this kind of action in the coming years.
We believe our impact is not limited to industrial actions per se.
A first point is that, over the years, the young people we have trained, whether in PhD or temporary engineer positions, have easily found positions in private companies. We thus believe we can say that we have contributed to job creation in several companies.
Another important part of our social impact is our work with high school students. With new curricula including more computer science than ever before, it was important to provide good reference books. With this in mind, we have contributed three books aimed at high school and preparatory school students 37, 36, 38.
The impact is not limited to books: we also helped a teacher design a lesson introducing the basic notions of program verification (e.g. loop invariants) using the Why3 tool (article IREMI). Every year we also take part in stands at the “Fête de la science” in November and in special events aimed at girls. We also often visit (high) schools to present either our profession or our research (except during the Covid pandemic).
The social impact in national education is finally made highly evident by our
involvement in the organization of the new agrégation d'informatique,
which is in charge of selecting and recruiting the best high-level teachers for the
new programmes.
Our research activities make use of standard computers for developing
software and formal proofs. We have no need for specific large-scale
computing resources. We do, however, make use of external
services for continuous integration. A continuous integration
methodology is indeed mandatory for mature software like Why3, to
ensure a safe software engineering process for maintenance and
evolution. We make the necessary efforts to keep the energy
consumption of this continuous integration process as low as
possible.
Ensuring the reproducibility of proofs in formal verification is essential. It is thus mandatory to replay such proofs regularly to make sure that changes in our software do not lose existing proofs. For example, we need to make sure that the formal verification case studies presented in our gallery remain reproducible. We also keep the energy consumption of replaying proofs low, by doing it only when necessary.
As is widely accepted nowadays, the major sources of environmental impact of research are travel to international conferences by plane and the renewal of electronic devices. The number of trips we made in 2022 remained very low compared to previous years, partly because of the Covid pandemic and because many conferences now offer online participation. We intend to continue limiting the environmental impact of our travel. Concerning the renewal of electronic devices, mainly laptops and monitors, we have always been careful to keep them usable for as long as possible.
Our research results aim at improving the quality of software, in particular in mission-critical contexts. As such, making software safer is likely to reduce the need for maintenance operations and thus to reduce energy costs.
Our efforts are mostly towards ensuring the safety of functional behavior of software, but we also increasingly consider the verification of their time or memory consumption. Reducing those would naturally induce a reduction in energy consumption.
Our research never involves any processing of personal data;
consequently, we have no concerns about preserving individual privacy,
nor with respect to the RGPD (Règlement Général sur
la Protection des Données).
Recently, S. Boldo served on the program committee of the first PROPL workshop (Programming for the Planet), which explores how we may help with topics such as climate analysis, modelling, forecasting, policy, and diplomacy.
CoqInterval is a library for the proof assistant Coq.
It provides several tactics for proving theorems on enclosures of real-valued expressions. The proofs are performed by an interval kernel which relies on a computable formalization of floating-point arithmetic in Coq.
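A typical goal handled by these tactics is an enclosure such as (an illustrative statement, not taken from the library's documentation)

    \forall x \in [1,2], \quad \left| \sqrt{x} - \left(1 + \tfrac{x-1}{2}\right) \right| \le 0.1,

which the interval kernel establishes by computing a floating-point interval enclosure of the left-hand side.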
The Marelle team developed a formalization of rigorous polynomial approximation using Taylor models in Coq; this library was included in CoqInterval in 2014.
The Flocq library for the Coq proof assistant is a comprehensive formalization of floating-point arithmetic: core definitions, axiomatic and computational rounding operations, high-level properties. It provides a framework for developers to formally verify numerical applications.
Flocq is currently used by the CompCert verified compiler to support floating-point computations.
Creusot is a tool for deductive verification of Rust code. It allows you to annotate your code with specifications, invariants and assertions and then verify them formally and automatically, proving, mathematically, that your code satisfies your specifications.
Creusot works by translating Rust code to WhyML, the verification and specification language of Why3. Users can then leverage the full power of Why3 to (semi-)automatically discharge the verification conditions.
The formalization in Coq of simplicial Lagrange finite elements is almost complete. It includes the definitions and main properties of monomials, their representation using multi-indices, Lagrange polynomials, and the vector space of polynomials of given maximum degree (about 6 kloc). It also includes algebraic complements: the definitions and main properties of operators on finite families of any type, the specific cases of abelian monoids (sum), vector spaces (linear combination), and affine spaces (affine combination, barycenter, affine mapping), sub-algebraic structures, and the basics of finite-dimensional linear algebra (about 22 kloc). A new version (2.0) of the opam package will be available soon, and a paper will follow.
We have also contributed to the Coquelicot library by adding the algebraic structure of abelian monoid, which is now the base of the hierarchy of canonical structures of the library.
The use of data in the Toccata team is quite simple and perfectly open. First of all, we never make use of any personal data, so conforming to European rules such as the RGPD raises no issue.
Our data mainly consists of programs, importantly equipped with formal
specifications. These specifications are often completed with
additional formal annotations in the code itself, so as to make the
code automatically provable with respect to its specifications. The
most important kind of such internal annotations is the loop
invariant. Exposing loop invariants is crucial for making proofs
automatic and reproducible; one may even say that loop invariants are
the central arguments for the correctness of an algorithm. Given the
importance of such data for the reproducibility of proofs, we decided
to make it openly available. This is why we decided to build a
gallery
of verified programs that is augmented regularly.
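As an illustration of the kind of annotated program the gallery contains, here is a minimal WhyML sketch (a hypothetical example): the loop invariant is precisely the information that makes the proof automatic and reproducible.

    module SumUpTo
      use int.Int
      use ref.Ref

      (* sums the integers from 1 to n; the invariant relates the
         accumulator to the loop index at each iteration *)
      let sum_upto (n: int) : int
        requires { n >= 0 }
        ensures  { 2 * result = n * (n + 1) }
      = let s = ref 0 in
        for i = 1 to n do
          invariant { 2 * !s = i * (i - 1) }
          s := !s + i
        done;
        !s
    end

Without the invariant, automatic provers have no way to reconstruct the argument; with it, every verification condition is a simple arithmetic fact.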
Other similar annotated programs are part of the test suites of our tools and are typically rechecked regularly in continuous integration processes; this is for example the case for Why3 and Creusot. Such a practice is crucial to maintain the reproducibility of proofs in the long term, as the tools themselves evolve.
A representative foundational work is that of Balabonski, Lanco,
and Melquiond, who devised a call-by-need lambda-calculus enabling
strong reduction (i.e. reduction inside the body of
abstractions) and guaranteeing that arguments are only evaluated if
needed and at most once [11, 60].
This calculus uses explicit substitutions and subsumes the existing
strong-call-by-need strategy, but allows for more reduction sequences,
and often shorter ones, while preserving the neededness. The calculus
is strongly normalizing. Moreover, by adding some restrictions to it, the
calculus gains the diamond property and only performs reduction
sequences of minimal length, which makes it systematically better than
the existing strategies. The Abella proof assistant has been used to
formalize part of this calculus.
Continuation-passing style allows us to devise an extremely economical abstract syntax for a generic algorithmic language. This syntax is flexible enough to naturally express conditionals, loops, (higher-order) function calls, and exception handling. It is type-agnostic and state-agnostic, which means that we can combine it with a wide range of type and effect systems. Paskevich 31 shows how programs written in the continuation-passing style can be augmented in a natural way with specification annotations, ghost code, and side-effect discipline. He defines the rules of verification condition generation for this syntax, and shows that the resulting formulas are nearly identical to what traditional approaches, like the weakest precondition calculus, produce for the equivalent algorithmic constructions. This amounts to a minimalistic yet versatile abstract syntax for annotated programs for which one can compute verification conditions without sacrificing their size, legibility, and amenability to automated proof, compared to more traditional methods. This makes it an excellent candidate for internal code representation in program verification tools, a subject of the on-going PhD thesis of P. Patault.
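For reference, the weakest-precondition rules used as the baseline for this comparison are the textbook ones:

    \mathrm{wp}(\mathtt{skip}, Q) = Q, \qquad
    \mathrm{wp}(e_1;\, e_2, Q) = \mathrm{wp}(e_1, \mathrm{wp}(e_2, Q)),
    \mathrm{wp}(\mathtt{if}\ b\ \mathtt{then}\ e_1\ \mathtt{else}\ e_2,\ Q) =
      (b \Rightarrow \mathrm{wp}(e_1, Q)) \wedge (\lnot b \Rightarrow \mathrm{wp}(e_2, Q)).

The claim is that the formulas produced from the continuation-passing syntax are nearly identical to these.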
The discovery of invariants is another important topic. Fully automatic generation of invariants was studied in collaboration with an industrial partner: we devised an original approach, based on abstract interpretation, using a domain of parametrized binary decision diagrams 27.
ACSL, short for ANSI/ISO C Specification Language, is meant to express precisely and unambiguously the expected behavior of a piece of C code. It plays a central role in Frama-C, as nearly all plug-ins eventually manipulate ACSL specifications, either to generate properties that are to be verified, or to assess that the code is conforming to these specifications. It is thus very important to have a clear view of ACSL's semantics in order to be sure that what you check with Frama-C is really what you mean. Marché contributed to a chapter 28 of the Frama-C book, describing the language in an agnostic way, independently of the various verification plug-ins that are implemented in the Frama-C platform. It contains many examples and exercises that introduce the main features of the language and insists on the most common pitfalls that users, even experienced ones, may encounter.
One of the major successes of
Toccata in recent years is the set of results obtained
concerning the verification of Rust programs. Rust is a fairly recent
programming language for system programming that brings static
guarantees of memory safety through a strong ownership
policy. This feature opens promising advances for the deductive
verification of Rust code. The project underlying the PhD thesis of
Denis 51, supervised by Jourdan and Marché, is to
propose techniques for the verification of Rust programs, using a
translation to a purely-functional language. The challenge of this
translation is the handling of mutable borrows: pointers which control
aliasing within a region of memory. To overcome this, we used a
technique inspired by prophecy variables to predict the final values
of borrows 52. This method is implemented in a
standalone tool called Creusot 53. The
specification language of Creusot features the notion of prophecy
mentioned above, which is central for specifying the behavior of
programs performing memory mutation. Prophecies also permit efficient
automated reasoning for verifying such programs. Moreover, Rust
provides advanced abstraction features based on a notion of
traits, extensively used in the standard library and in user
code. The support for traits is another main feature of Creusot,
because it is at the heart of its approach, in particular for
providing complex abstractions of the functional behavior of programs
53. An important step to further the
applicability of Creusot to a wide variety of Rust code is to support
iterators, which are ubiquitous and in fact idiomatic in Rust
programming (for example, every for loop is
internally desugared into an iterator). Denis and
Jourdan 20 proposed a new approach to simplify
the specifications of Rust code in the presence of iterators, and to
make the proofs more automatic.
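To convey the intuition behind prophecies, here is a loose WhyML sketch (our illustration only, not Creusot's actual encoding): a mutable borrow is modeled as its current value paired with the prophesied value it will hold when it expires.

    module BorrowModel
      (* a borrow of a value of type 'a: `cur` is what is seen
         through the borrow, `fin` is the prophesied value at expiry *)
      type borrow 'a = { mutable cur : 'a ; fin : 'a }

      (* writing through the borrow changes only the current value *)
      let set (b: borrow 'a) (v: 'a) : unit
        ensures { b.cur = v }
      = b.cur <- v

      (* when the borrow expires, the prophecy is resolved: the final
         value equals the last value written; the translation gets to
         assume this equality, stated here as an abstract contract *)
      val expire (b: borrow 'a) : unit
        ensures { b.fin = b.cur }
    end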
Paskevich and Filliâtre 26 proposed an approach that helps to improve automation of proofs for certain classes of pointer-manipulating programs. It consists in mapping a recursive data structure onto a numerical domain, in such a way that ownership and separation properties can be expressed in terms of simple arithmetic inequalities. In addition to making the proof simpler, this provides for a clearer and more natural specification.
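The gain can be stated in one line: once two fragments of a data structure are mapped to index intervals, a separation property becomes a linear arithmetic fact, for instance, for nonempty segments,

    [l_1, h_1) \cap [l_2, h_2) = \emptyset \iff h_1 \le l_2 \vee h_2 \le l_1,

which SMT solvers decide without any reasoning about the heap.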
Ownership can also be used to reason about resources other than program memory.
Guéneau, Jourdan et al. 24 present formal reasoning
rules for verifying amortized complexity bounds in a language with
thunks. Thunks can be used to construct persistent data structures with good
amortized complexity, by suspending expensive computations and memoizing their
result. Based on the notion of time credits and debits, this work
presents a complete machine-checked reconstruction of Okasaki's reasoning rules
on thunks in a rich separation logic with time credits, and demonstrates their
applicability by verifying several of Okasaki's data structures.
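For context, time credits and debits refine the classical potential method of amortized analysis: with a potential function \Phi \ge 0 over data-structure states d_i and \Phi(d_0) = 0, the amortized cost of the i-th operation is \hat{c}_i = c_i + \Phi(d_i) - \Phi(d_{i-1}), so the total actual cost telescopes to \sum_i c_i = \sum_i \hat{c}_i - \Phi(d_n) \le \sum_i \hat{c}_i. Credits materialize \Phi as a resource in separation logic, while debits account for suspended costs that have not yet been paid for.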
Ability to reason about ownership is also fertile ground for designing reasoning
principles that capture powerful semantic properties of programs. In particular,
Guéneau et al. 25 show that it is possible to capture
well-bracketedness in a Hoare-style program logic based on separation
logic, providing proof rules to show correctness of well-bracketed programs both
directly and also through defining unary and binary logical relations models
based on this program logic.
Most of the existing verification tools and systems focus on programs that are
written in a single programming language. In practice, however, programs are
often composed of components written in different programming languages,
interacting through a foreign function interface (FFI).
Guéneau et al. 22 develop a novel multi-language program
verification system, dubbed Melocoton, for reasoning about OCaml, C, and their interactions through
the OCaml FFI.
Melocoton consists of the first formal semantics of (a large subset of) the
OCaml FFI—previously only described in prose in the OCaml manual—as well as
the first program logic for reasoning about the interactions of program components
written in OCaml and C.
The Melocoton program logic is based on separation logic and expressive enough
to express fine-grained transfers of ownership between the different languages.
It has been fully mechanized in Coq on top of the Iris separation
logic framework.
A capability machine is a type of CPU allowing fine-grained privilege
separation using capabilities, machine words that represent
certain kinds of authority. Guéneau et
al. 13 present a mathematical model and
accompanying proof methods that can be used for formal verification of
functional correctness of programs running on a capability machine,
even when they invoke and are invoked by unknown (and possibly
malicious) code. They use a program logic called Cerise for reasoning
about known code, and an associated logical relation, for reasoning
about unknown code. The logical relation formally captures the
capability safety guarantees provided by the capability machine. The
Cerise program logic, logical relation, and all the examples
considered in the paper have been mechanized using the Iris program
logic framework in the Coq proof assistant. In subsequent work, they
show that this approach enables the formal verification of full-system
security properties under multiple attacker models: different security
objectives of the full system can be verified under a different choice
of trust boundary (i.e. under a different attacker model)
69. The proposed verification approach
is modular, and is robust: code outside the trust boundary for
a given security objective can be arbitrary, unverified
attacker-provided code.
Our work regarding concurrent programs is mostly represented by new methods based on model-checking, and implemented in the Cubicle tool. The Model Checking Modulo Theories (MCMT) framework is a powerful model checking technique for verifying safety properties of parameterized transition systems. In MCMT, logical formulas are used to represent both transitions and sets of states and safety properties are verified by an SMT-based backward reachability analysis. To be fully automated, the class of formulas handled in MCMT is restricted to cubes, i.e. existentially quantified conjunction of literals. While being very expressive, cubes cannot define properties with a global termination condition, usually described by a universally quantified formula. Conchon and Korneva 19 presented the Cubicle Fuzzy Loop (CFL), a fuzzing-based extension for Cubicle. To prove safety, Cubicle generates invariants, making use of forward exploration strategies like BFS or DFS on finite model instances. However, these standard algorithms are quickly faced with the state explosion problem due to Cubicle’s purely nondeterministic semantics. This causes them to struggle at discovering critical states, hindering invariant generation. CFL replaces this approach with a powerful DFS-like algorithm inspired by fuzzing. Cubicle’s purely nondeterministic execution loop is modified to provide feedback on newly discovered states and visited transitions. This feedback is used by CFL to construct schedulers that guide the model exploration. Not only does this provide Cubicle with a bigger variety of states for generating invariants, it also quickly identifies unsafe models. As a bonus, it adds testing capabilities to Cubicle, such as the ability to detect deadlocks. The first experiments yielded promising results. CFL effectively allows Cubicle to generate crucial invariants, useful to handle hierarchical systems, while also being able to trap bad states and deadlocks in hard-to-reach areas of such models.
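To make the restriction concrete: the canonical unsafe-state formula for mutual exclusion is the cube

    \exists i\, j.\; i \ne j \wedge \mathit{state}(i) = \mathit{Crit} \wedge \mathit{state}(j) = \mathit{Crit},

whereas a global termination condition such as \forall i.\; \mathit{done}(i) is universally quantified and thus falls outside the cube fragment (a standard textbook example, not one taken from the paper).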
We have a long tradition of studying subtle algorithms involving numerical computations, and of verifying properties regarding accuracy, in particular when floating-point numbers are used. A set of numerical programs that we studied this year relates to the combination of exponential and logarithm functions: Bonnot et al. 30 provide certified bounds on the accuracy of the log-sum-exp function known in the context of machine learning 62. Writing a formal proof offers the highest possible confidence in the correctness of a mathematical library. This comes at a large cost though, since formal proofs require taking all the details into account, even the seemingly insignificant ones, which makes them tedious to write. This issue is compounded by the fact that the objects whose properties we need to verify (floating-point numbers) are not the ones we would like to reason about (real numbers and integers). Geneau, Melquiond and Faissole 21 explore some ways of reducing the overhead of formal proofs in the setting of mathematical libraries, so as to let the user focus on the details that really matter.
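For reference, the function in question is

    \mathrm{LSE}(x_1, \dots, x_n) = \log \sum_{i=1}^{n} e^{x_i},

usually evaluated as m + \log \sum_i e^{x_i - m} with m = \max_i x_i to avoid overflow; bounding the floating-point rounding error of such a rearrangement is the kind of accuracy property being certified.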
Performing a formal verification inside a proof system such as Coq might be a costly endeavor. In some cases, it might be much more efficient to turn the whole process of proof generation and proof checking into the evaluation of a boolean formula, accompanied with a proof that, if this formula evaluates to true, then the original property holds. This approach has long been used for proofs that involve computations on large integers. Martin-Dorel, Melquiond and Roux 14 have shown that computational reflection can also be achieved using floating-point arithmetic, despite the inherent round-off errors, thus leveraging the large computing power of the floating-point units for formal proofs.
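The underlying reflection scheme is the standard one: from a boolean checker and its soundness lemma

    \forall f,\; \mathrm{check}(f) = \mathrm{true} \;\Rightarrow\; \llbracket f \rrbracket,

proving \llbracket f \rrbracket reduces to evaluating \mathrm{check}(f); the contribution is to let this evaluation use floating-point arithmetic soundly in spite of round-off errors.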
Boldo et al. published a survey on floating-point arithmetic 12 as an open-access journal paper in Acta Numerica in order to spread the knowledge on computer arithmetic.
The correctness of programs solving partial differential equations may rely on
mathematics yet unformalized, such as Sobolev spaces. Boldo et
al. 43 therefore formalized the mathematical concept of
Lebesgue integration and the associated results in Coq.
In formal systems combining dependent types and inductive types, such as Coq, non-terminating programs are frowned upon. They can indeed be made to return impossible results, thus endangering the consistency of the system, although the transient usage of a non-terminating Y combinator, typically for searching for witnesses, is safe. To avoid this issue, the definition of a recursive function is allowed only if one of its arguments is of an inductive type and any recursive call is performed on a syntactically smaller argument. If there is no such argument, the user has to artificially add one, e.g., an accessibility property. Free monads can still be used to address general recursion, and elegant methods make it possible to extract partial functions from sophisticated recursive schemes. The latter nevertheless rely on an inductive characterization of the domain of a function, and of its computational graph, which in turn might require a substantial effort of specification and proof. This leads to a rather frustrating situation when computations are involved. Indeed, the user first has to formally prove that the function will terminate, then the computation can be performed, and finally a result is obtained (assuming the user waited long enough). But since the computation did terminate, what was the point of proving that it would terminate? Mahboubi and Melquiond 23 investigated how users of proof assistants based on variants of the Calculus of Inductive Constructions could benefit from manifestly terminating computations.
Our work on OCaml programs is not limited to purely static
verification: we worked on runtime assertion checking for OCaml.
In behavioural specifications of imperative languages, postconditions
may refer to the prestate of the function, usually with an
old operator. Therefore, code performing runtime
verification has to record prestate values required to evaluate the
postconditions, typically by copying part of the memory state, which
causes severe verification overhead, both in memory and CPU time.
Filliâtre and Pascutto 55, 67 consider the
problem of efficiently capturing prestates in the context of
Ortac, a
runtime assertion checking tool for OCaml. Their contribution is a
postcondition transformation that reduces the subset of the prestate
to copy. They formalize this transformation and prove that it is
sound and that it improves the performance of the instrumented
programs. They illustrate the benefits of this approach with a maze
generator. Benchmarks show that unoptimized instrumentation is not
practicable, while their transformation restores performance similar
to that of the program without any runtime checks.
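To illustrate the problem on a minimal example (written in WhyML syntax for uniformity; Gospel postconditions are analogous), the old operator below forces a runtime checker to snapshot the prestate value of the reference before executing the body:

    module Incr
      use int.Int
      use ref.Ref

      (* `old !r` denotes the value of r before the call: a runtime
         assertion checker must copy it prior to running the body in
         order to evaluate the postcondition afterwards *)
      let incr (r: ref int) : unit
        ensures { !r = old !r + 1 }
      = r := !r + 1
    end

The transformation mentioned above reduces how much of such prestate data actually needs to be copied.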
When testing a library, developers typically first have to capture the semantics they want to check; they then write the code implementing these tests and find relevant test cases that expose possible misbehaviours. Osborne and Pascutto 66, 67 present a tool that takes care of these last two steps by automatically generating fuzz-testing suites from OCaml interfaces annotated with formal behavioral specifications. They also report on ongoing experiments on the capabilities and limitations of fuzzing applied to real-world libraries.
As part of his CIFRE PhD with OCamlPro, Léo Andrès is formalizing a
compilation scheme from OCaml to WebAssembly. This ongoing work
has already validated several Wasm extensions 17.
A by-product of the thesis is the implementation of a new, efficient
interpreter for Wasm,
owi. Léo
collaborates with José Fragoso Santos and Filipe Marques (Universidade
de Lisboa, Portugal), who are using owi for concolic
execution of WebAssembly programs.
We have bilateral contracts which are closely related to a joint effort called the ProofInUse consortium. The objective of ProofInUse is to provide verification tools, based on mathematical proof, to industry users. These tools are aimed at replacing or complementing the existing test activities, whilst reducing costs.
This consortium is a follow-up of the former LabCom ProofInUse between Toccata and the SME AdaCore, funded by the ANR programme “Laboratoires communs”, from April 2014 to March 2017.
This collaboration is a joint effort of the Inria project-team Toccata and the AdaCore company which provides development tools for the Ada programming language. It is funded by a 5-year bilateral contract from Jan 2019 to Dec 2023.
The SME AdaCore is a software publisher specializing in providing software development tools for critical systems. A previous successful collaboration between Toccata and AdaCore enabled Why3 technology to be put into the heart of the AdaCore-developed SPARK technology.
The objective of ProofInUse-AdaCore is to significantly increase the capabilities and performances of the Spark/Ada verification environment proposed by AdaCore. It aims at integration of verification techniques at the state-of-the-art of academic research, via the generic environment Why3 for deductive program verification developed by Toccata.
This bilateral contract is part of the ProofInUse effort. This collaboration joins efforts of the Inria project-team Toccata and the company Mitsubishi Electric R&D (MERCE) in Rennes. It is funded by a bilateral contract of 3 years and 6 months from Nov 2019 to April 2023.
MERCE has strong and recognized skills in the field of formal methods. In the industrial context of the Mitsubishi Electric Group, MERCE has acquired knowledge of the specific needs of the development processes and meets the needs of the group in different areas of application by providing automatic verification and demonstration tools adapted to the problems encountered.
The objective of ProofInUse-MERCE is to significantly improve on-going MERCE tools regarding the verification of Programmable Logic Controllers and also regarding the verification of numerical C codes.
This bilateral contract is part of the ProofInUse effort. This collaboration joins efforts of the Inria project-team Toccata and the company TrustInSoft in Paris. It is funded by a bilateral contract of 24 months from Dec 2020 to Nov 2022.
TrustInSoft is an SME that offers the TIS-Analyzer environment for analysis of safety and security properties of source codes written in C and C++ languages. A version of TIS-Analyzer is available online, under the name TaaS (TrustInSoft as a Service).
The objective of ProofInUse-TrustInSoft is to integrate deductive verification in the TIS-Analyzer platform, in the form of a new plug-in called J-cube. One specific interest resides in the generation of counterexamples to help the user in case of proof failure.
Toccata and the company TrustInSoft set up a research action in the context of the national “plan de relance”. It is funded for 24 months, from January 2022 to December 2023. The funding covers R. Rieu-Helft's 80%-time secondment as an invited researcher in Toccata.
The objective of this action is to extend the ProofInUse-TrustInSoft collaboration towards two axes: a refinement of the J-cube memory model incorporating a static separation analysis, and the support of the C++ language.
Clément Pascutto started a CIFRE PhD in June 2020, under the supervision of Jean-Christophe Filliâtre (at Toccata) and Thomas Gazagnaire (at Tarides). The subject of the PhD is the dynamic and deductive verification of OCaml programs and its application to distributed data structures.
Léo Andrès started a CIFRE PhD in October 2021, under the supervision of Jean-Christophe Filliâtre (at Toccata) and Pierre Chambart and Vincent Laviron (at OCamlPro). The subject of the PhD is the design, formalization, and implementation of a garbage collector for WebAssembly.
EMC2 project on cordis.europa.eu
Molecular simulation has become an instrumental tool in chemistry, condensed matter physics, molecular biology, materials science, and nanosciences. It will make it possible to propose the de novo design of e.g. new drugs or materials, provided the efficiency of the underlying software is accelerated by several orders of magnitude.
The ambition of the EMC2 project is to achieve scientific breakthroughs in this field by gathering the expertise of a multidisciplinary community at the interfaces of four disciplines: mathematics, chemistry, physics, and computer science. It is motivated by the twofold observation that, i) building upon our collaborative work, we have recently been able to gain efficiency factors of up to 3 orders of magnitude for polarizable molecular dynamics in solution of multi-million atom systems, but this is not enough since ii) even larger or more complex systems of major practical interest (such as solvated biosystems or molecules with strongly-correlated electrons) are currently mostly intractable in reasonable clock time. The only way to further improve the efficiency of the solvers, while preserving accuracy, is to develop physically and chemically sound models, mathematically certified and numerically efficient algorithms, and implement them in a robust and scalable way on various architectures (from standard academic or industrial clusters to emerging heterogeneous and exascale architectures).
EMC2 has no equivalent in the world: nowhere else is there such a critical mass of interdisciplinary researchers, already collaborating and with the required track records, to address this challenge. Under the leadership of the 4 PIs, supported by highly recognized teams from three major institutions in the Paris area, EMC2 will develop disruptive methodological approaches and publicly available simulation tools, and apply them to challenging molecular systems. The project will strongly strengthen the local teams and their synergy, enabling decisive progress in the field.
Using computers to formulate conjectures and consolidate proof steps pervades all fields of mathematics, even the most abstract. Most computer proofs are produced by symbolic computations, using computer algebra systems. However, these systems suffer from severe intrinsic flaws, which make the correctness of their computations hard to verify. The FRESCO project aims to shed light on whether computer algebra could be both reliable and fast. Researchers will disrupt the architecture of proof assistants, which serve as the best tools for representing mathematics in silico, enriching their programming features while preserving their compatibility with their logical foundations. They will also design novel mathematical software featuring a high-level, performance-oriented programming environment for writing efficient code to boost computational mathematics.
The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is becoming more and more important. Such proofs require various levels of numerical safety, from fast and stable computations to formal proofs of the computations. However, the necessary tools and routines are usually ad hoc, sometimes unavailable, or even nonexistent. From a complementary perspective, numerical safety is also critical for complex guidance and control algorithms, in the context of increased satellite autonomy. We plan to design a whole set of theorems, algorithms and software developments that will allow one to study a computational problem at all (or any) of the desired levels of numerical rigor. Key developments include fast and certified spectral methods and polynomial arithmetic, with subsequent formal verifications. There will be a strong feedback loop between the development of our tools and the applications that motivate it.
The project led by École Normale Supérieure de Lyon (LIP) has started in February 2021 and lasts for 4 years. Partners: Inria (teams Aric, Galinette, Lfant, Marelle, Toccata), École Polytechnique (LIX), Sorbonne Université (LIP6), Université Sorbonne Paris Nord (LIPN), CNRS (LAAS).
A specification language extends a programming language by allowing code and specifications to be written in a single document. Examples include SparkAda, JML, and ACSL, which extend Ada, Java, and C with syntax for specifications.
By offering a specification language to programmers, one
encourages them to document, test, and verify their code as they
write it, not as a separate step that is too easily postponed.
From a technical point of view, the presence of specifications makes
it possible to test or verify each module independently and is the
key to scalability.
From a pragmatic point of view, embedding specifications in the code
allows them to be automatically distributed (via a package
management system) to every programmer; this is the key to
practical adoption.
The GOSPEL project proposes to develop Gospel,
a specification language that extends the programming language
OCaml; to develop an ecosystem of tools based on Gospel; and to
demonstrate and validate these tools via several case studies.
The project led by Inria Paris has started in October 2022 and lasts for 4 years. Partners: Inria Paris (team Cambium), Université Paris-Saclay (LMF), Tarides, Nomadic Labs.
The SecureVal project aims to design new tools, benefiting from new digital technologies, to verify the absence of hardware and software vulnerabilities and to carry out the required compliance proofs.
In order to deal effectively with modern digital systems, code analysis techniques, which originated in the world of critical systems, must be overhauled to adapt to the objectives of security assessments and to scale up to complex systems combining dedicated functionalities and third-party libraries. Examples include the design of new fault models, the support of emerging languages, the visualization of formal guarantees, the use of learning techniques to automate repetitive actions or to optimize the extraction of relevant information, and the development of approaches combining static and dynamic analyses.
The project is led by CEA-List, it started in 2022 and lasts for 6 years.
The grant covers 3 PhD positions and 24 months of engineer position.
The Décysif project is a brand new project, started in December 2023 for 4 years. Its general goal is the promotion of formal verification for critical systems with regard to cybersecurity. This project will fund our future research on Rust program verification, and it contains a work package dedicated to the industrialization of the Creusot tool.
The project is led by TrustInSoft company, with AdaCore and OCamlPro as other partners.
The Défi Inria LiberAbaci
is a collaborative project aimed at improving the accessibility of the Coq
interactive proof system for an audience of mathematics students in the early
academic years.
The head is Yves Bertot, and the involved teams are: Cambium (Paris), Camus (Strasbourg), Gallinette (Nantes), PiCube (Paris), Spades (Grenoble), Stamp (Sophia Antipolis), Toccata (Saclay), and LIPN (Villetaneuse).