A major challenge in modeling and scientific computing is the simultaneous mastery of hardware capabilities, software design, and mathematical algorithms for the efficiency and reliability of the computation. In this context, the overall objective of AriC is to improve computing at large, in terms of performance, efficiency, and reliability. We work on the fine structure of floating-point arithmetic, on controlled approximation schemes, on algebraic algorithms and on new cryptographic applications, most of these themes being pursued in their interactions. Our approach combines fundamental studies, practical performance and qualitative aspects, with a shared strategy going from high-level problem specifications and standardization actions, to computer arithmetic and the lowest-level details of implementations.
This makes AriC the right place for drawing the following lines of action:
According to the application domains that we target and our main fields of expertise, these lines of action are organized into three themes with specific objectives.
The last twenty years have seen the advent of computer-aided proofs in mathematics, and this trend is becoming increasingly important. Such proofs require:
fast and stable numerical computations;
numerical results with a guarantee on the error;
formal proofs of these computations or computations with a proof assistant.
One of our main long-term objectives is to develop a platform where one can study a computational problem on all (or any) of these three levels of rigor.
At this stage, most of the necessary routines are not easily available (or do not even exist) and one needs to develop ad hoc tools to complete the proof. We plan to provide more and more algorithms and routines to address such questions. Possible applications lie in the study of mathematical conjectures where exact mathematical results are required (e.g., stability of dynamical systems); or in more applied questions, such as the automatic generation of efficient and reliable numerical software for function evaluation.
From a complementary viewpoint, numerical safety is also critical in robust space mission design, where guidance and control algorithms become more complex in the context of increased satellite autonomy. We will pursue our collaboration with specialists of that area, whose questions help us focus on relevant issues.
Floating-point arithmetic is currently undergoing a major evolution, in particular with the recent advent of a greater diversity of available precisions on the same system (from 8 to 128 bits) and of coarser-grained floating-point hardware instructions. This new arithmetic landscape raises important issues at the various levels of computing, which we will address along the following three directions.
One of our targets is the design of building blocks of computing (e.g., algorithms for the basic operations and functions, and algorithms for complex or double-word arithmetic). Establishing properties of these building blocks (e.g., the absence of “spurious” underflows/overflows) is also important. The IEEE 754 standard on floating-point arithmetic (which was slightly revised in 2019) will have to undergo a major revision within a few years: first because advances in technology or new needs make some of its features obsolete, and second because new features need standardization. We aim at playing a leading role in the preparation of the next standard.
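As a textbook illustration of what the absence of “spurious” overflow means for such a building block (this is a standard example, not one of our algorithms), consider the Euclidean norm of a two-dimensional vector in binary64, sketched here in Python:

```python
import math

def naive_hypot(x, y):
    # x * x already overflows for |x| around 1e200, even though the true norm is representable
    return math.sqrt(x * x + y * y)

def scaled_hypot(x, y):
    # scaling by the largest magnitude avoids spurious overflow/underflow
    a, b = max(abs(x), abs(y)), min(abs(x), abs(y))
    if a == 0.0:
        return 0.0
    return a * math.sqrt(1.0 + (b / a) ** 2)

print(naive_hypot(1e200, 1e200))   # inf (spurious overflow)
print(scaled_hypot(1e200, 1e200))  # about 1.414e+200, the expected result
```

The scaled variant trades one extra division for a result that stays representable whenever the exact norm is.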
We will pursue our studies in rounding error analysis, in particular for the “low precision–high dimension” regime, where traditional analyses become ineffective and where improved bounds are thus most needed. For this, the structure of both the data and the errors themselves will have to be exploited. We will also investigate the impact of mixed-precision and coarser-grained instructions (such as small matrix products) on accuracy analyses.
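As a toy illustration (not taken from our publications) of the gap that motivates this work: for recursive summation in binary32, the classical worst-case relative error bound grows like nu, whereas errors observed on random data typically behave more like √n·u, in line with probabilistic rounding error analyses.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u32 = 2.0 ** -24                                    # unit roundoff of binary32

x = rng.random(n).astype(np.float32)
s = np.float32(0.0)
for xi in x:                                        # recursive summation, entirely in binary32
    s = np.float32(s + xi)

exact = math.fsum(float(v) for v in x)              # exact sum of the stored binary32 inputs
print("observed relative error  :", abs(float(s) - exact) / exact)
print("worst-case bound ~ n*u   :", n * u32)
print("probabilistic ~ sqrt(n)*u:", math.sqrt(n) * u32)
```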
Most directions in the team are concerned with optimized and high performance implementations. We will pursue our efforts concerning the implementation of well optimized floating-point kernels, with an emphasis on numerical quality, and taking into account the current evolution in computer architectures (the increasing width of SIMD registers, and the availability of low precision formats). We will focus on computing kernels used within other axes in the team such as, for example, extended precision linear algebra routines within the FPLLL and HPLLL libraries.
We intend to strengthen our assessment of the cryptographic relevance of problems over lattices, and to broaden our studies in two main (complementary) directions: hardness foundations and advanced functionalities.
Recent advances in cryptography have broadened the scope of encryption functionalities (e.g., encryption schemes allowing one to compute over encrypted data or to delegate partial decryption keys). While simple variants (e.g., identity-based encryption) are already practical, the more advanced ones still lack efficiency. Towards reaching practicality, we plan to investigate simpler constructions of the fundamental building blocks (e.g., pseudorandom functions) involved in these advanced protocols. We aim at simplifying known constructions based on standard hardness assumptions, but also at identifying new sources of hardness from which simple constructions that are naturally suited for the aforementioned advanced applications could be obtained (e.g., constructions that minimize critical complexity measures such as the depth of evaluation). Understanding the core source of hardness of today's standard hard algorithmic problems is an interesting direction, as it could lead to new hardness assumptions (e.g., tweaked versions of standard ones) from which we could derive much more efficient constructions. Furthermore, it could open the way to completely different constructions of advanced primitives based on new hardness assumptions.
Lattice-based cryptography has come much closer to maturity in the recent past. In particular, NIST has started a standardization process for post-quantum cryptography, and lattice-based proposals are numerous and competitive. This dramatically increases the need for cryptanalysis:
Do the underlying hard problems suffer from structural weaknesses? Are some of the problems used easy to solve, e.g., asymptotically?
Are the chosen concrete parameters meaningful for concrete cryptanalysis? In particular, how secure would they be if all the known algorithms and implementations thereof were pushed to their limits? How would these concrete performances change in case (full-fledged) quantum computers get built?
On another front, the cryptographic functionalities reachable under lattice hardness assumptions seem to get closer to an intrinsic ceiling. For instance, to obtain cryptographic multilinear maps, functional encryption and indistinguishability obfuscation, new assumptions have been introduced. They often have a lattice flavour, but are far from standard. Assessing the validity of these assumptions will be one of our priorities in the mid-term.
In the design of cryptographic schemes, we will pursue our investigations on functional encryption. Despite recent advances, efficient solutions are only available for restricted function families. Indeed, solutions for general functions are either way too inefficient for practical use or they rely on uncertain security foundations like the existence of circuit obfuscators (or both). We will explore constructions based on well-studied hardness assumptions and which are closer to being usable in real-life applications. In the case of specific functionalities, we will aim at more efficient realizations satisfying stronger security notions.
Another direction we will explore is multi-party computation via a new approach exploiting the rich structure of class groups of quadratic fields. We already showed that such groups have a positive impact in this field by designing new efficient encryption switching protocols from the additively homomorphic encryption we introduced earlier. We want to go deeper in this direction, which raises interesting questions, such as how to design efficient zero-knowledge proofs for groups of unknown order, how to exploit their structure in the context of two-party cryptography (such as two-party signing), or how to extend these techniques to the multi-party setting.
In the context of the PROMETHEUS H2020 project, we will keep seeking to develop new quantum-resistant privacy-preserving cryptographic primitives (group signatures, anonymous credentials, e-cash systems, etc). This includes the design of more efficient zero-knowledge proof systems that can interact with lattice-based cryptographic primitives.
The connections between algorithms for structured matrices and for polynomial matrices will continue to be developed, since they have proved to bring progress to fundamental questions with applications throughout computer algebra. The new fast algorithm for the bivariate resultant opens an exciting area of research which should produce improvements to a variety of questions related to polynomial elimination. Obviously, we expect to produce results in that area.
For definite summation and integration, we now have fast algorithms for single integrals of general functions and sequences and for multiple integrals of rational functions. The long-term objective of that part of computer algebra is an efficient and general algorithm for multiple definite integration and summation of general functions and sequences. This is the direction we will take, starting with single definite sums of general functions and sequences (leading in particular to a faster variant of Zeilberger's algorithm). We also plan to investigate geometric issues related to the presence of apparent singularities and how they seem to play a role in the complexity of the current algorithms.
Our expertise in validated numerics is useful for analyzing, improving, and guaranteeing the quality of numerical results in a wide range of applications, including:
Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floating-point computations.
Lattice reduction algorithms have direct applications in
Best paper award at ARITH 2023 for the article “Towards Machine-Efficient Rational L…”.
fplll contains implementations of several lattice algorithms. The implementation relies on floating-point orthogonalization, and LLL is central to the code, hence the name.
It includes implementations of floating-point LLL reduction algorithms, offering different trade-offs between speed and guarantees. It contains a 'wrapper' choosing the estimated best sequence of variants in order to provide a guaranteed output as fast as possible. In the case of the wrapper, the succession of variants is hidden from the user.
It includes an implementation of the BKZ reduction algorithm, including the BKZ-2.0 improvements (extreme enumeration pruning, pre-processing of blocks, early termination). Additionally, Slide reduction and self dual BKZ are supported.
It also includes a floating-point implementation of the Kannan-Fincke-Pohst algorithm that finds a shortest non-zero lattice vector. For the same task, the GaussSieve algorithm is also available in fplll. Finally, it contains a variant of the enumeration algorithm that computes a lattice vector closest to a given vector belonging to the real span of the lattice.
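A minimal usage sketch, assuming the fpylll Python bindings to fplll are installed (the module and function names below are those of fpylll, to the best of our knowledge; the parameters are arbitrary and only illustrative):

```python
from fpylll import IntegerMatrix, LLL, BKZ, SVP

A = IntegerMatrix.random(40, "qary", k=20, bits=20)  # random q-ary lattice basis
LLL.reduction(A)                                     # floating-point LLL via the wrapper
BKZ.reduction(A, BKZ.Param(block_size=10))           # BKZ reduction with block size 10
v = SVP.shortest_vector(A)                           # enumeration for a shortest nonzero vector
print(v)
```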
Abelian integrals play a key role in the infinitesimal version of Hilbert's 16th problem. Being able to evaluate such integrals - with guaranteed error bounds - is a fundamental step in computer-aided proofs aimed at this problem. Using interpolation by trigonometric polynomials and quasi-Newton-Kantorovitch validation, we develop a validated numerics method for computing Abelian integrals in a quasi-linear number of arithmetic operations. Our approach is both effective, as exemplified on two practical perturbed integrable systems, and amenable to an implementation in a formal proof assistant, which is key to provide fully reliable computer-aided proofs 4.
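The non-validated numerical core of this approach can be illustrated on a generic toy example: a smooth periodic integrand is interpolated by a trigonometric polynomial, and its period integral is the period times the zeroth Fourier coefficient, which the FFT provides with spectral accuracy. The sketch below (an arbitrary periodic function, without the rigorous error bounds that our method adds) shows this fast convergence:

```python
import numpy as np

# Period integral of a smooth 2*pi-periodic function via the zeroth Fourier coefficient:
# trigonometric interpolation makes this (trapezoidal-rule) quadrature spectrally accurate.
f = lambda t: np.exp(np.cos(t)) * np.sin(t) ** 2

for n in (8, 16, 32, 64):
    t = 2 * np.pi * np.arange(n) / n
    c0 = np.fft.fft(f(t))[0].real / n          # mean value of f over one period
    print(n, 2 * np.pi * c0)                   # converges very quickly as n grows
```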
Software implementations of mathematical functions often use approximations that can be either polynomial or rational in nature. While polynomials are the preferred approximation in most cases, rational approximations are nevertheless an interesting alternative when dealing with functions that have a pronounced "nonpolynomial behavior" (such as poles close to the approximation domain, asymptotes or finite limits at
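To illustrate the point on a toy example (not taken from the paper): a function with a pole just outside the approximation interval forces polynomial approximations to high degrees, whereas a trivial rational approximation is exact here. The sketch below uses NumPy's Chebyshev interpolation as a stand-in for best polynomial approximation:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

f = lambda x: 1.0 / (x - 1.05)          # pole at 1.05, just outside [-1, 1]
xs = np.linspace(-1.0, 1.0, 10_001)

for deg in (5, 10, 20, 40):
    p = Chebyshev.interpolate(f, deg, domain=[-1, 1])
    print(deg, np.max(np.abs(p(xs) - f(xs))))   # error decays slowly because of the nearby pole

# By contrast, a degree-(0, 1) rational function reproduces f exactly: q(x) = 1 / (x - 1.05).
```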
We deal with two complementary questions about approximation properties of ReLU networks. First, we study how the uniform quantization of ReLU networks with real-valued weights impacts their approximation properties. We establish an upper-bound on the minimal number of bits per coordinate needed for uniformly quantized ReLU networks to keep the same polynomial asymptotic approximation speeds as unquantized ones. We also characterize the error of nearest-neighbour uniform quantization of ReLU networks. This is achieved using a new lower-bound on the Lipschitz constant of the map that associates the parameters of ReLU networks to their realization, and an upper-bound generalizing classical results. Second, we investigate when ReLU networks can be expected, or not, to have better approximation properties than other classical approximation families. Indeed, several approximation families share the following common limitation: their polynomial asymptotic approximation speed of any set is bounded from above by the encoding speed of this set. We introduce a new abstract property of approximation families, called infinite-encodability, which implies this upper-bound. Many classical approximation families, defined with dictionaries or ReLU networks, are shown to be infinite-encodable. This unifies and generalizes several situations where this upper-bound is known 7.
This work introduces the first toolkit around path-norms that fully encompasses general DAG ReLU networks with biases, skip connections and any operation based on the extraction of order statistics: max pooling, GroupSort, etc. This toolkit notably allows us to establish generalization bounds for modern neural networks that are not only the most widely applicable path-norm based ones, but also recover or beat the sharpest known bounds of this type. These extended path-norms further enjoy the usual benefits of path-norms: ease of computation, invariance under the symmetries of the network, and improved sharpness on feedforward networks compared to the product of operator norms, another commonly used complexity measure. The versatility of the toolkit and its ease of implementation allow us to challenge the concrete promises of path-norm-based generalization bounds, by numerically evaluating the sharpest known bounds for ResNets on ImageNet 28.
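For intuition, the simplest instance of a path-norm, restricted to bias-free feedforward ReLU networks (the toolkit in 28 covers far more general architectures), is the sum over all input-output paths of the products of absolute weights, which can be computed layer by layer:

```python
import numpy as np

def l1_path_norm(weights):
    """L1 path-norm of a bias-free feedforward ReLU network:
    sum over all input-output paths of the product of |weights| along the path,
    i.e. 1^T |W_L| ... |W_1| 1, accumulated layer by layer."""
    v = np.ones(weights[0].shape[1])      # one unit of "mass" per input coordinate
    for W in weights:                     # W maps layer l to layer l+1
        v = np.abs(W) @ v
    return float(v.sum())

# toy network with two hidden layers
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 4)), rng.standard_normal((8, 8)), rng.standard_normal((1, 8))]
print(l1_path_norm(Ws))
```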
Sparse neural networks are mainly motivated by resource efficiency, since they use fewer parameters than their dense counterparts while reaching comparable accuracies. This article empirically investigates whether sparsity could also improve the privacy of the data used to train the networks. The experiments show positive correlations between the sparsity of the model, its privacy, and its classification error. Simply comparing the privacy of two models with different sparsity levels can yield misleading conclusions on the role of sparsity, because of the additional correlation with the classification error. From this perspective, some caveats are raised about previous works that investigate sparsity and privacy 23.
Floating-point numbers have an intuitive meaning when it comes to physics-based numerical computations, and they have thus become the most common way of approximating real numbers in computers. The IEEE-754 Standard has played a large part in making floating-point arithmetic ubiquitous today, by specifying its semantics in a strict yet useful way as early as 1985. In particular, floating-point operations should be performed as if their results were first computed with an infinite precision and then rounded to the target format. A consequence is that floating-point arithmetic satisfies the ‘standard model’ that is often used for analysing the accuracy of floating-point algorithms. But that is only scraping the surface, and floating-point arithmetic offers much more. In the survey 2 we recall the history of floating-point arithmetic as well as its specification mandated by the IEEE-754 Standard. We also recall what properties it entails and what every programmer should know when designing a floating-point algorithm. We provide various basic blocks that can be implemented with floating-point arithmetic. In particular, one can actually compute the rounding error caused by some floating-point operations, which paves the way to designing more accurate algorithms. More generally, properties of floating-point arithmetic make it possible to extend the accuracy of computations beyond working precision.
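One classical such building block is Knuth's TwoSum, which recovers the rounding error of a floating-point addition exactly; a minimal sketch in Python (whose floats are binary64):

```python
def two_sum(a, b):
    """Knuth's TwoSum: returns (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    a_approx = s - b
    b_approx = s - a_approx
    e = (a - a_approx) + (b - b_approx)
    return s, e

s, e = two_sum(1.0, 1e-16)
print(s, e)   # 1.0 1e-16 : the rounding error of the addition is recovered exactly
```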
Assume we use a binary floating-point arithmetic and that
The computation of Fast Fourier Transforms (FFTs) in floating-point arithmetic is inexact due to roundings, and for some applications it can prove very useful to know a tight bound on the final error. Although it can be almost attained by specifically built input values, the best known error bound for the Cooley-Tukey FFT seems to be much larger than most actually observed errors. Interval arithmetic can also be used to compute a bound on the error committed with a given set of input values, but it is generally considered to suffer from large overestimation. We report results of intensive computations to test the two approaches, in order to estimate the numerical performance of state-of-the-art bounds. Surprisingly enough, we observe that while interval arithmetic-based bounds are overestimated, they remain, in our computations, tighter than general known bounds 13.
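A simple way to reproduce the flavor of such experiments (a generic toy measurement, not the protocol of the paper) is to compare a binary32 FFT against a binary64 reference and set the observed relative error against a u·log2(n) yardstick:

```python
import numpy as np

rng = np.random.default_rng(1)
u32 = 2.0 ** -24                                   # unit roundoff of binary32

for n in (2**8, 2**12, 2**16):
    x = rng.standard_normal(n)
    ref = np.fft.fft(x.astype(np.float64))         # reference transform in binary64
    low = np.fft.fft(x.astype(np.float32))         # same transform computed in binary32
    rel = np.linalg.norm(low - ref) / np.linalg.norm(ref)
    print(n, rel, np.log2(n) * u32)                # observed error vs a log2(n)*u yardstick
```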
Affine iterations of the form
As developers of libraries implementing interval arithmetic, we face difficulties when it comes to testing our libraries. What must be tested? How can we devise relevant test cases for unit testing? How can we ensure a high (and possibly 100%) test coverage? In 1, before considering these questions, we briefly recall the main features of interval arithmetic and of the IEEE 1788-2015 standard for interval arithmetic. After listing the different aspects that, in our opinion, must be tested, we contribute a first step towards offering a test suite for an interval arithmetic library. First we define a format that enables the exchange of test cases, so that they can be read and tried easily. Then we offer a first set of test cases, for a selected set of mathematical functions. Next, we examine how the Julia interval arithmetic library, IntervalArithmetic.jl, actually performs on these tests. As this is an ongoing work, we list extra tests that we deem important to perform.
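A minimal sketch of what such a unit test can look like for the containment property, using a hypothetical toy interval square root isqrt (it is not the API of any existing library; the reference implementation simply widens a correctly rounded sqrt by one ulp, which requires Python 3.9+ for math.nextafter):

```python
import math
import random

def isqrt(iv):
    """Toy interval square root: outward widening of the endpoint values by one ulp."""
    lo, hi = iv
    return math.nextafter(math.sqrt(lo), -math.inf), math.nextafter(math.sqrt(hi), math.inf)

def check_containment(trials=10_000):
    """Check that isqrt([lo, hi]) contains sqrt(x) for sampled points x of the input."""
    for _ in range(trials):
        lo = random.uniform(0.0, 100.0)
        hi = lo + random.uniform(0.0, 100.0)
        rlo, rhi = isqrt((lo, hi))
        for x in (lo, hi, random.uniform(lo, hi)):
            assert rlo <= math.sqrt(x) <= rhi, (lo, hi, x)

check_containment()
print("containment tests passed")
```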
The IEEE 1788-2015 standard for interval arithmetic defines three accuracy modes for the so-called set-based flavor: tightest, accurate and valid. This work in progress 30 focuses on the accurate mode. First, an introduction to interval arithmetic and to the IEEE 1788-2015 standard is given, then the accurate mode is defined. How can this accurate mode be tested, when a library implementing interval arithmetic claims to provide this mode? The chosen approach is unit testing, and the elaboration of testing pairs for this approach is developed. A discussion closes this paper: how can the tester be tested? And, if we go to the roots of the subject, is the accurate mode really relevant or should it be dropped from the next version of the standard?
In 16, we design algorithms for the correct rounding of the power function
This work was done with Laurence Rideau (STAMP Team, Sophia). In 8, we consider the computation of the Euclidean (or L2) norm of an
We propose and analyze a simple strategy for constructing 1-key constrained pseudorandom functions (CPRFs) from homomorphic secret sharing (HSS). In the process, we obtain the following contributions. First, we identify desirable properties of the underlying HSS scheme for our strategy to work. Second, we show that (most) recent existing HSS schemes satisfy these properties, leading to instantiations of CPRFs for various constraints and from various assumptions. Notably, we obtain the first (1-key selectively secure, private) CPRFs for inner-product and (1-key selectively secure) CPRFs for NC1 from the DCR assumption, and more. Lastly, we revisit two applications of HSS, equipped with these additional properties, to secure computation: we obtain secure computation in the silent preprocessing model with one party being able to precompute its whole preprocessing material before even knowing the other party, and we construct one-sided statistically secure computation with sublinear communication for restricted forms of computation. This is a joint work by Geoffroy Couteau, Pierre Meyer, Alain Passelègue, and Mahshid Riahinia, published at Eurocrypt 2023 22.
Lyubashevsky's signatures are based on the Fiat-Shamir with Aborts paradigm. It transforms an interactive identification protocol that has a non-negligible probability of aborting into a signature scheme by repeating executions until a loop iteration does not trigger an abort. Interaction is removed by replacing the challenge of the verifier by the evaluation of a hash function, modeled as a random oracle in the analysis. The access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., the number of loop iterations is unbounded).
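To fix ideas, here is a schematic toy sketch of this abort loop (parameters, names and the rejection rule below are simplified and made up for illustration; this is not a secure scheme and not the construction of any of the cited works):

```python
import hashlib
import numpy as np

q, m = 8380417, 24
B, sbound, cbound = 1 << 17, 2, 60
gamma = sbound * cbound                        # largest possible entry of |c * s|

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(16, m))           # public matrix
s = rng.integers(-sbound, sbound + 1, size=m)  # small secret key
t = (A @ s) % q                                # public key

def challenge(w, msg):                         # hash replaces the verifier's challenge
    h = hashlib.sha256(w.tobytes() + msg).digest()
    return int.from_bytes(h[:4], "big") % (2 * cbound + 1) - cbound

def sign(msg):
    while True:                                # the abort loop: retry until z passes the test
        y = rng.integers(-B, B + 1, size=m)    # masking vector
        w = (A @ y) % q                        # commitment
        c = challenge(w, msg)
        z = y + c * s
        if np.max(np.abs(z)) <= B - gamma:     # rejection keeps z's range independent of s
            return c, z

def verify(msg, sig):
    c, z = sig
    w = (A @ z - c * t) % q                    # recover the commitment
    return np.max(np.abs(z)) <= B - gamma and c == challenge(w, msg)

sig = sign(b"hello")
print(verify(b"hello", sig))                   # True
```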
In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing the correctness, run-time, and security of the resulting schemes, both in the bounded and unbounded case. As a first contribution, we shed light on errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz et al. [EUROCRYPT'18] and Grilo et al. [ASIACRYPT'21]. In the process, we prove that the underlying Σ-protocol achieves a stronger zero-knowledge property than usually considered for Σ-protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis of the case of unbounded aborts, the latter inducing several additional subtleties.
This is a joint work by Julien Devevey, Pouria Fallahpour, Alain Passelègue, and Damien Stehlé, published at Crypto 2023 14.
We describe an adaptation of Schnorr's signature to the lattice setting, which relies on Gaussian convolution rather than flooding or rejection sampling as previous approaches. It does not involve any abort, can be proved secure in the ROM and QROM using existing analyses of the Fiat-Shamir transform, and enjoys smaller signature sizes (both asymptotically and for concrete security levels).
This is a joint work by Julien Devevey, Alain Passelègue, and Damien Stehlé, published at Asiacrypt 2023 18.
Updatable public key encryption has recently been introduced as a solution to achieve forward-security in the context of secure group messaging without hurting efficiency, but so far, no efficient lattice-based instantiation of this primitive is known.
In this work, we construct the first LWE-based UPKE scheme with polynomial modulus-to-noise rate, which is CPA-secure in the standard model. At the core of our security analysis is a generalized reduction from the standard LWE problem to (a stronger version of) the Extended LWE problem. We further extend our construction to achieve stronger security notions by proposing two generic transforms. Our first transform yields CCA security in the random oracle model and adapts the Fujisaki-Okamoto transform to the UPKE setting. Our second transform achieves security against malicious updates by adding a NIZK argument in the update mechanism. In the process, we also introduce the notion of Updatable Key Encapsulation Mechanism (UKEM), as the updatable variant of KEMs. Overall, we obtain a CCA-secure UKEM in the random oracle model whose ciphertext sizes are of the same order of magnitude as those of CRYSTALS-Kyber.
This is a joint work by Calvin Abou Haidar, Alain Passelègue, and Damien Stehlé, published at Asiacrypt 2023 19.
The presumed hardness of the Shortest Vector Problem for ideal lattices (Ideal-SVP) has been a fruitful assumption to understand other assumptions on algebraic lattices and as a security foundation of cryptosystems. Gentry [CRYPTO'10] proved that Ideal-SVP enjoys a worst-case to average-case reduction, where the average-case distribution is the uniform distribution over the set of inverses of prime ideals of small algebraic norm (below
In this work, we show that Ideal-SVP for the uniform distribution over inverses of small-norm prime ideals reduces to Ideal-SVP for the uniform distribution over small-norm prime ideals. Combined with Gentry's reduction, this leads to a worst-case to average-case reduction for the uniform distribution over the set of small-norm prime ideals. Using the reduction from Pellet-Mary and Stehlé [ASIACRYPT'21], this notably leads to the first distribution over NTRU instances with a polynomial modulus whose hardness is supported by a worst-case lattice problem.
This is a joint work by Joël Felderhoff, Alice Pellet-Mary, Damien Stehlé, and Benjamin Wesolowski published at TCC 2023 15.
Given a power series that is the solution of a linear differential equation with appropriate initial conditions, minimization consists in finding a non-trivial linear differential equation of minimal order having this power series as a solution. This problem exists in both homogeneous and inhomogeneous variants; it is distinct from, but related to, the classical problem of factorization of differential operators. Recently, minimization has found applications in Transcendental Number Theory, more specifically in the computation of non-zero algebraic points where Siegel’s E-functions take algebraic values. We present algorithms for these questions and discuss implementation and experiments 3.
Six families of generalized hypergeometric series in a variable
A new Las Vegas algorithm is presented for the composition of two polynomials modulo a third one, over an arbitrary field. When the degrees of these polynomials are bounded by
We show that for solutions of linear recurrences with polynomial coefficients of Poincaré type and with a unique simple dominant eigenvalue, positivity reduces to deciding the genericity of initial conditions in a precisely defined way. We give an algorithm that produces a certificate of positivity that is a data-structure for a proof by induction. This induction works by showing that an explicitly computed cone is contracted by the iteration of the recurrence 17.
Creative telescoping is an algorithmic method initiated by Zeilberger to compute definite sums by synthesizing summands that telescope, called certificates. We describe a creative telescoping algorithm that computes telescopers for definite sums of D-finite functions as well as the associated certificates in a compact form. The algorithm relies on a discrete analogue of the generalized Hermite reduction, or equivalently, a generalization of the Abramov-Petkovšek reduction. We provide a Maple implementation with good timings on a variety of examples 27.
A new algorithm is presented for computing the resultant of two “sufficiently generic” bivariate polynomials over an arbitrary field. For such
Quasiseparable matrices are a class of rank-structured matrices widely used in numerical linear algebra and of growing interest in computer algebra, with applications in e.g. the linearization of polynomial matrices. Various representation formats exist for these matrices but have rarely been compared. We show how the most central formats, SSS and HSS, can be adapted to symbolic computation, where the exact rank replaces threshold-based numerical ranks. We clarify their links and compare them with the Bruhat format. To this end, we state their space and time cost estimates based on fast matrix multiplication and compare them, including their leading constants. The comparison is supported by software experiments. We make further progress on the Bruhat format, for which we give a generation algorithm, following a Crout elimination scheme, that specializes into fast algorithms for the construction from a sparse matrix or from a sum of Bruhat representations 20, 25.
A new algorithm is presented for computing the largest degree invariant factor of the Sylvester matrix (with respect either to
Bosch (Stuttgart) commissioned support from us for the design and implementation of accurate functions in binary32 floating-point arithmetic (inverse trigonometric functions, hyperbolic functions and their inverses, exponential, logarithm, ...).
Associated team Symbolic, Canada-France, 2022-2024, University of Waterloo and Inria.
From Monash University, Australia. Title: Hénon attractor; Abelian integrals related to Hilbert's 16th problem.
Summary: The goal of the proposed research program is to unify the techniques of modern scientific computing with the rigors of mathematics and develop a functional foundation for solving mathematical problems with the aid of computers. Our aim is to advance the field of computer-aided proofs in analysis; we strongly believe that this is the only way to tackle a large class of very hard mathematical problems.
SecureCompute is a France 2030 ANR 6-year project (started in July 2022) focused on the study of cryptographic mechanisms for ensuring the security of data during transfer, at rest, and also during processing, despite uncontrolled environments such as the Internet for exchanges and the Cloud for hosting and processing. Security, in this context, means not only confidentiality but also integrity, i.e., the correct execution of operations. See the web page of the project. It is headed by ENS-PSL (Inria CASCADE team-project), and besides AriC, also involves CEA, IRIF (Université Paris Cité), and LIRMM (Université de Montpellier).
The project ended prematurely on August 31, 2023, following the departure of Alain Passelègue.
PostQuantum-TLS is a France 2030 ANR 5-year project (started in 2022) focused on post-quantum cryptography. The famous "padlock" appearing in browsers when one visits websites whose address is preceded by "https" relies on cryptographic primitives that would not withstand a quantum computer. This integrated project aims to develop post-quantum primitives in a prototype of "post-quantum lock" that will be implemented in an open source browser. The evolution of cryptographic standards has already started, the choice of new primitives will be made quickly, and the transition will be made in the next few years. The objective is to play a driving role in this evolution and to make sure that the French actors of post-quantum cryptography, already strongly involved, are able to influence the cryptographic standards of the decades to come.
Benjamin Wesolowski (UMPA) replaced Damien Stehlé as the leader in Lyon for this project.
RAGE is a four-year project (started in January 2021) focused on the randomness generation for advanced cryptography. See the web page of the project. It is headed by Alain Passelègue and also involves Pierre Karpmann (UGA) and Thomas Prest (PQShield). The main goals of the project are: (i) construct and analyze security of low-complexity pseudorandom functions that are well-suited for MPC-based and FHE-based applications, (ii) construct advanced forms of pseudorandom functions, such as (private) constrained PRFs.
The project ended prematurely on August 31, 2023, following the departure of Alain Passelègue.
CHARM is a three-year project (started in October 2021) focused on the cryptographic hardness of module lattices. See the web page of the project. It is co-headed by Shi Bai (FAU, USA) and Damien Stehlé, with two other sites: the U. of Bordeaux team led by Benjamin Wesolowski (with Bill Allombert, Karim Belabas, Aurel Page and Alice Pellet-Mary) and the Cornell team led by Noah Stephens-Davidowitz. The main goal of the project is to provide a clearer understanding of the intractability of module lattice problems via improved reductions and improved algorithms. It will be approached by investigating the following directions: (i) showing evidence that there is a hardness gap between rank 1 and rank 2 module problems, (ii) determining whether the NTRU problem can be considered as a rank 1.5 module problem, (iii) designing algorithms dedicated to module lattices, along with implementation and experiments.
Following the departures of Guillaume Hanrot and Damien Stehlé, Benjamin Wesolowski (UMPA) took the lead on this project.
The Hybrid HPC Quantum Initiative is a France 2030 ANR 5-year project (started in 2022) focused on quantum computing. We are involved in the Cryptanalysis work package. The application of quantum algorithms to cryptanalysis has been known since the early stages of quantum computing, when Shor presented a polynomial-time quantum algorithm for factoring, a problem which is widely believed to be hard for classical computers and whose hardness is one of the main cryptographic assumptions currently used. With the development of (fully scalable) quantum computers, the security of many cryptographic protocols of practical importance would therefore be broken. It is thus necessary to find other computational assumptions that can lead to cryptographic schemes that are secure against quantum adversaries. While we have candidate assumptions, their security against quantum attacks is still under scrutiny. In this work package, we will study new quantum algorithms for cryptanalysis and their implementation on the national hybrid platform. The goal is to explore the potential weaknesses of old and new cryptographic assumptions, potentially finding new attacks on the proposed schemes.
The project ended prematurely on March 31, 2023, following the departure of Damien Stehlé, but Benjamin Wesolowski (UMPA) is still involved.
NuSCAP (Numerical Safety for Computer-Aided Proofs) is a four-year project started in February 2021. See the web page of the project. It is headed by Nicolas Brisebarre and, besides AriC, involves people from LIP lab, Galinette, Lfant, Stamp and Toccata INRIA teams, LAAS (Toulouse), LIP6 (Sorbonne Université), LIPN (Univ. Sorbonne Paris Nord) and LIX (École Polytechnique). Its goal is to develop theorems, algorithms and software, that will allow one to study a computational problem on all (or any) of the desired levels of numerical rigor, from fast and stable computations to formal proofs of the computations.
AMIRAL is a four-year project (starting in January 2022) that aims to improve lattice-based signatures and to develop more advanced related cryptographic primitives. See the web page of the project. It is headed by Adeline Roux-Langlois from Irisa (Rennes) and locally by Alain Passelègue. The main goals of the project are: (i) optimize the NIST lattice-based signatures, namely CRYSTALS-DILITHIUM and FALCON, (ii) develop advanced signatures, such as threshold signatures, blind signatures, or aggregated signatures, and (iii) generalize the techniques developed along the project to other related primitives, such as identity-based and attribute-based encryption.
The project ended prematurely on August 31, 2023, following the departure of Alain Passelègue.