Nowadays, and contrary to the past decades, the design of cryptographic algorithms follows an integrated approach which considers security, efficiency and implementation requirements at the same time. The research activities of the team CAPSULE tackle these challenges in order to provide more secure cryptographic implementations and applications deployed in the real world.
The seminal paper of Peter Shor at FOCS 1994 63 shows that if we were able to build quantum computers, then the factorization and discrete logarithm problems could be solved in polynomial time. Since then, there has been a tremendous effort in the cryptographic community to propose cryptosystems that remain secure in the presence of quantum computers. Many alternatives to the two number-theoretic problems above have been proposed. Among them, our team already has activities and interests in two types of assumptions:
Euclidean lattices are discrete subgroups of $\mathbb{R}^n$.
In post-quantum cryptography, lattice-based assumptions take an important place and have received an increasing amount of attention in the last decade, thanks to the strong security guarantees they provide as well as their flexibility for cryptographic designs. Indeed, Ajtai and Regev presented reductions between, respectively, finding Short Integer Solutions of random linear systems (SIS) or solving random noisy linear systems (“Learning With Errors”, LWE) and computing short vectors in Euclidean lattices in the worst case. These two problems have served as the security foundation for the design of public-key encryption, digital signatures, zero-knowledge proof systems, key-encapsulation mechanisms, homomorphic encryption, and more. In order to improve practical efficiency, "structured" versions of these problems, relying on lattices with symmetries, have been proposed. Such lattices are related to algebraic objects appearing in the geometry of numbers, and some of the resulting schemes have been the clear winners of NIST's call for standardization.
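For concreteness, these two average-case problems can be stated as follows (a standard formulation in our own notation; the parameters $n$, $m$, $q$, $\beta$ and the error distribution $\chi$ vary between variants):
$$\mathrm{SIS}_{n,m,q,\beta}:\ \text{given } A \leftarrow U(\mathbb{Z}_q^{n\times m}),\ \text{find } x \in \mathbb{Z}^m\setminus\{0\}\ \text{with } Ax = 0 \bmod q\ \text{and}\ \|x\| \le \beta;$$
$$\mathrm{LWE}_{n,m,q,\chi}:\ \text{given } (A,\ b = A s + e \bmod q)\ \text{with } A \leftarrow U(\mathbb{Z}_q^{m\times n}),\ s \in \mathbb{Z}_q^{n},\ e \leftarrow \chi^{m},\ \text{recover } s\ \text{(or distinguish } b \text{ from uniform)}.$$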
Our trust in the hardness of lattice-based constructions relies fundamentally on our understanding of the security reductions between the (many, structured) variants of SIS and LWE. Depending on the additional structure available to the designer, they are associated to number rings, ideals and, more generally, modules over the ring of integers of a number field, and related to the corresponding class of lattices with symmetries. Additionally, for LWE the noise distribution is also a parameter of the problem. Overall, this leads to a plethora of variants and versions that need some hierarchizing and a better understanding of the interplay between their related parameters. Thankfully, important classifying works have already been presented, regularly involving members of our team (e.g., 35, 61, 32).
Yet, there are still many results and relations that are not yet satisfactorily understood.
For example, the fundamental reductions of Ajtai and Regev are far from tight, incurring a blowup in important parameters.
We have proposed algorithms to study the security of hard computational problems in cyclotomic fields, such as the Principal Ideal Problem (PIP) in 29, as well as algorithms for reducing module lattices that generalize the LLL algorithm to the ring of integers of a number field in 55 or to a tower of cyclotomic fields in 52. We generalized the BKW algorithm to the binary LWE setting in 53 and studied the Learning Parities with Noise (LPN) problem in 56.
We have also attacked concrete cryptographic schemes. We broke multivariate schemes such as the SFLASH signature scheme in 43 and its variants in 48, and the ASASA schemes in 58. We have also broken FHE schemes based on overstretched NTRU parameters in 54 and concrete FHE schemes in 38.
We want to study the resistance of post-quantum cryptosystems and hard problems against classical and quantum adversaries. This is particularly interesting for lattice problems, since their cryptanalysis is still very young. One key objective in this line of research is to find an analogue of the BKZ algorithm for structured lattices defined over number fields. It would also be interesting to improve the recent work of 27, which suggests that this problem may be weaker than previously thought.
Applications of cryptography usually culminate in the description of an efficient cryptosystem. An important part of our activity in post-quantum cryptography therefore targets the design of new schemes that resist quantum attackers and provide advanced functionalities to their users, without sacrificing efficiency.
In this area, members of CAPSULE have worked on the lattice-based signature scheme Falcon and its efficiency-security trade-off ModFalcon 39. A first objective would be to extend in a useful way the so-called “trapdoor generation” which is at the core of the two schemes above. In a nutshell, the secret key corresponds to a basis of short vectors of a lattice, which only the user should be able to compute efficiently. ModFalcon already extended the class of lattices for which this can be done, and it is an interesting question to handle an even larger class of lattices. In terms of applications, this would allow for even more flexibility, which can be particularly useful when the signature scheme is used as a black box inside a larger cryptographic algorithm. It could also enable other functionalities such as threshold signatures or masked signatures. Along this line of thought, we are also interested in designing masked lattice signatures or even multi-party signatures. While there have been very recent proposals (relying on a different paradigm than the Falcon family), their efficiency is still lacking in practice. A success here could lead to concrete industrial applications.
But this is not the only construction on which the team is currently working. There are many interesting cryptographic constructions that need to be studied to obtain efficient post-quantum schemes, such as signatures and zero-knowledge proofs, but also signatures with additional properties like group signatures or blind signatures, and applications like e-voting. Indeed, a lot of progress has been made towards efficient signatures and public-key encryption, especially with the NIST competition, but the efficiency of more advanced schemes is still far from that of existing (non-post-quantum) solutions. One of the big challenges is to obtain efficient zero-knowledge proof systems, as this primitive is often an easy way to build more advanced primitives.
Despite being one of the oldest forms of cryptography, symmetric cryptography is a very active research area, with recent activity focusing on new designs optimized for specific operational constraints. For example, the lightweight cryptography competition launched by NIST in 2017 concluded in 2023 with the selection of the lightweight cipher family Ascon 42, optimized for hardware implementations. At the same time, many new ciphers have been proposed which are optimized to be integrated in advanced cryptographic protocols, such as the FHE-friendly block cipher LowMC, or in protected hardware implementations.
The team CAPSULE studies the security of symmetric primitives such as block ciphers, stream ciphers and hash functions, against various types of attacks. We consider both classical and quantum security, the latter being a prerequisite for post-quantum cryptography architectures.
Symmetric cryptosystems are widely used because they are the only ones that can achieve some major functionalities such as high-speed or low-cost encryption, fast message authentication, and efficient hashing. But, unlike public-key cryptographic algorithms, secret-key primitives do not have satisfying security proofs. The security of these algorithms is empirically established by cryptanalysis.
It is obvious that this security criterion, despite its success so far, is not completely satisfactory. For instance, we may estimate that, for a given primitive, no more than a few dozen researchers are actively working on breaking it. Hence, due to this limited effort, the non-discovery of an attack against a particular primitive does not mean much. Besides, finding the best attacks on a given design is time-consuming work, and errors can lead to under- or over-estimating its security.
Therefore, our team specializes in building tools for automatically finding large classes of attacks. This transforms the statement “we did not find any attack of this kind”, which is only a subjective guarantee, into “the audit tool X did not find any attack”, which is a formal statement, giving a quantifiable objective guarantee.
In the past, the members of the team have proposed many tools, for example for improving attacks on round-reduced versions of AES 33, Demirci-Selçuk attacks on AES 41, and impossible differential attacks 40.
Our more recent work uses tools based on MILP (Mixed Integer Linear Programming), SAT (Satisfiability) or CP (Constraint Programming). In this setting, the search for and optimization of an attack are reduced to a problem of a specific form, for which an off-the-shelf solver is used. Besides the actual work of implementing this reduction, our research aims at better understanding the differences between these optimization tools, finding which ones are best suited for a given problem, and adapting some of these general-purpose solvers to particular cryptographic problems.
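As a hedged illustration of this modeling step, the following toy model (not one of our actual tools) encodes the word-level propagation of truncated differences in a miniature Feistel cipher with the PuLP MILP package, following the classical constraints of Mouha et al.:

```python
import pulp

ROUNDS = 6

# Toy MILP model: minimum number of active S-boxes over ROUNDS rounds of a
# 2-branch Feistel whose round function is a single bijective S-box, using
# word-level (truncated) difference variables.
prob = pulp.LpProblem("min_active_sboxes", pulp.LpMinimize)

# l[i], r[i] = 1 iff the left/right word carries a non-zero difference at round i.
l = [pulp.LpVariable(f"l{i}", cat="Binary") for i in range(ROUNDS + 1)]
r = [pulp.LpVariable(f"r{i}", cat="Binary") for i in range(ROUNDS + 1)]

# Objective: one S-box call per round, active iff r[i] is active.
prob += pulp.lpSum(r[i] for i in range(ROUNDS))

# The input difference must be non-zero.
prob += l[0] + r[0] >= 1

for i in range(ROUNDS):
    # Round map: (l, r) -> (r, l XOR S(r)).  Since the S-box is bijective, its
    # output difference is active iff r[i] is; the word-level XOR relation on
    # (l[i], r[i], r[i+1]) forbids exactly one active word among the three.
    d = pulp.LpVariable(f"d{i}", cat="Binary")
    prob += l[i] + r[i] + r[i + 1] >= 2 * d
    prob += d >= l[i]
    prob += d >= r[i]
    prob += d >= r[i + 1]
    # The left output word is the old right word.
    prob += l[i + 1] == r[i]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("minimum number of active S-boxes:", int(pulp.value(prob.objective)))
```

Real models for SPN or ARX ciphers follow the same pattern, only with many more variables and constraints per round.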
Finding and optimizing a cryptanalytic attack in its entirety is an especially interesting problem, since it requires integrating different steps (for example a good distinguisher and a key-recovery phase). Since the search space is of exponential size, often making the problem intractable, it is possible to first find an approximation of the best attacks and then instantiate precisely the values of the parameters. Also, while MILP, SAT and CP tools quickly give an answer, it is tempting to build ad-hoc tools that can exploit more efficiently the weaknesses discovered by these generic tools.
Finally, there are only a few tools for analyzing the security of ARX ciphers, which are based on additions, rotations and XOR operations. These functions are hard to analyze with current cryptanalytic techniques, and no attack has really endangered the full ChaCha stream cipher proposed by Dan Bernstein or the block cipher Speck proposed by the NSA. They can be implemented very efficiently on x86 processors, and ChaCha is currently part of the most widely used TLS ciphersuites, making them prominent targets for cryptanalysis.
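To illustrate the kind of operations involved, here is a direct transcription of the public ChaCha quarter-round on 32-bit words (the primitive itself, not an attack), checked against the test vector of RFC 7539:

```python
# The ChaCha quarter-round: only 32-bit additions, XORs and rotations (ARX).
MASK32 = 0xFFFFFFFF

def rotl32(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK32

def quarter_round(a: int, b: int, c: int, d: int):
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d

# Test vector from RFC 7539, section 2.1.1.
assert quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567) == \
    (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb)
```

The absence of S-boxes means that differential and linear properties must be tracked at the bit level through modular additions, which is precisely what makes these designs hard to model.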
Our goal is to analyze the security of new symmetric-key designs by developing new cryptanalytic techniques. The LowMC block cipher is one of the first symmetric primitives designed to take into account the efficiency constraints of public-key cryptosystems. It has been built as an FHE-friendly cipher by minimizing the number of multiplicative gates, which are the main efficiency bottleneck for this application. Several attacks have been proposed on LowMC and LowMC v2. LowMC v3 was used in Picnic, a zero-knowledge-based post-quantum signature scheme proposed at the NIST competition, which was not standardized.
The Keccak hash function was standardized in 2015 as SHA-3. Keccak brought interest to a new design, the sponge construction, and to permutation-based primitives in general. Round-reduced versions of the Keccak permutation have been used in many constructions, from the pseudo-random generator SHAKE to the pseudo-random function Farfalle 28, the authenticated encryption scheme Keyak, and the hash function KangarooTwelve proposed as an RFC. Only a few attacks have been proposed against SHA-3, and new cryptanalysis tools need to be designed.
Since 2016, much work has been done on the cryptanalysis of symmetric primitives using quantum algorithms. While symmetric cryptosystems are generally believed to hold up well against adversaries equipped with a quantum computer, these works have substantiated such claims with dedicated security analyses, such as the best attacks against reduced-round versions of the AES standard 30.
Grover's search algorithm can provide a quadratic speedup on exhaustive key search (from about $2^{k}$ to about $2^{k/2}$ cipher evaluations for a $k$-bit key), and serves as the generic baseline against which such dedicated quantum attacks are compared.
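A back-of-the-envelope consequence (a standard argument, not specific to our work): since Grover's algorithm needs about
$$\frac{\pi}{4}\sqrt{2^{k}} = \Theta\big(2^{k/2}\big)$$
quantum evaluations of the cipher, doubling the key length (e.g., moving from AES-128 to AES-256) restores a $2^{128}$ margin against this generic attack.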
In team CAPSULE, our research in quantum cryptanalysis is three-fold.
First, we develop new quantum algorithms for cryptanalytic problems, which we aim to apply in symmetric cryptography, but may also have applications in public-key cryptography. An example of such a double-edged sword is our recent work on quantum walks 9.
Second, we analyze existing classical cryptanalysis techniques and study how to translate them into quantum cryptanalysis techniques. Intuitively, a primitive that is classically vulnerable should be quantumly broken as well, but this is not always the case, as classical attack strategies are not always exploitable in the quantum setting. Our research in this area focuses on the strategies which can exhibit the largest quantum speedups, quadratic (like Grover's search) or even above by using advanced frameworks.
Finally, after identifying new classes of quantum attacks, we aim at integrating these attacks into automated tools. Indeed, the task of finding and optimizing quantum attacks can be even more challenging than for classical ones, since they often rely on different, sometimes counterintuitive, strategies. Furthermore, since the resulting procedures are quantum algorithms, the analysis of their time and memory complexities comes with specific technicalities. Our goal is to automate this step as well, in a way that may benefit cryptanalysts interested in this topic but unfamiliar with quantum algorithms.
In this research axis, our aim is to study the security of implementations against various side channels such as fault attacks, power analysis and electromagnetic emanations, as well as timing attacks on various cryptographic schemes deployed in real-world systems. We are also interested in providing security proofs for real-world systems or improving their security.
Side Channel Attacks (SCA) rely on statistical tools to extract the secret information from leakage traces.
Then, algorithmic techniques usually based on previous cryptanalytic results are used to efficiently recover secret data.
Indeed, the known black-box attacks are extended by exploiting the leakage information, which gives more information on the internal secret variables; this is the so-called grey-box model.
The SCA information can be for instance the Hamming weight of a limited number of variables.
Recently, the white-box model has been proposed, where the adversary can stop the execution of a process and has access to all variables.
Side-channel attacks have been successfully applied to break many embedded implementations these last 20 years.
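As a hedged illustration of the grey-box scenario above, here is a minimal correlation power analysis sketch under a Hamming-weight leakage model, targeting the first AES S-box layer; the arrays traces, plaintexts and sbox are assumed to be provided by the measurement setup, and real attacks add many refinements:

```python
# Minimal correlation power analysis (CPA) sketch: recover one AES key byte
# from power traces, assuming the leakage correlates with the Hamming weight
# of the first-round S-box output.  `traces` (N x T samples), `plaintexts`
# (N bytes at the targeted position) and `sbox` (256-entry table) are assumed
# to be given as numpy arrays.
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_key_byte(traces: np.ndarray, plaintexts: np.ndarray, sbox: np.ndarray) -> int:
    """Return the key-byte guess maximizing the absolute Pearson correlation."""
    centered = traces - traces.mean(axis=0)
    best_guess, best_corr = 0, -1.0
    for guess in range(256):
        # Hypothetical leakage: HW(sbox[p XOR guess]) for every trace.
        model = hamming_weight(sbox[plaintexts ^ guess]).astype(np.float64)
        model -= model.mean()
        # Correlation between the model and every time sample; keep the peak.
        num = centered.T @ model
        den = np.linalg.norm(centered, axis=0) * np.linalg.norm(model) + 1e-12
        corr = np.max(np.abs(num / den))
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess
```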
Following the information-theoretic approach of Ishai, Sahai and Wagner 50 for proving the security of implementations, sound theoretical foundations were laid by Prouff and Rivain, and later by Duc et al., in 60, 44.
Soon after, tools such as 23, 24, 22 were developed to protect software and hardware implementations with masking techniques.
Nowadays, we have sound masking schemes. Some of them have already been introduced into lattice-based implementations 25, where securing the randomness generally presents interesting challenges.
We aim at extending the results of 25, 26, 57, 47 to other post-quantum alternatives like code-based, multivariate, or hash-based schemes and to provide secure implementations.
More recently, other tools coming from statistical learning (such as deep learning) have been proposed to break embedded implementations.
They open the door to powerful techniques and more efficient attacks.
Template attacks model the leakage distribution with a Gaussian distribution, approximating the actual distribution by considering its mean and its standard deviation. More standard attacks, a.k.a. Differential Power Analysis (DPA), only consider the mean.
However, higher-order moments can also be useful to consider. Deep learning techniques are useful to efficiently extract complex relations between variables, even in the presence of noise.
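In its simplest univariate form (a textbook formulation; practical templates are multivariate), a template attack builds, for each key-dependent intermediate value $v$, a model $\ell \sim \mathcal{N}(\mu_v, \sigma_v^2)$ of the leakage and keeps the key guess maximizing the likelihood of the observed traces:
$$\hat{k} = \arg\max_{k} \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_{v_i(k)}} \exp\!\left(-\frac{\big(\ell_i - \mu_{v_i(k)}\big)^2}{2\,\sigma_{v_i(k)}^2}\right),$$
whereas a first-order DPA exploits only differences between the means $\mu_v$.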
Taking these more powerful deep-learning and white-box attacks into account, as well as developing countermeasures, is currently a very active topic in SCA.
In the former, deep learning allows finding correlations between many points of interest of a single trace, a.k.a. horizontal attacks.
In the latter, white-box cryptography provides the adversary with the same kind of information, since they can stop the execution of the program and get noiseless information on all of its variables.
Taking into account such powerful attackers is one main challenge for side-channel attacks.
Finally, we are interested in working on new micro-architectural attacks such as Hertzbleed. These attacks show that side-channel attacks are also a threat to software implementations. Porting to software some of the many techniques used to secure embedded systems is thus a major topic.
Constant-time implementation is a programming discipline that aims at producing code whose running time and memory accesses are independent of secret values. Timing leakage can be used to mount attacks on computers and smartphones. Many tools exist in the literature to help developers avoid these leakages, but insecure implementations are still plentiful. For instance, we recently broke the WPA3 implementations used in FreeRadius and iwd (iNet Wireless Daemon) 34, and also found other weaknesses.
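A minimal sketch of the underlying principle (in Python for readability; real constant-time code is written and verified at a much lower level, and hmac.compare_digest is the vetted standard-library primitive):

```python
import hmac

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    # Early exit: the running time reveals the length of the matching prefix,
    # which an attacker can exploit byte by byte.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # Accumulate the XOR of all byte pairs so that the running time does not
    # depend on where (or whether) the inputs differ.
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0

# In practice, prefer the vetted standard-library primitive:
assert hmac.compare_digest(b"secret", b"secret")
assert constant_time_equal(b"secret", b"secret") and not leaky_equal(b"secret", b"guess!")
```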
We want to discover new attacks in open-source libraries and to help developers verify the constant-time property of their code.
For example, some tools are tailored to small pieces of cryptographic code and do not scale well to more complex code that relies on many libraries.
Our goal is to provide verification tools for analyzing the constant-time property of large code bases. We are also interested in studying the security of DRM systems used in widely deployed systems.
We do not have permanent researchers working on reverse engineering, but we collaborate on this topic with postdoctoral researchers such as Alexandre Gonzalvez, as well as with Mohamed Sabt from the Spicy team.
Besides, we co-supervise 3 theses on the security of software implementations.
We are interested in studying the security of cryptographic protocols deployed in the real world, such as WhatsApp, middleboxes, Content Delivery Networks (CDN), TLS, and 5G networks. Recently, we have also considered the security of searchable symmetric encryption, where the goal is to outsource the storage of a database to an untrusted server while maintaining search capabilities. This last area is a nice application of secure computation, and the PhD thesis of R. Bost (P.A. Fouque's PhD student) in this domain received the GDR Sécurité prize for the best PhD in 2018. We also work with Cristina Onete, an assistant professor at Limoges, on this topic. Currently, we are interested in proposing hybridization techniques between pre- and post-quantum cryptography for various protocols such as Signal and IPsec within the PEPR post-quantum cryptography project.
The research community is strongly involved in the development and evolution of cryptographic standards. Many standards are developed through open competitions (e.g., AES, SHA-3) where multiple teams propose new designs, and a joint cryptanalysis effort allows selecting the most suitable proposals. The analysis of established standards is also important work, in order to deprecate weak algorithms before they can be exploited. Several members of the team have been involved in this type of effort, and we plan to continue this work to ensure that secure algorithms are widely available. We believe that good cryptographic standards have a large socio-economic impact; thus, we are active in proposing schemes to future competitions and in analyzing schemes proposed to current or future competitions, as well as widely used algorithms and standards. At the moment, we are involved in the two standardization efforts run by NIST for post-quantum cryptography and lightweight cryptography, and in other real-world protocols.
The NIST post-quantum competition aims at standardizing quantum-safe public-key primitives. The goal is to propose quantum-safe alternatives to the schemes based on number theory, which are threatened by the advent of quantum computers. It is expected to have a huge and long-term impact on all of public-key cryptography. It received 69 proposals in November 2017. The Falcon signature scheme, co-designed by members of the Capsule team, was selected by NIST in July 2022. We have also submitted Solmae to the Korean post-quantum competition; it is a variant of Falcon that is easier to implement, and hence to protect against SCA. Finally, we have also proposed BAT 46, an encryption scheme that follows the design rationale of Falcon. We plan to submit this scheme to the IETF, as it enjoys interesting properties in terms of bandwidth that are not displayed by NIST's selected key-encapsulation scheme, Kyber.
In June 2023, we submitted the PROV and VOX signature schemes to NIST's new call for digital signatures. These two schemes are based on multivariate cryptography problems and are variants of the Unbalanced Oil and Vinegar (UOV) signature scheme proposed in 1997 by Patarin. PROV has a security proof, while VOX is a strengthened version of UOV that avoids known weaknesses (namely, UOV has a large set of isotropic vectors common to all quadratic forms of the public key).
The NIST lightweight cryptography standardization process is an initiative to develop and standardize new authenticated encryption algorithms suitable for constrained devices. There is a real need for new standards in lightweight cryptography, and the selected algorithms are expected to be widely deployed within the Internet of Things, as well as on more constrained devices such as contactless smart cards, or medical implants. The NIST received 56 submissions in February 2019. Team Capsule has studied the security of some of these schemes.
While we are very involved in the design phase of new cryptographic standards, we also monitor algorithms that are already standardized. We looked at implementations of WPA3 and discovered a micro-architectural attack 8. We also studied the privacy of the EME standard (Encrypted Media Extensions) for Digital Rights Management in browsers in 18.
After the discovery of privacy issues in EME, our findings were communicated in a timely fashion to all concerned parties, following responsible disclosure processes. Mozilla Firefox was quite responsive, and we were rewarded via their bug bounty program. The Mozilla EME team investigated our findings, released a patch addressing the identified privacy issues, and acknowledged us in the Mozilla Hall of Fame. Regarding the Client ID being sent in the clear in renewal requests, we first contacted the EME Chrome team, which reviewed our disclosure report and showed concern about its privacy consequences for the EME user agent. They confirmed our intuition that the problem is caused by the Widevine CDM. Therefore, we filed a Widevine bug report about the missing Privacy Mode on VMP systems, and are still in communication with them.
Concerning our micro-architectural attack on WPA3, we disclosed our findings to the hostap security team in December 2021. We contacted other affected projects (iwd/ell from Intel and FreeRadius) in January 2022. hostap promptly reacted, asking us to review a patch, which was later committed, and a security advisory was published. Intel decided to fix their cryptographic library, ell, and also asked us to review their patch. Both iwd and hostap released a new stable version patching the vulnerability soon after our disclosure. FreeRadius committed our patch to their project. We contacted OpenSSL and WolfSSL in May 2022 to disclose our second vulnerability. Both acknowledged our analysis, but argued that it is the developers' responsibility to avoid calling their leaky functions with secret-dependent values.
Pierre-Alain Fouque was appointed Senior member of the IUF (Institut Universitaire de France) for 5 years starting in September 2023.
The code that we develop is for demonstration purposes or is specific to particular attacks or implementations. Consequently, we do not maintain any long-term software. We do not use a particular platform. Some of the data used in our work is made available.
This year we developed a new cryptanalysis technique, related to differential cryptanalysis, which allowed us to break more rounds of two well-studied block ciphers: AES and SKINNY. We also developed a new tool that fully automates the search for the best differential characteristics on a large class of ciphers. Finally, we solved an algorithmic problem related to AES by providing a dynamic-programming-based algorithm able to find the best truncated related-key differential characteristics on all versions of AES.
Meet-in-the-middle and differential attacks are two cornerstones of modern cryptanalysis, which have been applied successfully to block ciphers for decades. In 4, we introduced the new framework of differential meet-in-the-middle attacks, which combines techniques from both meet-in-the-middle and differential cryptanalysis. As such, this technique can be seen both as an extension of meet-in-the-middle attacks and as a novel way of performing the key recovery in differential attacks. We applied this technique to two very well studied ciphers, SKINNY and the international standard AES, and obtained new results on weakened (reduced-round) variants, including a related-key attack on 12 rounds out of 14 of AES-256 with only two related keys.
The related-key setting, in which a block cipher may be queried with several unknown keys having some known relation, is a scenario in which AES is known to be quite weak. However, finding related-key characteristics is a difficult process which nowadays can only be done with the help of automatic tools. In 10 we gave new tools dedicated to this task, both ad hoc and based on MILP. We also built a new tool to search for differential MITM attacks, which improved the 12-round attack above to 13 rounds.
An important criterion to assess the security of a cryptographic primitive is its resistance against differential cryptanalysis. For word-oriented primitives, a common technique to determine the number of rounds required to ensure immunity against differential distinguishers is to consider truncated differential characteristics and to count the number of active S-boxes. Doing so allows one to provide an upper bound on the probability of the best differential characteristic at a reduced computational cost. However, in order to design very efficient primitives, it might be necessary to evaluate the probability more accurately. This is usually done in a second step, during which one tries to instantiate truncated differential characteristics with actual values and computes the corresponding probability. This step is usually done either with ad-hoc algorithms or with CP, SAT or MILP models that are solved by generic solvers. In 12, we present a generic tool for automatically generating these models for all word-oriented ciphers. Furthermore, the running times to solve these models are very competitive with all previous dedicated approaches.
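For instance, for an SPN whose S-box has maximal differential probability $p_{\max}$, a truncated characteristic with $a$ active S-boxes yields the classical bound
$$\Pr[\text{characteristic}] \le p_{\max}^{\,a}, \qquad \text{e.g. for AES:}\ p_{\max} = 2^{-6},\ a \ge 25\ \text{over 4 rounds} \ \Rightarrow\ \Pr \le 2^{-150},$$
which explains why counting active S-boxes is usually sufficient for a first security estimate.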
We also focused on equivalences between Generalised Feistel Networks (GFN) of type-II. We introduced a new definition of equivalence which captures the concept that two GFNs are identical up to re-labelling of the inputs/outputs and are therefore cryptographically equivalent for several classes of attacks. It induces a significant reduction of the space of possible GFNs that need to be considered.
During the internships of Mathieu Degré and Lucie Lahaye, we started studying the cryptanalysis of the ASCON family of lightweight primitives, which is a high-profile target as it has been recently selected for standardization by the NIST. Our goal in this project was to obtain simpler modelings of different types of cryptanalysis, based on MILP and SAT encodings, which were easier not only to describe but also to run. Our results are under submission.
During this year, we have introduced several advanced quantum algorithms with applications in cryptanalysis (symmetric and asymmetric), as well as new frameworks for symmetric cryptanalysis which we plan to build upon in the next few years.
In 9 we introduced the new algorithmic technique of chained quantum walks. This technique is key to an improvement of quantum algorithms for the multiple collision search problem, an extension of the collision search problem, which asks to find many pairs of colliding outputs generated by a random function. It allowed us to improve the previous best quantum algorithm for lattice sieving 37, reducing its asymptotic time complexity exponent.
In 19, we gave new quantum trade-offs for the Dihedral Coset Problem, which is a computational problem of high interest. In particular, its hardness underlies the security of post-quantum cryptosystems based on Abelian group actions such as CSIDH 36. These cryptosystems are the only high-profile post-quantum proposals for which a quantum attacker enjoys a large speedup (from exponential classical time to subexponential quantum time). Thus their security analysis relies primarily on the quantum side.
In 20 we introduced a new framework of quantum linear key-recovery attacks on block ciphers. In classical cryptanalysis, linear cryptanalysis is a powerful key-recovery attack exploiting the linear biases which may appear in reduced-round ciphers. While modern linear key-recovery attacks rely on the Fast Fourier Transform, a potential application of the Quantum Fourier Transform remained an open question in previous works 51. In this work, we showed that this is possible and can lead to new quantum attacks on block ciphers. The new framework relies on computing correlations of Boolean functions “analogically” in the amplitudes of quantum states, which generates technical difficulties and new open questions that need further investigation. Despite the current limitations, this framework may reach up to a super-quadratic speedup in key-recovery attacks, which coincides with the current best speedup reported in 31 for specific constructions of block ciphers.
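Recall that for a mask pair $(\alpha, \beta)$ and a cipher $E_K$ acting on $n$-bit blocks, the quantity exploited by linear cryptanalysis is the correlation (standard notation, not specific to our work)
$$c_K(\alpha, \beta) = \frac{1}{2^{n}} \sum_{x \in \{0,1\}^{n}} (-1)^{\langle \alpha, x\rangle \oplus \langle \beta, E_K(x)\rangle};$$
classical FFT-based key recovery evaluates such sums for all key guesses at once, while the framework of 20 encodes them in the amplitudes of a quantum state.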
In 6 we described a generic framework of quantum impossible differential attacks on block ciphers, with a generic formula for their time complexity and a procedure to easily translate classical attacks into quantum ones.
In 7 we introduced an automatic search and optimization tool for quantum meet-in-the-middle key-recovery attacks on block ciphers. This tool expands a previous modeling technique which we introduced in the past year 62.
This year we proposed new designs and trade-offs for the core building blocks of hash-then-sign lattice-based signatures. This is in direct continuation of the results obtained last year, and led to a participation in a national standardization competition (namely, South Korea's KPQC) as a direct application of our new results. We also submitted two multivariate signature schemes, PROV and VOX, to the recent call of June 2023. Finally, we obtained new results in the construction of advanced cryptographic protocols.
Last year we described in 45 Mitaka, a variant of Falcon with a much simpler implementation, a larger parameter space, better parallelism, and easier protection against side-channel adversaries.
One of the drawbacks of our scheme was its slightly lower security level.
Indeed, finding good trapdoors in the key-generation process for Mitaka remained a costly task, and we had to sacrifice security for concrete efficiency.
In 13 we propose a completely novel approach to find good trapdoors in an essentially optimal way, with much freedom in the quality that we impose on them.
Our method stems from the geometric identification of the space where one can find them, and the design of a simple, natural sampler to draw from.
It combines Fast Fourier Transform techniques with a fine-grained error rounding analysis into a key-generation algorithm called Antrag, achieving the same efficiency as Falcon's without sacrificing any bit of security.
Concretely, this technique makes Mitaka as secure as Falcon.
This prompted its use in a candidate for standardization in a national process (as mentioned above), called Solmae (Falcon in Korean), submitted for the first round of evaluation.
Despite Solmae having the smallest bandwidth consumption of all candidates and a concrete signing speed on par with the other lattice contender HaeTae (Dilithium's natural update), it was not selected to advance to the second round.
Lattice Gaussian sampling is nowadays a pervasive technique in all aspects of the field: reductions between problems (in the complexity-theoretic sense), concrete building blocks for primitives, security arguments, and more.
Many methods to sample lattice Gaussians existed in the toolbox of cryptographers, but, perhaps surprisingly, most of them felt unrelated to one another.
This made it rather difficult to understand where improvement vectors could be found.
We therefore revisited these methods under a common framework, recovering the known samplers (such as the hybrid sampler used in Mitaka), while also displaying more simplicity in their description.
On a more foundational aspect, we gave a new exact expansion of the so-called smoothing parameter of a lattice.
This important quantity is used in many security arguments, where the finer the estimate, the better control over security one gets.
We believe that this new expression has applications in more mathematical aspects of lattice theory, such as improved transference bounds.
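For reference, this quantity is usually defined, for a lattice $\Lambda$ with dual $\Lambda^{*}$ and a parameter $\varepsilon > 0$, as
$$\eta_{\varepsilon}(\Lambda) = \min\Big\{ s > 0 \,:\, \sum_{w \in \Lambda^{*}\setminus\{0\}} e^{-\pi s^{2} \|w\|^{2}} \le \varepsilon \Big\},$$
i.e., the Gaussian width above which a discrete Gaussian over $\Lambda$ behaves essentially like a continuous one.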
The Module Learning With Errors problem (M-LWE) is a core computational assumption of lattice-based cryptography which offers an interesting trade-off between guaranteed security and concrete efficiency. The problem is parameterized by a secret distribution as well as an error distribution. There is a gap between the choices of those distributions for theoretical hardness results (the standard formulation of M-LWE, i.e., a uniform secret modulo q and Gaussian errors) and the choices made in practice-oriented schemes, which typically rely on small, bounded secrets and errors.
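In a standard formulation (our notation; a common instantiation takes $R_q = \mathbb{Z}_q[x]/(x^{n}+1)$ with $n$ a power of two), M-LWE samples are of the form
$$\big(A,\ b = A\,s + e \bmod q\big), \qquad A \leftarrow U\big(R_q^{m\times d}\big),\ s \in R_q^{d},\ e \leftarrow \chi^{m},$$
where $d$ is the module rank; the distributions of the secret $s$ and of the error $e$ are precisely the parameters whose practical instantiations are at stake here.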
Digital signature is an essential primitive in cryptography, which can be used as the digital analogue of handwritten signatures but also as a building block for more complex systems. In the latter case, signatures with specific features are needed, so as to smoothly interact with the other components of the systems, such as zero-knowledge proofs. This has given rise to so-called signatures with efficient protocols, a versatile tool that has been used in countless applications. Designing such signatures is however quite difficult, in particular if one wishes to withstand quantum computing. We are indeed aware of only one post-quantum construction, proposed by Libert et al. at Asiacrypt’16, yielding very large signatures and proofs.
In 17, we propose a new construction that can be instantiated both in standard lattices and structured ones, resulting in each case in dramatic performance improvements. In particular, the size of a proof of message-signature possession, which is one of the main metrics for such schemes, can be brought down to less than 650 KB. As our construction retains all the features expected from signatures with efficient protocols, it can be used as a drop-in replacement in all systems using them, which mechanically improves their own performance, and has thus a direct impact on many applications. It can also be used to easily design new privacy-preserving mechanisms. As an example, we provide the first lattice-based anonymous credentials system.
Practical implementations of advanced lattice-based constructions have received much attention since the first practical identity-based encryption scheme instantiated over NTRU lattices, proposed by Prest et al. (Asiacrypt 2014). This particular design uses powerful lattice-based building blocks to allow efficient Gaussian preimage sampling and trapdoor generation. In 16, we propose two different constructions and implementations of identity-based encryption schemes (IBE) using the approximate variants of “gadget-based” trapdoors of Chen et al. (Asiacrypt 2019). Both constructions are proven secure.
Our first IBE scheme is an adaptation of the Bert et al. scheme (PQCrypto 2021) to the approximate setting, relying on the Module-NTRU hardness assumption and making use of the Micciancio-Peikert paradigm for approximate trapdoors. The second IBE relies on a variant of the NTRU hardness assumption.
We provide several timings and a comparison analysis to explain our results. The two different instantiations give interesting trade-offs in terms of security and efficiency and both benefit from the use of approximate trapdoors. Though our second IBE construction is less efficient than other NTRU-based IBEs, we believe our work provides useful insights into efficient advanced lattice-based constructions.
In 15, we present a new generic transform that takes a multi-round interactive proof for the membership of a language L and outputs a non-interactive zero-knowledge proof (not of knowledge) in the common reference string model. Similarly to the Fiat-Shamir transform, it requires a hash function H. However, in our transform the zero-knowledge property holds in the standard model, and adaptive soundness holds in the non-programmable random oracle model (NPROM). Behind this new generic transform, we build a new generic OR-composition of two multi-round interactive proofs. Note that the two common techniques for building OR-proofs (parallel OR-proofs and sequential OR-proofs) cannot be naturally extended to the multi-round setting. We also give a proof of security for our OR-proof in the quantum random oracle model (QROM); surprisingly, the security loss in the QROM is independent of the number of rounds.
In 21, we study new factorization algorithms. The Number Field Sieve (NFS) is the state-of-the-art algorithm for integer factoring, and sieving is its most crucial step. It is a very time-consuming operation, aiming at collecting many relations. The ultimate goal is to generate random smooth integers mod N together with their prime decomposition, where smoothness is defined on the rational and algebraic sides according to two prime factor bases.
In modern factorization tools, such as Cado-NFS, sieving is split into different stages depending on the size of the primes, but choosing good parameters for all stages is based on heuristic and practical arguments. At the beginning, candidates are sieved by small primes on both sides, and if they pass the test, they continue to the next stages with bigger primes, up to the final one where the remaining part is factored using the ECM algorithm. On the one hand, the first stages are fast, but many false relations pass them and we spend a lot of time on useless relations. On the other hand, the final stages are more time-consuming but output fewer relations. It is not easy to evaluate the performance of the best strategy on the overall sieving step, since it depends on the distribution of the numbers that result at each stage.
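A much simplified sketch of this staged filtering (toy Python relying on sympy, with trial division only and a primality test in place of the final ECM stage; the bounds b1, b2 and the cofactor size are illustrative):

```python
# Toy illustration of staged relation filtering in NFS-like sieving:
# cheap tests with small primes first, survivors passed to costlier stages.
from sympy import primerange, isprime

def strip_small_primes(n: int, bound: int) -> int:
    """Divide out all prime factors below `bound`, return the cofactor."""
    for p in primerange(2, bound):
        while n % p == 0:
            n //= p
    return n

def is_smooth_candidate(n: int, b1: int = 100, b2: int = 10_000, cofactor_bits: int = 40) -> bool:
    # Stage 1: very cheap, removes most candidates.
    n = strip_small_primes(n, b1)
    if n.bit_length() > 3 * cofactor_bits:
        return False
    # Stage 2: more expensive, larger primes.
    n = strip_small_primes(n, b2)
    if n.bit_length() > cofactor_bits:
        return False
    # Final stage: in real sievers the remaining cofactor is factored with ECM;
    # here we simply accept it if it is 1 or a single prime.
    return n == 1 or isprime(n)

print(sum(is_smooth_candidate(x) for x in range(10**6, 10**6 + 1000)))
```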
In 21, we examine different sieving strategies to speed up this step, since many improvements have been made on all the other steps of the NFS. Based on the relations collected during the record RSA-250 factorization and all its parameters, we study the many different strategies that have been defined for NFS. Our result is an experimental evaluation of them.
In 8 we develop a new software attack on WPA3. It is universally acknowledged that Wi-Fi communications are important to secure. Thus, the Wi-Fi Alliance published WPA3 in 2018 with a distinctive security feature: it leverages a Password-Authenticated Key Exchange (PAKE) protocol to protect users' passwords from offline dictionary attacks. Unfortunately, soon after its release, several attacks were reported against its implementations, in response to which the protocol was updated in a best-effort manner. In this paper, we show that the proposed mitigations are not enough, especially for a protocol that is complex to implement even for savvy developers. Indeed, we present Dragondoom, a collection of side-channel vulnerabilities of varying strength allowing attackers to recover users' passwords in widely deployed Wi-Fi daemons, such as hostap in its default settings. Our findings target both password conversion methods, namely the default probabilistic hunting-and-pecking and its newly standardized deterministic alternative based on SSWU. We successfully exploit our leakage in practice through microarchitectural mechanisms, and overcome the limited spatial resolution of Flush+Reload. Our attacks outperform previous works in terms of required measurements. Then, driven by the need to end the spiral of patch-and-hack in Dragonfly implementations, we propose Dragonstar, an implementation of Dragonfly leveraging a formally verified implementation of the underlying mathematical operations, thereby removing all the related leakage vectors. Our implementation relies on HACL*, a formally verified crypto library guaranteeing secret-independence. We designed Dragonstar so that its integration within hostap requires minimal modifications to the existing project. Our experiments show that the performance of the HACL*-based hostap is comparable to the OpenSSL-based one, implying that Dragonstar is both efficient and proven leakage-free.
In 18, we study the security of Digital Rights Management (DRM) systems. Thanks to HTML5, users can now view videos in Web browsers without installing plug-ins or relying on specific devices. In 2017, W3C published Encrypted Media Extensions (EME) as the first official Web standard for Digital Rights Management, with the overarching goal of allowing seamless integration of DRM systems in browsers. EME has prompted numerous voices of dissent with respect to the inadequate protection of users. Of particular interest, privacy concerns were articulated, especially since DRM systems inherently require uniquely identifying information on users' devices to better control content distribution. Despite this anecdotal evidence, we lack a comprehensive overview of how browsers have supported EME in practice and what privacy implications are caused by their implementations. In this paper, we fill this gap by investigating privacy leakage caused by EME relying on proprietary and closed-source DRM systems. We focus on Google Widevine because of its versatility and wide adoption. We conduct empirical experiments to show that browsers diverge when complying with EME privacy guidelines, which might undermine users' privacy. For instance, we find that many browsers gladly give away the identifying Widevine Client ID with little or no explicit consent from users. Moreover, we characterize the privacy risks of user tracking when browsers fail to apply the EME guidelines regarding privacy. Because these systems are closed-source, our work involves reverse engineering to dissect the contents of EME messages as instantiated by Widevine. Finally, we implement EME Track, a tool that automatically exploits bad Widevine-based implementations to break privacy.
KDDI: (T0: 11/2022 –> 02/2023)
Led by the University of Rennes.
KDDI (Japan) would like to propose the Rocca-S encryption scheme to some international standardization process. However, such organizations require an external evaluation provided by an independent third party. KDDI contacted us to perform this analysis. Some outputs of this work are currently under review.
Resque: (T0: 09/2022 –> 08/2026)
BPi France project.
Led by Thales.
Participating entities on the industrial side: Thales SIX and DIS, TheGreenBow, CryptoExperts, CryptoNext. Participating entities on the public side: Inria, ANSSI.
In this project, Inria is represented by two teams: Capsule (Inria Rennes), with Pierre-Alain Fouque as the coordinator; and Cascade (Inria Paris), with Céline Chevalier as collaborator.
Resque project, "Résilience Quantique" aims at combining two use-cases allowing to construct two software and hardware components: i) VPN [virtual private network] hybrid and agile and a HSM [hardware security module] robust and efficient, providing the security of exchanged information. The cryptographic agility will allow to perform regular and continuous update of the post-quantum algorithms.
Hyperform: (T0: 09/2022 –> 08/2026)
BPi France project.
Led by Idemia.
Participating entities on the industrial side: Idemia, Atempo, PrimX, CryptoNext, Synacktiv. Participating entities on the public side: Inria, ANSSI, CEA.
In this project, Inria is represented by two teams: Grace (Inria Saclay), with Ben Smith as the coordinator; and Capsule (Inria Rennes), with Alexandre Wallet as collaborator.
Hyperform aims at being an international leading force in the development of quantum-resilient secure elements for embedded systems, as well as a primary actor in the design of hybrid solutions at scale, that is, mixing pre- and post-quantum cryptography in a provably secure, formally verified way, within industrial products. One essential goal of the project is to produce a demonstrator: a secure element with dedicated hardware/software embedding post-quantum cryptographic algorithms, providing a level of resilience against side-channel attackers while maintaining a high level of performance, on par with the demands of real-world situations.
The PQTLS project (01/2022 –> 12/2027)
Post-quantum padlock for web browser
PEPR Quantique
Partners: GREYC (Caen), ENS Lyon, Inria GRACE, Inria Cosmiq, Inria Prosecco, Inria Caramba, Inria Lfant, Inria Capsule, UVSQ, Cryptis, ARCAD, SESAM, CEA LETI, University of Rouen, Rennes, Bordeaux.
The famous "padlock" appearing in browsers when one visits websites whose address is preceded by "https" relies on cryptographic primitives that would not withstand a quantum computer. This integrated project aims to develop in 5 years post-quantum primitives in a prototype of "post-quantum lock" that will be implemented in an open source browser. The evolution of cryptographic standards has already started, the choice of new primitives will be made quickly, and the transition will be made in the next few years. The objective is to play a driving role in this evolution and to make sure that the French actors of post-quantum cryptography, already strongly involved, are able to influence the cryptographic standards of the decades to come.
Cryptanalyse (12/2023 –> 12/2028)
PEPR Cybersécurité
Partners: Inria GRACE, Inria Cosmiq, Almasty, Inria Caramba, Inria Lfant, Inria Capsule, Crypto, Eco, Canari, UGA.
The Cryptanalyse project focuses on the study and standardization of cryptographic primitives. Modern cryptography has become an indispensable tool for securing personal, commercial and institutional communications. This project will provide an estimate of the difficulty of solving the underlying problems, and deduce the level of security conferred by the use of these primitives. The aim is to evaluate the security of cryptographic algorithms.
ANR AMIRAL (01/2022 –> 12/2024)
Digital signatures from lattice-based assumptions
ANR ASTRID, Appel 2021
Partners: GREYC (Caen), Inria Lyon
The focus of AMIRAL is the improvement of lattice-based digital signature schemes at large. More precisely, three research axes are considered. First, we will design concrete improvements and novel tweaks to optimize NIST's selected candidates (Falcon and Dilithium) or to extend their use cases to a broader range of scenarios. Second, the design and study of signatures with advanced properties (such as aggregate or threshold signatures) in order to substantially improve the state of the art. Third, the study of the interplay between improvements in the design of signatures and the efficiency of broader, more complex cryptographic primitives such as attribute-based encryption.
CROWD (2023 –> 2027).
Code-based practical cryptography
ANR-DFG
Partners: TU Munich, IRMAR (Rennes), Inria (Rennes)
The aim of this project is the study of skew metrics and their application in cryptography. These metrics can be considered as a generalization of the so-called rank metric, which has significant applications in coding theory, cryptography, data storage, and network coding. The connection between these metrics lies in non-commutative Euclidean rings, called Ore rings, which extend the classical notion of commutative polynomial rings by 'skewing' (twisting) the multiplication. These operations allow the development of metrics and new codes with efficient arithmetic. This holds promise for secure and efficient cryptographic implementations. Three avenues are explored: 1) investigating the foundations of algebraic codes in these skew metrics; 2) designing novel decoding algorithms and cryptographic schemes from these codes, and assessing their security from a cryptanalytic and side-channel point of view; 3) producing practically efficient implementations of core cryptographic primitives, such as digital signatures, with the goal of entering the next round of NIST standardization.
ANR IDROMEL (2021 –> 2025)
Improving the Design of secure systems by a Reduction Of Micro-architectural Effects on side-channeL Attacks
Partners: LAAS-CNRS, LIP6, CEA, ARM, IRISA
The IDROMEL project aims to contribute to the design of secure systems against side-channel attacks based on power and electromagnetic observations, for a wide range of computing systems (from IoT devices to mobile phones). IDROMEL will investigate the impact of the processor micro-architecture on power and electromagnetic side-channel attacks as a key concern for the design of secure systems. IDROMEL will produce:
PQTLS, a project of the PEPR Quantique, organized a workshop at the École normale supérieure on the security of the recently submitted post-quantum signatures in June 2023, and another workshop with the French companies that are developing a quantum computer, also in June 2023, at the Cyber Campus in Paris.
We evaluated research projects for many research funding agencies.
Pierre-Alain Fouque is the scientific coordinator of the PEPR project PQTLS.