Grace combines expertise and deep knowledge in algorithmic number theory and algebraic geometry to build and analyse (public-key) cryptosystems and to design new error-correcting codes, with real-world concerns such as cybersecurity and blockchains in mind: software and hardware implementations, secure implementations in constrained environments, countermeasures against side-channel attacks, and white-box cryptography.
The foundations of Grace therefore lie in algorithmic number theory (fundamental algorithms for primality testing and factorization), number fields, the arithmetic geometry of curves, algebraic geometry, and the theory of algebraic codes.
Arithmetic Geometry is the meeting point of algebraic geometry and number theory: the study of geometric objects defined over arithmetic number systems. In our case, the most important objects are curves and their Jacobians over finite fields; these are fundamental to our applications in both coding theory and cryptology. Jacobians of curves are excellent candidates for cryptographic groups when constructing efficient instances of public-key cryptosystems, of which Diffie–Hellman key exchange is an instructive example.
Coding theory originated with the idea of using redundancy in messages to protect them against noise and errors. While the last decade of the 20th century saw the success of so-called iterative decoding methods, many new ideas have since emerged in the realm of algebraic coding, foremost among them list decoding and (zero-knowledge or otherwise) proofs of computation.
Part of the activities of the team is oriented towards post-quantum cryptography, based either on elliptic curves (isogenies) or on codes. The team also studies cryptography relevant to the blockchain arena.
The group is strongly invested in cybersecurity: software security, secure hardware implementations, privacy, etc.
Algorithmic Number Theory is concerned with replacing special cases with general algorithms to solve problems in number theory. In the Grace project, it appears in three main threads:
Clearly, we use computer algebra in many ways. Research in cryptology has motivated a renewed interest in algorithmic number theory in recent decades, but the fundamental problems remain of interest in their own right. While the application of algorithmic number theory to cryptanalysis is epitomized by the use of factorization to break RSA public keys, many other problems are relevant to various areas of computer science. Roughly speaking, the problems of the cryptological world are of bounded size, whereas algorithmic number theory is also concerned with asymptotic results.
Theme: Arithmetic Geometry: Curves and their Jacobians
Arithmetic Geometry is the meeting point of algebraic geometry and
number theory: that is, the study of geometric objects defined over
arithmetic number systems (such as the integers and finite fields).
The fundamental objects for our applications
in both coding theory and cryptology
are curves and their Jacobians over finite fields.
An algebraic plane curve over a field k is the set of solutions of a polynomial equation f(x, y) = 0 with coefficients in k. (Not every curve is planar: we may have more variables and more defining equations, but from an algorithmic point of view, we can always reduce to the plane setting.) The genus of a curve is its most important geometric invariant, and the Jacobian of a curve of genus g is a g-dimensional algebraic group (an abelian variety) encoding much of the curve's arithmetic. The simplest curves with nontrivial Jacobians are curves of genus 1, known as elliptic curves; they are typically defined by equations of the form y^2 = x^3 + Ax + B.
Theme: Curve-Based Cryptology
Jacobians of curves are excellent candidates for cryptographic groups when constructing efficient instances of public-key cryptosystems. Diffie–Hellman key exchange is an instructive example.
Suppose Alice and Bob want to establish a secure communication
channel. Essentially, this means establishing a common secret
key, which they will then use for encryption and decryption.
Some decades ago, they would have exchanged this key in person, or
through some trusted intermediary; in the modern, networked world,
this is typically impossible, and in any case completely unscalable.
Alice and Bob may be anonymous parties who want to do e-business, for
example, in which case they cannot securely meet, and they have no way
to be sure of each other's identities. Diffie–Hellman key exchange
solves this problem. First, Alice and Bob publicly agree on a cryptographic group G with a generator g. Alice chooses a secret integer a and sends g^a to Bob, while Bob chooses a secret integer b and sends g^b to Alice; each can then compute the shared secret g^(ab), while an eavesdropper seeing only g^a and g^b cannot.
This simple protocol has been in use, with only minor modifications, since the 1970s. The challenge is to create examples of groups in which the group operation is efficient, but the underlying discrete logarithm problem (DLP) is hard. The classic example of a group suitable for the Diffie–Hellman protocol is the multiplicative group of a finite field; there, however, index calculus gives a subexponential algorithm for the DLP, which forces inconveniently large parameters.
This is where Jacobians of algebraic curves come into their own.
First, elliptic curves and Jacobians of genus 2 curves do not have a
subexponential index calculus algorithm: in particular, from the point
of view of the DLP, a generic elliptic curve is currently as
strong as a generic group of the same size. Second, they provide some diversity: we have many degrees of freedom in choosing curves over a fixed base field.
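To make the protocol concrete, here is a minimal sketch of Diffie–Hellman over the multiplicative group of a prime field, with toy parameters and helper names of our own choosing (real deployments use standardized groups of cryptographic size, or the curve-based groups discussed above):

```python
import secrets

# Toy public parameters: a prime p and a generator g of (Z/pZ)*.
# Illustration only; real deployments use groups of cryptographic size.
p, g = 101, 2   # 2 generates the full group of order 100 modulo 101

def keygen():
    """Pick a secret exponent and the corresponding public value g^x mod p."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

a, A = keygen()   # Alice's secret and public values
b, B = keygen()   # Bob's secret and public values

# Each party combines its own secret with the other's public value.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob   = pow(A, b, p)   # (g^a)^b mod p
assert shared_alice == shared_bob   # both equal g^(ab) mod p
```

The security of the exchange rests on the hardness of recovering a from g^a in the chosen group, which is precisely where the choice of group matters.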
Theme: Coding Theory
Coding theory originated with the idea of using redundancy in messages to protect them against noise and errors. The last decade of the 20th century saw the success of so-called iterative decoding methods, which enable us to get very close to the Shannon capacity. The capacity of a given channel is the best achievable transmission rate for reliable transmission. The consensus in the community is that this capacity is more easily reached with these iterative and probabilistic methods than with algebraic codes (such as Reed–Solomon codes).
However, algebraic coding is useful in settings other than the Shannon context. Indeed, the Shannon setting is an average-case setting, and promises only a vanishing error probability. In contrast, the algebraic Hamming approach is a worst-case approach: under combinatorial restrictions on the noise, the noise can even be adversarial, and decoding succeeds with strictly zero failure probability.
These considerations were renewed by the topic of list decoding, after the breakthrough of Guruswami and Sudan at the end of the nineties. List decoding relaxes the uniqueness requirement of decoding, allowing a small list of candidates to be returned instead of a single codeword. List decoding can reach a capacity close to the Shannon capacity, with zero failure and small lists, even in the adversarial case. The method of Guruswami and Sudan enabled list decoding of most of the main algebraic codes: Reed–Solomon codes, algebraic-geometry (AG) codes, and related new constructions of "capacity-achieving list-decodable codes". These results open the way to applications against adversarial channels, which correspond to worst-case settings in classical computer science language.
Another avenue of our studies is AG codes over various geometric objects. Although Reed–Solomon codes are the best possible codes for a given alphabet, they are very limited in their length, which cannot exceed the size of the alphabet. AG codes circumvent this limitation, using the theory of algebraic curves over finite fields to construct long codes over a fixed alphabet. The striking result of Tsfasman–Vladut–Zink showed that codes better than random codes can be built this way, for medium to large alphabets. Disregarding the asymptotic aspects and considering only finite length, AG codes can be used either for longer codes with the same alphabet, or for codes with the same length with a smaller alphabet (and thus faster underlying arithmetic).
From a broader point of view, wherever Reed–Solomon codes are used, we can substitute AG codes with some benefits: either beating random constructions, or beating Reed–Solomon codes which are of bounded length for a given alphabet.
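As a concrete illustration, here is a minimal sketch of Reed–Solomon encoding over a small prime field (toy parameters and helper names of our choosing); it makes the length limitation explicit, since a codeword consists of evaluations of a polynomial at n distinct field elements, so n can never exceed the field size:

```python
# Minimal Reed-Solomon encoding sketch over the prime field GF(p).
# A message of k symbols is read as a polynomial of degree < k, and the
# codeword is its evaluation at n distinct field elements; this is why
# the length n can never exceed the field size p.

p = 97          # field size, and hence the maximum code length
n, k = 12, 4    # code length and dimension, with k <= n <= p

def rs_encode(message):
    """Encode k field elements as evaluations at x = 0, 1, ..., n-1."""
    assert len(message) == k and k <= n <= p
    def poly_eval(x):
        # Horner evaluation of the message polynomial at x, modulo p.
        acc = 0
        for coeff in reversed(message):
            acc = (acc * x + coeff) % p
        return acc
    return [poly_eval(x) for x in range(n)]

codeword = rs_encode([3, 1, 4, 1])
# Two distinct messages agree on at most k-1 evaluation points, so the
# minimum distance is n - k + 1: Reed-Solomon codes meet the Singleton
# bound, which is the sense in which they are optimal for their alphabet.
```

AG codes replace the evaluation points by the rational points of a curve over GF(p), of which there can be many more than p, lifting this length barrier.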
Another area of algebraic coding theory with which we have more recently been concerned is that of locally decodable codes. First introduced as theoretical objects, these codes are now beginning to find practical applications, most notably in cloud-based remote storage systems.
Theme: Cryptography
A huge amount of work is being put into developing an efficient quantum computer. Even if the advent of such a computer may still be decades away, it is urgent to deploy post-quantum cryptography (PQC), i.e. quantum-safe solutions on our current devices. Indeed, an attacker could store encrypted sessions today and wait until a quantum computer is available to decrypt them.
In this context, the National Institute of Standards and Technology (NIST) launched in 2017 a call for standardizing public-key PQC schemes (key exchanges and signatures). Among the mathematical objects used to design post-quantum primitives, one finds error-correcting codes, Euclidean lattices, and isogenies. Furthermore, in order to increase the diversity of the future post-quantum standards, NIST launched a second call for standardization in 2023.
The NIST standardization process is now in its final phase, and most of the selected solutions are based on codes and lattices. These preliminary results suggest that codes and lattices will soon form the foundation of our digital security. While isogenies are less represented, they remain of deep interest, since they appear to be the post-quantum solution offering the smallest key sizes. The purpose of our research program is to bring these approaches closer together, in order to improve their efficiency and diversity, and to increase our trust in them.
Proofs of computation are cryptographic protocols which allow a prover to convince a verifier that a statement, or the output of a computation, is correct. The prover is untrusted, in the sense that it may try to convince the verifier that a false statement is true. The verifier, on the other hand, is computationally restricted and has very little power: the proof should be short and easy to verify. These proofs can be interactive or not.
While the topic originated around 1990, several important steps towards practicality have been made in the last decade, with efficient real-life implementations and industrial deployments in recent years, backed by substantial funding.
There are several cryptographic paths for designing such proof systems. Within Grace, two main techniques are investigated. The first relies on elliptic curves and pairings, and produces very short (constant-size) proofs; the second relies on error-correcting codes and interactive oracle proofs, as in the STARK protocols discussed below.
We are interested in developing interactions between cryptography and cybersecurity. In particular, we conduct research in embedded security (side-channel and fault attacks), software security (finding vulnerabilities efficiently), and privacy (security of Tor).
While basic, standard blockchain designs rely, on the cryptographic side, on very basic and standard cryptographic primitives like signatures and hash functions, more elaborate cryptographic techniques can alleviate some shortcomings of blockchains, like poor bandwidth and lack of privacy.
The topic of verifiable computation consists in verifying heavy computations done by a remote computer, using a lightweight computer which is not able to do the computation itself. The remote computer, called the prover, is allowed to provide a proof alongside the result of the computation. This proof must be very short and fast to verify. It can also be made zero-knowledge, in which case the prover hides some inputs to the computation, yet proves that the result is correct.
These proofs allow data and computation to be moved off-chain, pushing the burden onto off-chain servers that play the role of provers: they commit short commitments of the updated data, accompanied by short proofs which are easy to verify on-chain, where validators play the role of verifiers. This mechanism is called a rollup, and it is at the core of the proposed "rollup-centric" path for scaling Ethereum, a predominant blockchain.
Also, Daniel Augot, together with Julien Prat (economist, ENSAE), is co-leading a Polytechnique teaching and research "chair", called Blockchain and B2B Platforms, funded by CapGemini, Caisse des Dépôts and Nomadic Labs. This is patronage, which funded Sarah Bordage's PhD thesis. It gives the team visibility and outreach beyond the academic sphere.
The team is concerned with several aspects of the reliability and security of cloud storage, addressed mainly with tools from coding theory. On the privacy side, we build protocols for so-called Private Information Retrieval (PIR), which enable a user to query a remote database for an entry without revealing the query. For instance, a user could query a service for stock quotes without revealing which company he is interested in. On the availability side, we study protocols for proofs of retrievability, which enable a user to get assurance that a huge file is still available on a remote server, using a low-bandwidth protocol which does not require downloading the whole file. For instance, in a peer-to-peer distributed storage system where nodes are rewarded for storing data, nodes can be audited with proof-of-retrievability protocols to make sure they indeed hold the data.
We investigate these problems using algebraic coding theory for the effective construction of protocols. In this respect, we mainly use locally decodable codes, and in particular high-rate lifted codes.
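For intuition, here is a minimal sketch of the classical two-server XOR-based PIR protocol of Chor, Goldreich, Kushilevitz and Sudan, which underlies much of this area; it is far simpler (and far less efficient) than the lifted-code protocols we study, and the names and toy database below are ours:

```python
import secrets

# Two-server XOR-based PIR: each (non-colluding) server sees a uniformly
# random subset of indices, so neither learns which bit the user wants.

db = [1, 0, 1, 1, 0, 0, 1, 0]   # toy database, replicated on both servers
n = len(db)

def server_answer(query):
    """XOR of the database bits selected by the query set."""
    acc = 0
    for idx in query:
        acc ^= db[idx]
    return acc

def pir_read(i):
    """Retrieve db[i] without revealing i to either server."""
    s1 = {j for j in range(n) if secrets.randbits(1)}   # uniform subset
    s2 = s1 ^ {i}                   # symmetric difference: flip index i
    return server_answer(s1) ^ server_answer(s2)        # equals db[i]

assert all(pir_read(i) == db[i] for i in range(n))
```

The communication here is proportional to the database size; locally decodable codes are precisely the tool that drives this cost down.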
Maxime Roméas is a PhD student of the team. (PhD grant from IP Paris/Ecole Polytechnique for a 3-year doctorate, Oct 2019-Sept 2022). The subject of his thesis is "The Constructive Cryptography paradigm applied to Interactive Cryptographic Proofs".
The Constructive Cryptography framework, introduced by Maurer in 2011, redefines basic cryptographic primitives and protocols starting from discrete systems of three types (resources, converters, and distinguishers). This not only permits constructing them effectively, but also lightens and sharpens their security proofs. One strength of this model is its composability. The purpose of the PhD is to apply this model to rephrase existing interactive cryptographic proofs so as to assert their genuine security, as well as to design new proofs. The main concern here is security and privacy in distributed storage settings. Another axis of the PhD is to augment the CC model by, e.g., introducing new functionalities to a so-called Server Memory Resource.
Wave is a post-quantum signature scheme based on hard problems in coding theory, originally proposed by Debris-Alazard, Sendrier, and Tillich.
Benjamin Smith defended his Habilitation à diriger des recherches, entitled "Advances in asymmetric cryptographic algorithms", on October 6, 2023.
The security of most code-based cryptosystems relies on the hardness of the so-called Decoding Problem. While its search version (given a random linear code and a noisy codeword, it should be hard to decode, i.e. to remove the error and recover the original message) is quite well understood, many proposals actually rely on the decision version, which can be formulated as follows: given a random linear code, it should be hard to distinguish between a uniformly random vector of the ambient space and a noisy codeword. This decision version can be thought of as the code-based analogue of the Decisional Diffie–Hellman problem, and for general random linear codes the search and decision problems are known to be equivalent. Such a result is known as a search-to-decision reduction. However, for efficiency purposes, it is very appealing to use algebraically structured codes, such as quasi-cyclic codes, that can be represented more compactly. In this situation, the hardness of the decision Decoding Problem is only conjectured. On the other hand, one of the reasons for the success of lattice-based cryptography is that it benefits from a rich literature of security reductions, for both general lattices and so-called structured lattices, i.e. lattices arising from orders of number fields.
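To fix ideas, here is a minimal sketch of the two distributions that the decision Decoding Problem asks to distinguish, over the binary field and with a fixed error weight (toy parameters and helper names of our choosing; real instances are vastly larger):

```python
import secrets

# Decisional decoding over GF(2): distinguish (G, mG + e) from (G, u)
# with u uniform, where G is a random k x n generator matrix and e is
# a random error of Hamming weight t.
n, k, t = 24, 12, 3   # toy parameters, far too small to be hard

def rand_bits(length):
    return [secrets.randbits(1) for _ in range(length)]

def sample(noisy_codeword):
    G = [rand_bits(n) for _ in range(k)]        # random linear code
    if not noisy_codeword:
        return G, rand_bits(n)                  # uniform vector
    m = rand_bits(k)
    word = [sum(m[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
    for j in secrets.SystemRandom().sample(range(n), t):
        word[j] ^= 1                            # add a weight-t error
    return G, word

# A distinguisher is handed sample(b) for a hidden bit b and must guess
# b; the hardness assumption is that it cannot do much better than 1/2.
```

For structured (e.g. quasi-cyclic) codes, G is instead drawn from a much smaller, algebraically structured family, and it is there that the hardness of the decision version is only conjectured.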
In 44, based on a strong analogy between number fields and function fields, and especially using Carlitz modules, which can be considered as an analogue of cyclotomic number fields in positive characteristic, we introduce a new generic problem that we call the Function Field Decoding Problem, and derive the first search-to-decision reduction in this context.
In 19, we revisit the tool called the OHCP framework (for Oracle with Hidden Center Problem), which was introduced by Peikert et al. (STOC 2017) in the lattice-based context. This framework has proved to be very useful as a black box inside reductions, and we have adapted it to the code-based context by extracting its very essence, namely the Oracle Comparison Problem (OCP). This yielded a new worst-case to average-case search-to-decision reduction. We then turned to the structured versions and explain why this is not as straightforward as for Euclidean lattices. While we do not obtain a search-to-decision reduction for structured codes, we believe that our work opens the way towards new reductions for structured codes, given that the OHCP framework proved so powerful in lattice-based cryptography. Furthermore, we believe that this technique could be extended to codes endowed with other metrics, such as the rank metric, for which no reduction is known.
The security of code-based cryptography relies primarily on the hardness of generic decoding of linear codes. The best generic decoding algorithms are all improvements of an old algorithm due to Prange, and are known under the name of information set decoders (ISD). A while ago, a generic decoding algorithm which does not belong to this family was proposed: statistical decoding. It is a randomized algorithm that requires the computation of a large set of parity-checks of moderate weight, and uses some kind of majority voting on these equations to recover the error we are looking for in the decoding problem. This algorithm was long forgotten, because even its best variants performed poorly compared to the simplest ISD algorithm. In 46, we revisit this old algorithm by using parity-check equations in a more general way. Here the parity-checks are used to get LPN samples whose secret is part of the error, with an LPN noise related to the weight of the parity-checks we produce. The corresponding LPN problem is then solved by standard Fourier techniques. By properly choosing the method of producing these low-weight equations and the size of the LPN problem, we are able in this way to significantly outperform information set decoders at code rates smaller than 0.3. For the first time in 60 years, this gives a better decoding algorithm, for a significant range of rates, which does not belong to the ISD family.
In 31, we revisit RLPN decoding by noticing that, in this algorithm, decoding is in fact reduced to a sparse-LPN problem, namely one with a secret of small Hamming weight. Our new approach consists this time in making an additional reduction from sparse-LPN to plain-LPN, with a coding approach inspired by coded-BKW. It significantly outperforms the ISD algorithms and RLPN for code rates smaller than 0.42. This algorithm can be viewed as the code-based cousin of recent dual attacks in lattice-based cryptography. We depart completely from the traditional analysis of this kind of algorithm, which uses a number of independence assumptions that have recently been strongly questioned in the lattice domain. We give instead a formula for the LPN noise relying on duality, which allows the behavior of the algorithm to be analyzed through the analysis of a certain weight distribution alone. Using only a minimal assumption, whose validity has been verified experimentally, we are able to justify the correctness of our algorithm. This key tool, the duality formula, can be readily adapted to the lattice setting, and is shown to give a simple explanation for some phenomena observed in dual attacks on lattices.
In 23, we propose a new attack on rank-metric-based encryption schemes, by revisiting and extending Overbeck's attack. This novel approach involves the computation of the codes' stabilizer algebras and their Artin–Wedderburn decomposition. In particular, it permitted us to break the system proposed in 48.
In another work 22, we proposed a new distinguisher on binary and
Consider the basic problem of efficiently evaluating an isogeny
Secure Multiparty Computation (MPC) is a famous paradigm in which players, each holding secret data, are able to perform a computation involving all of these secret data without learning more information than the result of the computation.
Following the seminal work of Beaver 42, efficient secure multiparty computation can be performed thanks to a precomputation step in which the parties receive correlated pseudo-random strings called Oblivious Linear Evaluations (OLEs). In 18, we proposed a new efficient construction of OLEs whose security rests on a new problem called Quasi-Abelian Syndrome Decoding. This new construction permits the generation of very long pseudo-random correlated strings over small fields, whereas the best previous construction required working over a very large field.
Suppose the user of a small device requires a powerful computer to perform a heavy computation for him; the computation cannot be performed by the device itself. After completing the computation, the powerful computer reports a result. Suppose now that the user does not have full confidence that the remote computer computes correctly or behaves honestly. How can the user be assured that the correct result has been returned, given that he cannot redo the computation?
The topic of verifiable computation deals with this issue. Essentially, it is a cryptographic protocol in which the prover (i.e. the remote computer) provides a proof to a weak verifier (i.e. the user) that a computation is correct. The protocol may be interactive, in which case there may be one or more rounds of interaction between the prover and the verifier, or non-interactive, in which case the prover simply sends a proof that the computation is correct.
These protocols admit zero-knowledge variants, where the scenario is different: a service performs a computation on data, part of which remains private (for instance, statistics on citizens' incomes). It is possible for the service to prove the correctness of the result without revealing the private data (which has to be committed to anyway).
Two directions for building these protocols are discrete logarithms (and pairings) on elliptic curves, and a coding-theoretic setting (originating in the PCP theorem). Both variants admit a zero-knowledge version, and the core of our research is more on provable computation than on the zero-knowledge aspect, which comes rather easily in comparison.
In the coding-theoretic setting, these protocols have been made popular, particularly in the blockchain area, under the name of (ZK-)STARKs (Scalable Transparent ARguments of Knowledge), introduced in 2018. The short non-interactive proofs are derived from protocols called IOPs (Interactive Oracle Proofs), which combine IPs (Interactive Proofs) and PCPs (Probabilistically Checkable Proofs) so as to get the best of both worlds and make PCPs practical.
At the core of these protocols lies the following coding problem: how to decide, with high confidence, that a very long ambient word is close to a given code, while looking at very few coordinates of it.
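For Reed–Solomon codes, the archetypal answer is the folding technique of the "FRI" protocol discussed below; here is a minimal sketch of one folding round, with toy parameters and naming of our own (the full protocol iterates the fold and spot-checks consistency between rounds at random positions):

```python
# One folding round of an FRI-style Reed-Solomon proximity test, GF(p).
# A codeword is the evaluation table of a low-degree f on a subgroup of
# (Z/pZ)*; writing f(x) = f_e(x^2) + x*f_o(x^2), a verifier challenge r
# folds it to the table of f_e + r*f_o, halving both domain and degree.

p, m = 97, 8                               # field and domain size
omega = next(w for w in range(2, p)        # an element of order 8
             if pow(w, m, p) == 1 and pow(w, m // 2, p) != 1)
domain = [pow(omega, i, p) for i in range(m)]   # domain[i+4] = -domain[i]
inv2 = pow(2, p - 2, p)                    # 1/2 modulo p

def fold(values, r):
    """Fold an evaluation table on <omega> to one on <omega^2>."""
    half = len(values) // 2
    out = []
    for i in range(half):
        x = domain[i]
        fe = (values[i] + values[i + half]) * inv2 % p             # f_e(x^2)
        fo = (values[i] - values[i + half]) * inv2 * pow(x, p - 2, p) % p
        out.append((fe + r * fo) % p)               # (f_e + r*f_o)(x^2)
    return out

f = lambda x: (3 + 2*x + 5*x*x + x**3) % p   # degree < 4, rate 1/2
folded = fold([f(x) for x in domain], r=11)  # degree < 2 on 4 points
# Iterating the fold ends at a constant table; a word far from the code
# stays far after a random fold (w.h.p.), and spot checks catch cheaters.
```

The difficulty addressed in 45 is that this folding exploits the structure of Reed–Solomon codes, and generalizing it to AG codes requires curves with suitable automorphisms.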
An important issue is to have a smaller alphabet, and this can be done using algebraic-geometry codes. This was done by Sarah Bordage, Matthieu Lhotel, Jade Nardi and Hugues Randriambololona 45, using curves with a solvable automorphism group, which enables the construction of codes that are foldable in a way similar to the Reed–Solomon codes folded in the "FRI" protocol 43. Their protocol has very good performance, akin to the Reed–Solomon case. Towers of curves are considered for this construction, to enable good asymptotic results.
The internship of Tanguy Medevielle in Spring 2023 allowed us to prove that these codes admit a quasi-linear encoding algorithm under mild constraints.
In collaboration with Matthieu Rambaud (Télécom Paris), Daniel Augot is advising Angelo Saadeh. The issue addressed is the following: two parties, each privately holding distinct slices of common data, wish to compute a logistic regression on the whole data set, without either party revealing its data to the other.
Computing a common output from the inputs of several participants, as above, is done in cryptography using Secure Multiparty Computation (MPC), as introduced by Yao 49 and recently made practical, with several implementations. Yet, as classically observed in MPC, the actual result, once learned, may leak information about the secret inputs. The same problem occurs here, where the model may leak information about the data.
Thus it is natural to investigate the use of differential privacy on top of MPC, so that the computed model itself does not reveal too much about the underlying data.
During the period, we investigated the Updatable Encryption (UE) functionality. UE protocols allow a client who has outsourced his encrypted data to have it updated by an untrusted server. To do so, the client generates a token, depending on the old key and the new key, and gives it to the server to update all ciphertexts (the ciphertext-independent setting); of course, the token should not reveal anything about the keys. An issue here, besides security, is keeping the communication complexity as low as possible. The security definitions for UE schemes have been constantly updated since the seminal paper of D. Boneh in 2013; however, the security notion best suited for a particular application remained unclear. We solved the problem in the ciphertext-independent setting. In particular, we proved that IND-UE-RCCA security is the right notion for many practical UE schemes. As a consequence, we notably rectify a previously believed assertion, according to which IND-UE security is stronger than the IND-ENC+UPD notions in that it hides the ages of ciphertexts: we show that this is true only when ciphertexts can leak at most once per epoch (an epoch being a time period between two key updates). We also give a clear description of the post-compromise security guarantees of such schemes. This work has been submitted to the Journal of Cryptology.
Basic isogeny computations require the use of modular polynomials in two ways. The roots of a modular polynomial first indicate the existence of curves isogenous to the curve of interest. Second, these isogenous curves are computed using explicit formulas involving derivatives of the modular polynomial, as first described by Atkin for two families of modular polynomials. The height of the polynomial is critical, since it is the dominant parameter in the complexity analysis of the various methods used to compute them. We started to investigate the theory and practice of modular polynomials, in search of smaller and easier families to be used for the efficient computation of isogenies. See the two preprints 35 and 36.
Shane Gibbons, PhD student from CWI Amsterdam, visited the team for one month in March 2023 in order to work on code and lattice equivalence problems.
ENCODE project on cordis.europa.eu
QSNP is a European Quantum Flagship project that aims to develop quantum cryptography technology to secure the transmission of information over the internet.
QSNP will contribute to European sovereignty in quantum technology for cybersecurity, protecting the privacy and sensitive information of European citizens transmitted over the internet.
ANR CIAO
(Cryptography, Isogenies, and Abelian varieties Overwhelming)
is a JCJC 2019 project, led by Damien Robert (Inria EP LFANT).
This project, which started in October 2019,
will examine applications of higher-dimensional abelian
varieties in isogeny-based cryptography.
ANR COLA (An interface between COde and LAttice-based cryptography) is an ANR JCJC project from the generic call (Appel à projets générique, Défi 9: Liberté et sécurité de l'Europe, de ses citoyens et de ses résidents, Axe 4: Cybersécurité). Starting in October 2021 and led by Thomas Debris-Alazard, it focuses on bringing post-quantum solutions based on codes and lattices closer together, to improve our trust in cryptanalysis and to open new perspectives in terms of design.
BARRACUDA is a collaborative ANR project accepted in 2021 and led by
Website : barracuda.inria.fr
The project gathers specialists of coding and cryptology on one hand, and specialists of number theory and algebraic geometry on the other. The objectives concern problems arising from modern cryptography that require advanced algebraic objects and techniques: for instance, mathematical problems with applications to distributed storage, multiparty computation, or zero-knowledge proofs for protocols.
SANGRIA is a collaborative ANR project accepted in 2021.
Website : lip6.fr/Damien.Vergnaud/projects/sangria/
The main scientific challenges of the SANGRIA project (Secure distributed computAtioN: cryptoGRaphy, combinatorIcs and computer Algebra) are (1) to construct specific protocols that take practical constraints into account and prove them secure, and (2) to implement them and to significantly improve the efficiency of existing protocols. The project combines research from cryptography, combinatorics and computer algebra; it is expected to impact central problems in secure distributed computation, while enriching the general landscape of cryptography.
MobiS5 is a collaborative ANR project accepted in 2018.
Website : mobis5.limos.fr/
MobiS5 will aim to foresee and counter the threats posed in 5G architectures by the architectural modifications suggested in TR 22.861-22.864. Concretely, we will provide a provably-secure cryptographic toolbox for 5G networks, validated formally and experimentally, responding to the needs of 5G architectures at three levels:
* Challenge 1: security in the network infrastructure and end points, including core network security and attack detection and prevention;
* Challenge 2: cryptographic primitives and protocols, notably a selection of basic primitives, an authenticated key-exchange protocol, tools to compute on encrypted data, and post-quantum cryptographic countermeasures;
* Challenge 3: mobile applications, specifically the use case of a secure server that aids or processes outsourced computation, and the example of a smart home.
CryptiQ is a collaborative ANR project accepted in 2018.
The goal of the CryptiQ project is to anticipate major changes due to quantum computing, by considering three plausible scenarios, from the closest to the furthest foreseeable future, depending on the means of the adversary and of the honest parties. In the first scenario, the honest execution of protocols remains classical, while the adversary may have oracle access to a quantum computer. This is so-called post-quantum cryptography, which is the best-known setting. In the second scenario (quantum-enhanced classical cryptography), we allow honest parties access to quantum technologies in order to achieve enhanced properties, but we restrict this access to those quantum technologies that are currently available (or that can be built in the near term); the adversary is still allowed to use any quantum technology. Finally, in the third scenario (cryptography in a quantum world), we allow an adversary the most general quantum operations, and we consider that anybody can now have access to both quantum communication and computation.
This integrated project (projet intégré) aims to develop, within 5 years, post-quantum cryptographic primitives to be implemented in an open-source web browser.
The evolution of cryptographic standards has already begun. The choice of new primitives will be made soon, and the transition should be carried out within a few years. The objective of the project is to play a crucial role in this evolution, so that French researchers, who are already strongly involved in this process, can influence the choice of cryptographic standards in the coming years.
RIOT-fp is a research project on cyber-security targeting low-end, microcontroller-based IoT devices, on which run operating systems such as RIOT and a low-power network stack. It links the project-teams EVA, GRACE, PROSECCO, TRiBE, and TEA. Taking a global and practical approach, RIOT-fp gathers partners planning to enhance RIOT with an array of security mechanisms. The main challenges tackled by RIOT-fp are:
Beyond academic outcomes, the output of RIOT-fp is open source code published, maintained and integrated in the open source ecosystem around RIOT. As such, RIOT-fp strives to contribute usable building blocks for an open source IoT solution improving the typical functionality vs. risk tradeoff for end-users.
The Action Exploratoire CACHAÇA, led by