OURAGAN proposes to focus on the transfer of computational algebraic methods to some related fields (computational geometry, topology, number theory, etc.) and some carefully chosen application domains (robotics, control theory, evaluation of the security of cryptographic systems, etc.), which implies working equally on the use (modeling, know-how) and on the development of new algorithms. Recent breakthrough developments and applications where algebraic methods are currently decisive remain few and narrowly targeted. We wish to increase both the impact of these methods and the number of domains where the use of computational algebraic methods represents a significant added value. This transfer-oriented positioning does not imply that we stop working on the algorithms; it simply sets the priorities.
An original aspect of the OURAGAN proposal is to blend into an environment of fundamental mathematics, at the Institut de Mathématiques de Jussieu – Paris Rive Gauche (IMJ-PRG, CNRS 7586), and to be cross-functional to several teams (Algebraic Analysis, Complex Analysis and Geometry, Number Theory, to name only the main ones), which will be our first source of transfer of computational know-how. The success of this coupling allows us to maintain a strong theoretical basis, to measure objectively our transfer activity in the direction of mathematicians (in geometry, topology, number theory, algebraic analysis, etc.), and to consolidate the presence of Inria in some of the most theoretical scientific areas.
We propose three general directions with five particular targets:
These actions come, of course, in addition to the study and development of a common set of core computable objects and algorithms.
This core activity is the invention and study of fundamental algebraic algorithms and objects that can be grouped into two categories, algorithms designed to operate on finite fields and algorithms running on fields of characteristic 0, with two types of computational strategies: exactness and the use of approximate arithmetic (but with exact results). This mix also fosters joint studies between the various axes and is an originality of the project-team. For example, many kinds of arithmetic tools around algebraic numbers face similar theoretical problems, such as finding a good representation for a number field; almost all problems related to the resolution of algebraic systems reduce to the study of varieties in small dimension and in particular, most of the time, to the effective computation of the topology of curves and surfaces, or the certified drawing of non-algebraic functions over an algebraic variety.
The tools and objects developed for research in algorithmic number theory as well as in computational geometry apply quite directly to a selection of connected and challenging subjects:
These applications will serve as an evaluation of the general tools we develop when used in a different context, in particular of their capability to tackle state-of-the-art problems.
The basic computable objects and algorithms we study, use, optimize or develop are among the most classical ones in computer algebra and are studied by many people around the world: they mainly focus on basic computer arithmetic, linear algebra, lattices, and both polynomial system and differential system solving.
In the context of OURAGAN, it is important to avoid reinventing the wheel and to reuse, wherever possible, existing objects and algorithms, not necessarily developed in our team, so that the main effort is focused on finding good formulations/models for an efficient use. Also, our approach for the development of basic computable objects and algorithms is application-driven and follows a simple strategy: use the existing tools in priority, develop missing tools when required, and then optimize the critical operations.
First, for some selected problems, we propose and develop general key algorithms (isolation of real roots of univariate polynomials, parametrizations of solutions of zero-dimensional polynomial systems, solutions of parametric equations, equidimensional decompositions, etc.) in order to complement the existing set of computable objects developed and studied around the world (Gröbner bases, resultants 70, subresultants 92, critical-point methods 47, etc.), which are also deeply used in our developments. Second, for a selection of well-known problems, we propose different computational strategies (for example, the use of approximate arithmetic to speed up the LLL algorithm or root isolators, while still certifying the final result). Last, we propose specialized variants of known algorithms optimized for a given problem (for example, dedicated solvers for degenerate bivariate systems to be used in the computation of the topology of plane curves).
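To make the flavor of these key algorithms concrete, here is a minimal sketch (in Python, with exact rational arithmetic) of real root isolation for a squarefree univariate polynomial via Sturm sequences and bisection. This is only an illustration of the principle, not the optimized algorithms mentioned above, which rely on much finer tools (Descartes' rule of signs, adaptive precision, etc.).

```python
from fractions import Fraction

def evaluate(p, x):
    """Evaluate p (coefficient list, constant term first) at x by Horner."""
    acc = Fraction(0)
    for c in reversed(p):
        acc = acc * x + c
    return acc

def derivative(p):
    return [i * c for i, c in enumerate(p)][1:]

def remainder(a, b):
    """Remainder of the Euclidean division of a by b over the rationals."""
    a = [Fraction(c) for c in a]
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= q * b[i]
        a.pop()
        while a and a[-1] == 0:
            a.pop()
    return a

def sturm_chain(p):
    chain = [[Fraction(c) for c in p], [Fraction(c) for c in derivative(p)]]
    while chain[-1]:
        r = remainder(chain[-2], chain[-1])
        if not r:
            break
        chain.append([-c for c in r])
    return chain

def sign_variations(chain, x):
    signs = [s for s in (evaluate(q, x) for q in chain) if s != 0]
    return sum(1 for u, v in zip(signs, signs[1:]) if u * v < 0)

def isolate(p, lo, hi):
    """Disjoint intervals (a, b], each containing exactly one real root of p.
    Assumes p squarefree and p(lo), p(hi) nonzero (Sturm's theorem)."""
    chain = sturm_chain(p)
    count = lambda a, b: sign_variations(chain, a) - sign_variations(chain, b)
    todo, boxes = [(Fraction(lo), Fraction(hi))], []
    while todo:
        a, b = todo.pop()
        n = count(a, b)
        if n == 1:
            boxes.append((a, b))
        elif n > 1:                     # more than one root: bisect
            m = (a + b) / 2
            todo += [(a, m), (m, b)]
    return sorted(boxes)

# x^3 - 3x + 1 has three real roots (~ -1.88, 0.35, 1.53)
print(isolate([1, -3, 0, 1], -10, 10))
```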
In the activity of OURAGAN, many key objects or algorithms around the resolution of algebraic systems are developed or optimized within the team, such as the resolution of polynomials in one variable with real coefficients 111, 17, rational parameterizations of solutions of zero-dimensional systems with rational coefficients 55, 16, or discriminant varieties for solving systems depending on parameters 14, but we are also power users of existing software (mainly Sage 1, Maple 2, Pari-GP 3, SnapPea 4) and libraries (mainly gmp 5, mpfr 6, flint 7, arb 8, etc.) to which we contribute when it makes sense.
For our studies in number theory and applications to the security of cryptographic systems, our team works on three categories of basic algorithms: discrete logarithm computations 106 (for example, to make progress on the computation of class groups in number fields 93), lattice reductions by means of LLL variants 81 and, obviously, various computations in linear algebra, for example dedicated to nearly sparse matrices 107.
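As a toy illustration of the discrete logarithm problem itself (not of the index-calculus, NFS, or FFS algorithms used for records, which are far more sophisticated), the classical baby-step giant-step method already exhibits the generic square-root barrier:

```python
from math import isqrt

def dlog_bsgs(g, h, p):
    """Find x with g^x = h (mod p), p prime, by baby-step giant-step.
    O(sqrt(p)) time and memory: generic methods do not scale, which is
    why subexponential algorithms (index calculus, NFS, FFS) matter."""
    m = isqrt(p - 1) + 1
    baby, e = {}, 1
    for j in range(m):                  # baby steps: store g^j
        baby.setdefault(e, j)
        e = e * g % p
    step = pow(g, -m, p)                # g^{-m} mod p (Python >= 3.8)
    gamma = h % p
    for i in range(m):                  # giant steps: h * g^{-i m}
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None                         # h not in the subgroup generated by g

p, g = 1009, 11
h = pow(g, 123, p)
x = dlog_bsgs(g, h, p)
assert pow(g, x, p) == h
```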
For the algorithmic approach to algebraic analysis of functional equations 51, 109, 110, we developed effective versions of module theory and homological algebra 141 over certain noncommutative polynomial rings of functional operators 4, of Stafford's famous theorems on the Weyl algebras 131, of the equidimensional decomposition of functional systems 128, etc.
Finally, we study effective methods in algebraic topology, with a view towards the computation of normal forms or bases, and the construction of small resolutions of various algebraic structures: monoids and groups, algebras and operads, categories and higher structures, etc. The construction methods can come from combinatorial group theory (rewriting, Garside structures), combinatorial algebra (Gröbner bases), or homological algebra (Koszul duality, Morse theory). We explore potential deep foundational connections between these different points of view, to unify, generalise and improve them.
Many frontiers between computable objects, algorithms (above section), computational number theory and applications, especially in cryptography, are porous. However, one can classify our work in computational number theory into two classes of studies: computational algebraic number theory and (rigorous) numerical computations in number theory.
Our work on rigorous numerical computations is somehow a transverse activity in OURAGAN: floating-point arithmetic is used in many basic algorithms we develop (root isolation, LLL) and is thus present in almost all our research directions. However, there are specific developments that could be labeled as number theory, in particular contributions to numerical evaluations for the L-functions and Modular Forms Database 9, a worldwide collaborative project.
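As a small taste of such numerical evaluations (here with mpmath's arbitrary-precision but non-certified arithmetic; certified computations would instead use ball arithmetic as in the arb library cited above), one can check that the Riemann zeta function nearly vanishes at its first nontrivial zero:

```python
from mpmath import mp, zeta, zetazero

mp.dps = 30                  # thirty decimal digits of working precision
rho = zetazero(1)            # first nontrivial zero, 1/2 + 14.1347...j
print(rho)
print(abs(zeta(rho)))        # ~ 1e-30: zeta vanishes there, up to precision
```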
Our work in computational algebraic number theory is driven by algorithmic improvements for solving presumably hard problems relevant to cryptography. The use of number-theoretic hard problems in cryptography dates back to the invention of public-key cryptography by Diffie and Hellman 77, who proposed a first instantiation of their paradigm based on the discrete logarithm problem in prime fields. The invention of RSA 139, based on the hardness of factoring, came as a second example. The introduction of discrete logarithms on elliptic curves 112, 45 only confirmed this trend.
These cryptosystems attracted a lot of interest to the problems of factoring and discrete logarithms. Their study led to the invention of fascinating new algorithms that solve these problems much faster than initially expected:
Since the invention of the NFS in the 1990s, many optimizations of this algorithm have been performed. However, no algorithm with better complexity has been found for factoring or for discrete logarithms in large characteristic.
While factorization and discrete logarithm problems have a long history in cryptography, the recent post-quantum cryptosystems introduce a new variety of presumably hard problems/objects/algorithms with cryptographic relevance: the shortest vector problem (SVP), the closest vector problem (CVP) or the computation of isogenies between elliptic curves, especially in the supersingular case.
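Lattice problems such as SVP are attacked in practice through basis reduction. The following is a minimal textbook LLL sketch in exact rational arithmetic, for illustration only: production implementations (e.g. fplll) use floating-point orthogonalization with certified bounds, and the Gram-Schmidt data would be updated incrementally rather than recomputed after each swap.

```python
from fractions import Fraction

def lll(basis, delta=Fraction(3, 4)):
    """Textbook LLL lattice basis reduction for integer input vectors."""
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))

    def gso():
        """Gram-Schmidt orthogonalization: returns (b*, mu)."""
        bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        bstar, mu = gso()
        for j in range(k - 1, -1, -1):          # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                mu[k][j] -= q
                for jj in range(j):
                    mu[k][jj] -= q * mu[j][jj]
        lhs = dot(bstar[k], bstar[k])
        rhs = (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1])
        if lhs >= rhs:                          # Lovász condition holds
            k += 1
        else:                                   # otherwise swap and backtrack
            b[k - 1], b[k] = b[k], b[k - 1]
            k = max(k - 1, 1)
    return [[int(x) for x in v] for v in b]

print(lll([[201, 37], [1648, 297]]))   # returns a much shorter basis
```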
Members of OURAGAN started working on the topic of discrete logarithms around 1998, with several computation records that were announced on the NMBRTHRY mailing list.
In large characteristic, especially for the case of prime fields, the best current method is the number field sieve (NFS) algorithm. In particular, members of the team published the first NFS-based record computation 13.
Despite huge practical improvements, the prime-field case of the algorithm hasn't fundamentally changed since that first record.
Around the same time, we also presented small-characteristic computation records based on simplifications of the Function Field Sieve (FFS) algorithm 105.
In 2006, important changes occurred concerning the FFS and NFS algorithms: while these algorithms previously covered only the extreme cases of constant characteristic and of constant extension degree, two papers extended their ranges of applicability to all finite fields. At the same time, this permitted a big simplification of the FFS, removing the need for function fields.
Starting from 2012, new results appeared in small characteristic. Initially based on a simplification of the 2006 result, they quickly blossomed into the Frobenius representation methods, with quasi-polynomial time complexity 106, 94.
An interesting side effect of this research was the need to revisit the key sizes of pairing-based cryptography. This type of cryptography, introduced in 2000 12, is also a topic of interest for OURAGAN.
The computations of class groups in number fields have strong links with the computations of discrete logarithms or factorizations using the NFS (number field sieve) strategy which, as the name suggests, is based on the use of number fields. Roughly speaking, the NFS algorithm uses two number fields, and the strategy consists in choosing number fields with small coefficients in their defining polynomials. On the contrary, in class group computations there is a single number field, which is clearly a simplification, but this field is given as input by some fixed defining polynomial. Obviously, the degree of this polynomial as well as the size of its coefficients both influence the complexity of the computations, so that finding other polynomials representing the same class group but with a better characterization (degree or coefficient sizes) is a mathematical problem with direct practical consequences. We proposed a method to address this problem 93, but many issues remain open.
Computing generators of principal ideals of cyclotomic fields is also strongly related to the computation of class groups in number fields. Ideals in cyclotomic fields are used in a number of recent public-key cryptosystems. Among the difficult problems that ensure the security of these systems, one consists in finding a small generator of an ideal, if it exists. The case of cyclotomic fields is considered in 50.
There is a tradition of using computations and software to study and understand the topology of small-dimensional manifolds, going back at least to Thurston's works (and, before him, Riley's pioneering work). The underlying philosophy of these tools is to build combinatorial models of manifolds (for example, the torus is often described as a square with an identification of the sides). For dimensions 2, 3 and 4, this approach is relevant and effective. In the OURAGAN team, we focus on dimension 3, where manifolds are modeled by a finite number of tetrahedra with identifications of the faces. The software SnapPy 10 implements this strategy 144 and is regularly used as a starting point in our work. Along the same philosophy of implementation, we can also cite Regina 11. A specific trait of SnapPy is that it focuses on hyperbolic structures on 3-dimensional manifolds. This setting is the object of a huge amount of theoretical work that was used to speed up computations. For example, Newton methods were implemented without certification for solving systems of equations, but the theoretical knowledge of the uniqueness of the solution made this implementation efficient enough for the target applications. In recent years, partly under the influence of our team 12, more attention has been given to certified computations (at least with error control), and this is now implemented in SnapPy.
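For illustration, the basic SnapPy workflow looks as follows (assuming the snappy Python package is installed; the verified routines mentioned in the comments additionally require the SageMath backend):

```python
import snappy

# The figure-eight knot complement: a cusped hyperbolic 3-manifold
# triangulated by just two ideal tetrahedra.
M = snappy.Manifold("4_1")
print(M.num_tetrahedra())     # 2
print(M.volume())             # 2.0298832... (Newton-based, non-certified)

# With the SageMath backend, interval-arithmetic certification is available:
#   M.volume(verified=True)
#   M.verify_hyperbolicity()
```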
This philosophy (modeling manifolds by quite simple combinatorial models in order to compute objects as complicated as representations of the fundamental group) was applied in pioneering work of Falbel 8 when he began to look for another type of geometry on 3-dimensional manifolds (called CR-spherical geometry). From a computational point of view, this change of objectives was a jump into the unknown: the theoretical justification for the computations was missing, and the number of variables of the systems was multiplied by four. So instead of a relatively small system that could be tackled by Newton methods and numerical approximations, we had to deal with relatively big systems (the smallest example having 8 variables of degree 6) with no a priori description of the solutions.
Still, the computable objects that appear from the theoretical study are very often out of the reach of automated computations and have to be handled case by case. A few experts around the world have been tackling this kind of computation (Dunfield, Goerner, Heusener, Porti, Tillman, Zickert), and the main current achievement is the Ptolemy module 13 for SnapPy.
From these early computational needs, topology in small dimension has historically been the source of collaboration with the IMJ-PRG laboratory.
At the beginning, the goal was essentially to provide computational tools for finding geometric structures on triangulated 3-dimensional manifolds.
Triangulated manifolds can be topologically encoded by a collection of tetrahedra with gluing constraints (this can be called a triangulation or a mesh, but it is not an approximation of the manifold by simple structures, rather a combinatorial model).
Imposing a geometric structure on this combinatorial object defines a number of constraints that we can translate into an algebraic system, which we then have to solve in order to study geometric structures on the initial manifold, for example by using the solutions to study representations of the fundamental group of the manifold.
For these studies, a large part of the computable objects or algorithms we develop are required, from the algorithms for univariate polynomials to systems depending on parameters. It should be noted that most of the computational work lies in the modeling of problems 497, which have strictly no chance of being solved by blindly running the most powerful black boxes: we usually deal here with systems that have 24 to 64 variables, depend on 4 to 8 parameters, and have degrees exceeding 10 in each variable. With ANR funding 14 on the subject, the progress we made 86 was (much) more significant than expected. In particular, we have introduced new computable objects with an immediate theoretical meaning (let us say rather with a theoretical link established with the usual objects of the domain), namely the so-called deformation variety.
Knot theory is a wide area of mathematics. We are interested in polynomial representations of long knots, that is to say polynomial embeddings of the real line into 3-dimensional real space.
A Chebyshev knot 114 is a polynomial knot parameterized by a Chebyshev curve x = T_a(t), y = T_b(t), z = T_c(t + φ), where T_n denotes the n-th Chebyshev polynomial of the first kind.
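For instance, such a curve is easy to sample numerically with numpy; the parameters (a, b, c) = (3, 4, 5) below, with a small phase, are a choice known to yield a trefoil (the phase value here is illustrative):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Sample the Chebyshev curve x = T_3(t), y = T_4(t), z = T_5(t + phi).
a, b, c, phi = 3, 4, 5, 0.05
T = lambda n: Chebyshev.basis(n)
t = np.linspace(-1.2, 1.2, 4000)
x, y, z = T(a)(t), T(b)(t), T(c)(t + phi)

# The crossings of the plane projection are the parameter pairs (s, t),
# s != t, with T_a(s) = T_a(t) and T_b(s) = T_b(t): exactly the kind of
# small polynomial system mentioned in the text.
```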
Our activity in knot theory is a bridge between our work in computational geometry (topology and drawing of real space curves) and our work on topology in small dimension (manifolds defined as knot complements).
Two-bridge knots (or rational knots) are particularly studied because they are much easier to handle; the first 26 knots (except 8_5 and 8_10) are two-bridge knots. We made use of Chebyshev polynomials as well as Fibonacci polynomials, which are families of orthogonal polynomials. Considering the Alexander-Conway polynomials as continuant polynomials in the Fibonacci basis, we were able to give a partial answer to Hoste's conjecture on the roots of Alexander polynomials of alternating knots 116.
We study the lexicographic degree of two-bridge knots, that is to say the minimal (multi)degree of a polynomial representation of such a knot.
The drawing of algebraic curves and surfaces is a critical action in OURAGAN, since it is a key ingredient in numerous developments. For example, a certified plot of a discriminant variety could be the only admissible answer that can be proposed for engineering problems that need the resolution of parametric algebraic systems: this variety (and the connected components of its complement) defines a partition of the parameter space into regions above which the solutions are numerically stable and topologically simple. Several directions have been explored since the last century, ranging from pure numerical computations to infallible exact ones, depending on the needs (global topology, local topology, simple drawing, etc.). For plane real algebraic curves, one can mention the cylindrical algebraic decomposition 69, grid methods (e.g. the marching squares algorithm), subdivision methods, etc.
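As a baseline illustration of the grid methods mentioned above, matplotlib's contour plot extracts the zero level set of a bivariate polynomial by a marching-squares scheme. This is fast but uncertified, which is precisely the kind of output that subdivision- or CAD-based methods aim to certify:

```python
import numpy as np
import matplotlib.pyplot as plt

# Implicit plane curve: the nodal cubic  y^2 - x^2 (x + 1) = 0.
f = lambda x, y: y**2 - x**2 * (x + 1)

xs, ys = np.meshgrid(np.linspace(-1.5, 1.5, 800),
                     np.linspace(-1.5, 1.5, 800))
plt.contour(xs, ys, f(xs, ys), levels=[0])
plt.gca().set_aspect("equal")
plt.show()
# Near the singular point (0, 0), the marching-squares output is not
# trustworthy: certifying the local topology requires symbolic methods.
```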
As mentioned above, we focus on curves and surfaces coming from the study of parametric systems. They mostly come from some elimination process, they are highly (numerically) unstable (a small deformation of the coefficients may drastically change the topology of the curve), and we are mostly interested in getting qualitative information about their complement in the parameter space.
For this work, we are associated with the GAMBLE EPI (Inria Nancy – Grand Est) with the aim of developing computational techniques for the study, plotting, and computation of the topology of such objects. In this collaboration, OURAGAN focuses on CAD-like methods while GAMBLE develops numerical strategies (that can also apply to non-algebraic curves). OURAGAN's work involves the development of effective methods for the resolution of algebraic systems with 2 or 3 variables 55, 111, 56, 57, which are basic engines for computing the topology 123, 76 and/or plotting.
Systems of functional equations or simply functional systems are systems whose unknowns are functions, such as systems of ordinary or partial differential equations, of differential time-delay equations, of difference equations, of integro-differential equations, etc.
Numerical aspects of functional systems, especially differential systems, have been widely studied in applied mathematics due to the importance of numerical simulation issues.
Complementary approaches, based on algebraic methods, usually come upstream of, or help, the numerical simulation of functional systems. These methods also tackle a different range of questions and problems, such as algebraic preconditioning, elimination and simplification, completion to formal integrability or involution, computation of integrability conditions and compatibility conditions, index reduction, reduction of variables, choice of adapted coordinate systems based on symmetries, computation of first integrals of motion, conservation laws and Lax pairs, Liouville integrability, study of the (asymptotic) behavior of solutions at a singularity, etc. Although not yet very popular in applied mathematics, these theories have long been studied in fundamental mathematics and were developed by Lie, Cartan, Janet, Ritt, Kolchin, Spencer, etc. 101, 109, 110, 113, 138, 126.
Over the past years, some of these algebraic approaches to functional systems have been investigated from an algorithmic viewpoint, mostly driven by applications to engineering sciences such as mathematical systems theory and control theory. We have played a role in these effective developments, especially in the direction of an algorithmic approach to so-called algebraic analysis 109, 110, 51, a mathematical theory developed by the Japanese school of Sato, which studies linear differential systems by means of both algebraic and analytic methods. To develop an effective approach to algebraic analysis, we first have to make standard results on rings of functional operators, module theory, homological algebra, algebraic geometry, sheaf theory, category theory, etc., algorithmic, and to implement them in computer algebra systems. Based on elimination theory (Gröbner or Janet bases 101, 68, 140, differential algebra 53, 82, Spencer's theory 126, etc.), in 4, 5, we have initiated such a computational algebraic analysis approach for general classes of functional systems (and not only for holonomic systems as done in the literature of computer algebra 68). Based on this effective approach to algebraic analysis, the parametrizability problem 4, the reduction and (Serre) decomposition problems 5, the equidimensional decomposition 128, Stafford's famous theorems for the Weyl algebras 131, etc., have been studied, and solutions have been implemented in Maple, Mathematica, and GAP 67, 5. But these results are only first steps towards computational algebraic analysis, its implementation in computer algebra systems, and its applications to mathematical systems theory, control theory, signal processing, mathematical physics, etc.
Outside applications, which can clearly be seen as transversal activities, our development directions are linked at several levels: shared computable objects, computational strategies, and transversal research directions.
Sharing basic algebraic objects.
As seen above, it is well known that elimination theory for functional systems is deeply intertwined with elimination theory for polynomial systems, so that topology in small dimension and applications in control theory, signal theory, and robotics naturally share a large set of computable objects developed in our project-team.
Performing efficient basic arithmetic operations in number fields is also a key ingredient of most of our algorithms, in number theory as well as in topology in small dimension or, more generally, in the use of roots of polynomial systems. In particular, finding good representations of number fields leads to the same computational problems as working with roots of polynomial systems by means of triangular systems (towers of number fields) or rational parameterizations (a unique number field). Any progress in one direction will probably have direct consequences for almost all the problems we want to tackle.
Elimination theory is also deeply connected to Gröbner bases and rewriting, which are themselves linked to Garside theory and Koszul duality, establishing a continuum with the effective methods studied in algebraic topology.
Symbolic-numeric strategies.
Several general low-level tools are also shared, such as the use of approximate arithmetic to speed up certified computations. Sometimes these can also lead to improvements for a different purpose (for example, computations over the rationals, heavily used in geometry, can often be performed in parallel by combining computations in finite fields with fast Chinese remaindering and modular evaluations).
As a simple example of this sharing of tools and strategies, the use of approximate arithmetic is common to our work on LLL (used in the evaluation of the security of cryptographic systems), to resolutions of real-world algebraic systems (used in our applications in robotics, control theory, and signal theory), to computations of signs of trigonometric expressions used in knot theory, to certified evaluations of dilogarithm functions on an algebraic variety for the computation of volumes of representations in our work in topology, and to numerical integration and computations of periods.
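A minimal sketch of the multi-modular strategy evoked above: compute modulo several word-size primes independently (in parallel, in real implementations), then lift the result back to the integers by Chinese remaindering.

```python
from functools import reduce

def crt(residues, moduli):
    """Chinese remainder reconstruction for pairwise coprime moduli."""
    M = reduce(lambda a, b: a * b, moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # modular inverse (Python >= 3.8)
    return x % M, M

def symmetric_lift(x, M):
    """Representative in (-M/2, M/2]: recovers signed integer results."""
    return x - M if x > M // 2 else x

# Toy example: recover d = -1279 (e.g. an integer determinant) from its
# images modulo three primes whose product exceeds 2*|d|.
d = -1279
primes = [10007, 10009, 10037]
x, M = crt([d % p for p in primes], primes)
assert symmetric_lift(x, M) == d
```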
Transversal research directions. The study of the topology of complex algebraic curves is central in the computation of periods of algebraic curves (number theory), but also in the study of character varieties (topology in small dimension) as well as in control theory (stability criteria). Very few computational tools exist for that purpose, and they mostly convert the problem to one about varieties over the reals (we can then recycle our work in computational geometry).
As for real algebraic curves, finding a way to describe the topology (an equivalent of the graph obtained in the real case) or computing certified drawings (in the case of a complex plane curve, a useful drawing is the so-called associated amoeba) are central subjects for OURAGAN.
As mentioned in Section 3.3, the computation of the Mahler measure of an implicit algebraic curve is both a challenging problem in number theory and a new direction in topology. The basic formula requires the study of points of modulus 1, as for stability problems in control theory, and certified numerical evaluations of non-algebraic functions at algebraic points, as for many of our computations in topology in small dimension.
The development of basic computable objects is somehow on demand and depends on all the other directions. However, some critical computations are already known to be bottlenecks and are sources of constant efforts.
Computations with algebraic numbers appear in almost all our activities: when working with number fields in our work in algorithmic number theory, as well as in all the computations that involve the use of solutions of zero-dimensional systems of polynomial equations. Among the identified problems: finding good representations for single number fields (optimizing the size and degree of the defining polynomials); finding good representations for towers or products of number fields (typically, working with a tower or finding a unique good extension); efficiently computing in practice with number fields (using certified approximation vs. working with the formal description based on polynomial arithmetic). Significant efforts are currently devoted to understanding the various strategies by means of tight theoretical complexity studies 76, 121, 56, and many further efforts will be required to find the right representation for the right problem in practice. For example, for isolating critical points of plane algebraic curves, it is still unclear (at least, the theoretical complexity cannot settle it) whether an intermediate formal parameterization is more efficient than a triangular decomposition of the system, and it is still unclear whether these intermediate computations could be dominated in time by the certified final approximation of the roots.
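The tower-versus-single-extension alternative mentioned above can be made concrete with sympy: the tower Q(√2, √3) can be flattened into the single extension Q(α) with the primitive element α = √2 + √3, at the price of a defining polynomial of higher degree and larger coefficients, which is exactly the trade-off discussed here.

```python
from sympy import sqrt, minimal_polynomial, Symbol

x = Symbol("x")

# Tower: Q(sqrt(2), sqrt(3)), i.e. two quadratic extensions.
# Single extension: Q(alpha) with alpha = sqrt(2) + sqrt(3).
alpha = sqrt(2) + sqrt(3)
print(minimal_polynomial(alpha, x))   # x**4 - 10*x**2 + 1

# Same number field, but the defining polynomial now has degree 4 and
# larger coefficients: good representations balance degree and size.
```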
Concerning algorithmic number theory, the main problems we will be considering in the coming years are the following:
Some studies in this area will be driven by other directions; for example, the rigorous evaluation of non-algebraic functions on algebraic varieties might become central for some of our work on topology in small dimension (volumes of varieties, drawing of amoebas) and on control theory (approximations of discriminant varieties), which are our two main current sources of interesting problems. In the same spirit, a promising direction is to extend existing results on periods of algebraic curves to general curves and to higher-dimensional varieties. This aims at providing tools for integration on higher homology groups of algebraic curves, i.e. computing Gauss-Manin connections. It requires a good understanding of their topology, and more algorithmic tools for differential equations.
The brute-force approach to computable objects from topology in small dimension will not allow any significant progress. As explained above, the systems that arise from these problems are simply outside the range of doable computations. We still continue the work in this direction, with a four-fold approach, all four directions being deeply interrelated. First, we focus on a couple of especially meaningful (for the applications) cases, in particular the 3-dimensional manifold called the Whitehead link complement. At this point, we are able to make steps in the computation and describe part of the solutions 86, 98; we hope to be able to complete the computation using every piece of information to simplify the system. Second, we continue the theoretical work to understand more properties of these systems 83. These properties may prove how useful the resolution of such systems (or at least the extraction of meaningful information) is for the mathematical understanding. This approach is for example carried on by Falbel and his work on configurations of flags 87, 89. Third, we position ourselves as experts in the know-how of this kind of computation and as natural interlocutors for colleagues coming up with a question on such a computable object (see 95 and 98). This also allows us to push forward the kind of computation we actually do and to make progress in the direction of the second point. We are credible interlocutors because our team has the blend of theoretical knowledge and computational capabilities that grants effective resolution of the problems we are presented with. And last, we use the knowledge already acquired to pursue our theoretical study of the CR-spherical geometry 75, 88, 84.
Another direction of work is helping the community in experimental mathematics on new objects. It involves downsizing the systems we are looking at (for example, by going back to systems coming from hyperbolic geometry rather than CR-spherical geometry) and getting the most out of what we can compute, by studying new objects. An example of this research direction is the work of Guilloux on the volume function on deformation varieties. This is a real-analytic function defined on the varieties we specialize in computing. Being able to do effective computations with this function led first to a conjecture 97. Then, theoretical discussions around this conjecture led to a paper on a new approach to the Mahler measure of some 2-variable polynomials 96. In turn, this last paper gave a formula for the Mahler measure in terms of a function akin to the volume function applied at points of an algebraic variety whose coordinates have modulus 1. The OURAGAN team has the expertise to compute all the objects appearing in this formula, opening the way to another area of application. This area is deeply linked with number theory as well as with topology in small dimension. It requires all the tools at OURAGAN's disposal.
We will carry on the exhaustive search for the lexicographic degrees of the rational knots. They correspond to trigonal space curves, for which computations in the braid group B_3 are needed.
On the other hand, a natural direction would be: given an explicit polynomial space curve, determine the under/over nature of the crossings when projecting, draw it, and determine the known knot 16 it is isotopic to.
As mentioned above, the drawing of algebraic curves and surfaces is a critical action in OURAGAN, since it is a key ingredient in numerous developments. In some cases, one will need a fully certified study of the variety to decide the existence of solutions (for example, a region of a robot's parameter space with solutions to the DKP above, or deciding whether some variety crosses the unit polydisk for some stability problems in control theory); in some other cases, just a partial but certified approximation of a surface suffices (path planning in robotics, evaluation of non-algebraic functions over an algebraic variety for volumes of knot complements in the study of character varieties).
On the one hand, we will contribute to general tools like ISOTOP 17 under the supervision of the GAMBLE project-team and, on the other hand, we will propose ad hoc solutions by gluing some of our basic tools (problems of high degree in robust control theory). The priority is to provide a first software implementation of methods that match as closely as possible the latest complexity results we obtained for several (theoretical) algorithms computing the topology of plane curves.
A particular effort will be devoted to the resolution of overconstrained bivariate systems, which are useful for the study of singular points, and to polynomial systems in 3 variables in the same spirit: avoid the use of Gröbner bases and propose new algorithms with state-of-the-art complexity and good practical behavior.
In parallel, one will have to carefully study the drawing of graphs of non-algebraic functions over complex algebraic surfaces, in order to provide several tools useful for mathematicians working on topology in small dimension (a well-known example is the drawing of amoebas, a way of representing a complex curve on a sheet of paper).
We want to further develop our expertise in the computational aspects of algebraic analysis by continuing to develop effective versions of results of module theory, homological algebra, category theory and sheaf theory 141 which play important roles in algebraic analysis 51, 109, 110 and in the algorithmic study of linear functional systems. In particular, we shall focus on linear systems of integro-differential-constant/varying/distributed delay equations 127, 130 which play an important role in mathematical systems theory, control theory, and signal processing 127, 136, 132, 133.
The rings of integro-differential operators are far more complicated than the purely differential case (i.e. Weyl algebras) 15, due to the existence of zero-divisors, or the fact that they are coherent rings rather than noetherian rings 48. Therefore, we want to develop an algorithmic study of these rings. Following the direction initiated in 130 for the computation of zero divisors (based on the polynomial null spaces of certain operators), we first want to develop algorithms for the computation of left/right kernels and left/right/generalized inverses of matrices with entries in such rings, and to use these results in module theory (e.g. computation of syzygy modules, (shorter/shortest) free resolutions, split short/long exact sequences). Moreover, Stafford's results 142, algorithmically developed in 15 for rings of partial differential operators (i.e. the Weyl algebras), are known to still hold for rings of integro-differential operators. We shall study their algorithmic extensions. Our corresponding implementation will be extended accordingly.
Finally, within a computer algebra viewpoint, we shall continue to algorithmically study issues on rings of integro-differential-delay operators 127, 132 and their applications to the study of equivalences of differential constant/varying/distributed delay systems (e.g. Artstein's reduction, Fiagbedzi-Pearson's transformation) which play an important role in control theory.
The study of the security of asymmetric cryptographic systems comes as an application of the work carried out in algorithmic number theory and revolves around the development and the use of a small number of general-purpose algorithms (lattice reduction, class groups in number fields, discrete logarithms in finite fields, ...). For example, the computation of generators of principal ideals of cyclotomic fields can be seen as one of these applications, since these are used in a number of recent public-key cryptosystems.
The cryptographic community is currently very actively assessing the threat coming from the development of quantum computers. Indeed, such computers would permit tremendous progress on many number-theoretic problems, such as factoring or discrete logarithm computations, and would put the security of current cryptosystems at major risk. For this reason, there is a large global research effort dedicated to finding alternative methods for securing data. For example, NIST, the US standardization agency, has recently launched a standardization process around this issue. In this context, OURAGAN took part in the competition and submitted a candidate (which has not been selected) 46. This method is based on number-theoretic ideas involving a new presumably difficult problem concerning the Hamming distance of integers modulo large Mersenne numbers.
Algebraic computations have been used extensively in robotics, especially in kinematics, since the last quarter of the 20th century 100. For example, one can find algebraic proofs of the 40 possible solutions to the direct kinematics problem 122 for Stewart platforms, and companion experiments based on Gröbner basis computations 90. On the one hand, hard general kinematics problems involve too many variables for pure algebraic methods to replace existing numerical or semi-numerical methods everywhere and every time; on the other hand, global algebraic studies allow one to propose, for some quite large classes, exhaustive classifications that cannot be reached by other methods.
Robotics is a long-standing collaborative work with LS2N (Laboratory of Digital Sciences of Nantes). Work has recently focused on the offline study of mechanisms, mostly parallel ones, and their singularities, or at least some types of singularities (cuspidal robots 145).
For most parallel or serial manipulators, pose variables and joint variables are linked by algebraic equations and thus lie on an algebraic variety. The two kinematics problems (the direct kinematics problem, DKP, and the inverse kinematics problem, IKP) consist in studying the preimage of the projection of this algebraic variety onto a subset of the unknowns. Solving the DKP amounts to computing the possible positions for a given set of joint variable values, while solving the IKP amounts to computing the possible joint variable values for a given position. Algebraic methods have been used extensively in several situations for studying parallel and serial mechanisms, but in the end their use remains quite marginal in the design process. Cylindrical algebraic decomposition, coupled with variable elimination by means of Gröbner-basis computations, can be used to model the workspace and the joint space, and to compute singularities. On the one hand, such methods suffer immediately when the number of parameters increases or when working with imprecise data; on the other hand, when the problem can be handled, they may provide full and exhaustive classifications.
The tools we use in that context 66, 65, 102, 104, 103 depend mainly on the resolution of systems depending on parameters, and therefore on the study of the resulting curves or surfaces (2 or 3 parameters), thus joining our theme of computational geometry.
Certain problems studied in mathematical systems theory and control theory can be better understood and finely studied by means of algebraic structures and methods. Hence, the rich interplay between algebra, computer algebra, and control theory has a long history.
For instance, the first main paper on Gröbner bases, written by their creator Buchberger, was published in Bose's book 52 on control theory of multidimensional systems. Moreover, the differential algebra approach to nonlinear control theory (see 79, 78 and the references therein) was a major motivation for the algorithmic study of differential algebra 53, 82. Finally, the behaviour approach to linear systems theory 146, 124 advocates for an algorithmic study of algebraic analysis (see Section 2.1.4). More generally, control theory is porous to computer algebra, since one finds algebraic criteria of all kinds in the literature, even if the control theory community has very little knowledge of computer algebra.
OURAGAN has a strong interest in the computer algebra aspects of mathematical systems theory and control theory related to both functional and polynomial systems, particularly in the direction of robust stability analysis and robust stabilization problems for multidimensional systems 52, 124 and infinite-dimensional systems 72 (such as differential time-delay systems).
Let us shortly state a few points of our recent interests in this direction.
In control theory, stability analysis of linear time-invariant control systems is based on the famous Routh-Hurwitz criterion (late 19th century) and its relation with Sturm sequences and the Cauchy index. Thus, stability tests originally involved only tools for univariate polynomials 108. While extending those tests to multidimensional systems or differential time-delay systems, one has to tackle multivariate problems recursively with respect to the variables 52. Recent works use a mix of symbolic/numeric strategies, Linear Matrix Inequalities (LMI), sums of squares, etc. But still, very few practical experiments currently involve certified algebraic computations based on general solvers for polynomial equations. We have recently started to study certified stability tests for multidimensional systems or differential time-delay systems, with an important observation: with a correct modeling, some recent algebraic methods can make such certified tests tractable.
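The univariate starting point can be made concrete. Below is a small sketch of the Routh-Hurwitz test via the leading principal minors of the Hurwitz matrix, in exact arithmetic with sympy; the multivariate, parametric versions discussed above are of course far beyond this.

```python
import sympy as sp

def hurwitz_stable(coeffs):
    """Routh-Hurwitz test for p(s) = a0 s^n + ... + an with a0 > 0:
    all roots lie in the open left half-plane iff all leading principal
    minors of the Hurwitz matrix are positive."""
    a = [sp.Rational(c) for c in coeffs]
    n = len(a) - 1
    H = sp.zeros(n, n)
    for i in range(n):
        for j in range(n):
            k = 2 * j - i + 1           # coefficient index a_k at (i, j)
            if 0 <= k <= n:
                H[i, j] = a[k]
    return all(H[:k, :k].det() > 0 for k in range(1, n + 1))

# s^3 + 2 s^2 + 3 s + 1 is stable; s^3 - s + 1 is not.
print(hurwitz_stable([1, 2, 3, 1]))     # True
print(hurwitz_stable([1, 0, -1, 1]))    # False
```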
The structural stability of multidimensional (n-D) systems is one of the problems studied along these lines.
The rich interplay between control theory, algebra, and computer algebra is also well illustrated with our recent work on robust stabilization problems for multidimensional and finite/infinite-dimensional systems 54, 129, 134, 137, 135, 136.
Due to numerous applications (e.g. sensor networks, mobile robots), the localization of sources and sensors has been intensively studied in the signal processing literature. The anchor position self-calibration problem is a well-known problem which consists in estimating the positions of both the moving sources and a set of fixed sensors (anchors) when only the distance information between points from the different sets is available. The position self-calibration problem is a particular case of the Multidimensional Unfolding (MDU) problem for the Euclidean space of dimension 3. In the signal processing literature, this problem is attacked by means of optimization techniques (see 71 and the references therein). Based on computer algebra methods for polynomial systems, we have recently developed a new approach for the MDU problem which yields closed-form solutions and a very efficient algorithm for the estimation of the positions 73, based only on linear algebra techniques. This first result, obtained in collaboration with Dagher (Inria Chile) and Zheng (DEFROST, Inria Lille), yielded a recent patent 74. It advocates for the study of other localization problems based on the computational polynomial techniques developed in OURAGAN.
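For background, the classical (Torgerson) multidimensional scaling below recovers positions, up to a rigid motion, from a complete noiseless matrix of pairwise squared distances. The MDU setting of 73 is harder, since only source-to-anchor distances are available, but the linear-algebraic flavor is similar; this sketch is a textbook relative, not the closed-form algorithm of 73.

```python
import numpy as np

def classical_mds(D2, dim=3):
    """Classical MDS: points (up to rotation/translation/reflection)
    from the full matrix of pairwise squared distances D2."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ D2 @ J                    # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the top 'dim' eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

pts = np.random.rand(10, 3)
D2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
rec = classical_mds(D2)   # congruent to pts up to a rigid motion
```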
In collaboration with Safran Tech (Barau, Hubert) and Dagher (Inria Chile), a symbolic-numeric study of a new multi-carrier demodulation method 99 has recently been initiated. Gear fault diagnosis is an important issue in the aeronautics industry, since damage in a gearbox that is not detected in time can have dramatic effects on the safety of a plane. Since the vibrations of a spur gear can be modeled as a product of two periodic functions related to the gearbox kinematics, it is proposed to recover each function from the global signal by means of an optimal reconstruction problem which, based on Fourier analysis, can be rewritten in a structured form.
Our expertise on the algebraic parameter estimation problem, developed in the former Non-A project-team (Inria Lille), will be further developed. Following this work 91, the problem consists in estimating a set of parameters of a noisy signal; the corresponding methods are implemented in the Maple prototype NonA. The case of a general structured perturbation is still lacking.
No particular action this year.
Character varieties are studied in the team as an interesting algebraic object. This study is completed by an effort to understand the geometrical meaning of each point in some carefully chosen character varieties. One approach to this problem is the construction of geometric structures.
Another approach is the study of limit sets associated to such points and of their deformations when moving in the character variety. These are fractal objects in the 3-sphere.
A recent work of Sipasseuth, Plantard, and Susilo proposed to accelerate lattice-based signature verification and to compress public-key storage, at the cost of a precomputation on the public key. This first approach, which focused on a restricted type of key, did not cover most NIST candidates or most lattice representations in general. In 18, we first present a way to improve even further both their verification speed and their public-key compression capability, by using a generator of numbers that better suits the method's needs. We then generalize their framework to q-ary lattice schemes as well as to classical lattices using the Hermite Normal Form, improving their security and applicable scope, thus exhibiting potential trade-offs for accelerating lattice-based signature verification in general and for compressing the public key on the verifier's side for unstructured lattices.
Zero-knowledge proofs are an important tool for many cryptographic protocols and applications.
The threat of a coming quantum computer motivates the research for new zero-knowledge proof techniques for (or based on) post-quantum cryptographic problems.
One of the few directions is code-based cryptography, for which the strongest problem is the syndrome decoding (SD) of random linear codes. This problem is known to be NP-hard, and its cryptanalysis state of the art has been stable for many years. A zero-knowledge protocol for this problem was pioneered by Stern in 1993. As a simple public-coin three-round protocol, it can be converted to a post-quantum signature scheme through the famous Fiat-Shamir transform. The main drawback of this protocol is its high soundness error of 2/3, which forces many repetitions to reach a target security level.
The finite field isomorphism (FFI) problem was introduced in PKC'18 as an alternative to average-case lattice problems (like LWE, SIS, or NTRU). As an application, the same paper used the FFI problem to construct a fully homomorphic encryption scheme. In 35, we prove that the decision variant of the FFI problem can be solved in polynomial time for any field characteristic
In 24, we study several explicit finite-index subgroups of the known complex hyperbolic lattice triangle groups, and show that some of them are neat, some have positive first Betti number, and some admit a homomorphism onto a non-abelian free group. For some lattice triangle groups, we determine the minimal index of a neat subgroup. Finally, we answer a question raised by Stover and describe an infinite tower of neat ball quotients, all with a single cusp.
In 25, we present a systematic effective method to construct coarse fundamental domains for the action of the Picard modular groups
In 26, we introduce novel mathematical and computational tools to develop a complete algorithm for computing the set of non-properness of polynomial maps in the plane. In particular, this set, which we call the Jelonek set, is a subset of
In 27, we present a method for solving two minimal problems for relative camera pose estimation from three views, which are based on three-view correspondences of (i) three points and one line, and the novel case of (ii) three points and two lines through two of the points. These problems are too difficult to be efficiently solved by state-of-the-art Gröbner basis methods. Our method is based on a new efficient homotopy continuation (HC) solver framework, MINUS, which dramatically speeds up previous HC solving by specializing HC methods to generic cases of our problems. We characterize their number of solutions and show with simulated experiments that our solvers are numerically robust and stable under image noise, a key contribution given the borderline intractable degree of nonlinearity of trinocular constraints. We show in real experiments that (i) SIFT feature location and orientation provide good enough point-and-line correspondences for three-view reconstruction and (ii) we can solve difficult cases with too few or too noisy tentative matches, where state-of-the-art structure-from-motion initialization fails.
In 21, we consider the algebraic parameter estimation problem for a class of standard perturbations. We assume
that the measurement
In 34, we expose some effective aspects of the algebra of linear ordinary integro-differential operators with polynomial coefficients. More precisely, we prove that the annihilator of an evaluation operator is a finitely generated ideal which can be explicitly characterized and computed. This is an advance towards the development of an effective elimination theory for ordinary integro-differential operators and an effective study of linear systems of integro-differential equations with polynomial coefficients.
In 22, the authors show how to construct coherent presentations (presentations by generators, relations, and relations among relations) of monoids admitting a right-noetherian Garside family. This resolves the question of finding a unifying generalisation of two distinct extensions of the construction of coherent presentations for spherical Artin-Tits monoids: to general Artin-Tits monoids, and to Garside monoids. The result is applied to some monoids which are neither Artin-Tits nor Garside.
Metabolic networks and their reconstruction have opened a new era in the analysis of metabolic and growth functions in various organisms. By modeling the reactions occurring inside an organism, metabolic networks provide the means to understand the underlying mechanisms that govern biological systems. Constraint-based approaches have been widely used for the analysis of such models and have led to intriguing geometry-oriented challenges. In this setting, sampling points uniformly from polytopes derived from metabolic models (flux sampling) provides a representation of the solution space of the model under various conditions. However, the polytopes that result from such models are of high dimension (of the order of thousands) and usually considerably skinny. Therefore, sampling uniformly at random from such polytopes calls for a novel algorithmic and computational framework specially tailored to the properties of metabolic models. In 19, we present a complete software framework to handle sampling in metabolic networks. Its backbone is a Multiphase Monte Carlo Sampling (MMCS) algorithm that unifies rounding and sampling in one pass, yielding both upon termination. It exploits an optimized variant of the Billiard Walk that enjoys faster arithmetic complexity per step than the original. We demonstrate the efficiency of our approach by performing extensive experiments on various metabolic networks. Notably, sampling on the most complicated human metabolic network accessible today, Recon3D, corresponding to a polytope of dimension 5335, took less than 30 hours. To the best of our knowledge, this is out of reach for existing software.
In 20, we introduce Reflective Hamiltonian Monte Carlo (ReHMC), an HMC-based algorithm to sample from a log-concave distribution restricted to a convex body. The random walk is based on incorporating reflections into the Hamiltonian dynamics, so that the support of the target density is the convex body. We develop an efficient open-source implementation of ReHMC and perform an experimental study on various high-dimensional datasets. The experiments suggest that ReHMC outperforms Hit-and-Run and Coordinate-Hit-and-Run regarding the time it needs to produce an independent sample, introducing practical truncated sampling in thousands of dimensions.
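As a reference point for these random walks, the classical Hit-and-Run step is easy to state: pick a uniform direction, intersect the line with the polytope, and move to a uniform point on the resulting segment. A minimal sketch (uniform target only, none of the rounding or ReHMC machinery of 19 and 20):

```python
import numpy as np

def hit_and_run(A, b, x, n_samples, rng=np.random.default_rng(0)):
    """Sample from the polytope {x : A x <= b}; x must start strictly inside."""
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=A.shape[1])
        d /= np.linalg.norm(d)                  # uniform direction
        Ad, slack = A @ d, b - A @ x
        tmax = np.min(slack[Ad > 0] / Ad[Ad > 0])   # chord endpoints:
        tmin = np.max(slack[Ad < 0] / Ad[Ad < 0])   # A(x + t d) <= b
        x = x + rng.uniform(tmin, tmax) * d
        samples.append(x)
    return np.array(samples)

# Unit cube [0, 1]^3 written as A x <= b:
A = np.vstack([np.eye(3), -np.eye(3)])
b = np.concatenate([np.ones(3), np.zeros(3)])
pts = hit_and_run(A, b, np.full(3, 0.5), 1000)
```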
In 29, we present a probabilistic algorithm to test if a homogeneous polynomial ideal
In 33, we propose novel randomized geometric tools to detect low-volatility anomalies in stock markets; a principal problem in financial economics. Our modeling of the (detection) problem results in sampling and estimating the (relative) volume of geodesically non-convex and non-connected spherical patches that arise by intersecting a non-standard simplex with a sphere. To sample, we introduce two novel Markov Chain Monte Carlo (MCMC) algorithms that exploit the geometry of the problem and employ state-of-the-art continuous geometric random walks (such as Billiard walk and Hit-and-Run) adapted on spherical patches. To our knowledge, this is the first geometric formulation and MCMC-based analysis of the volatility puzzle in stock markets. We have implemented our algorithms in C++ (along with an R interface) and we illustrate the power of our approach by performing extensive experiments on real data. Our analyses provide accurate detection and new insights into the distribution of portfolios’ performance characteristics. Moreover, we use our tools to show that classical methods for low-volatility anomaly detection in finance form bad proxies that could lead to misleading or inaccurate results.
In 36, we address univariate root isolation when the polynomial's coefficients lie in a multiple field extension. We consider a polynomial
The work in 31 aims to study a specific kind of parallel robot: Spherical Parallel Manipulators (SPM) that are capable of unlimited rolling. A focus is made on the kinematics of such mechanisms, especially taking into account uncertainties (e.g. on design and fabrication parameters, measures) and their propagation. Such considerations are crucial if we want to control our robot correctly without any undesirable behavior in its workspace (e.g. effects of singularities). In this paper, we consider two different approaches to study the kinematics and the singularities of the robot of interest: symbolic and semi-numerical. By doing so, we can compute a singularity-free zone in the workspace and joint space, considering given uncertainties on the parameters. In this zone, we can use any control law to inertially stabilize the upper platform of the robot.
The objective of our agreement with WATERLOO MAPLE INC. is to promote software developments to which we actively contribute.
On the one hand, WMI provides manpower, software licenses, technical support (development, documentation and testing) for an inclusion of our developments in their commercial products. On the other hand, OURAGAN offers perpetual licenses for the use of the concerned source code.
As past results of this agreement, one can cite our C library RS for the computation of the real solutions of zero-dimensional systems, as well as our collaborative development around the Maple package DV for solving parametric systems of equations.
For this term, the agreement covers algorithms developed in areas including but not limited to: 1) solving of systems of polynomial equations, 2) validated numerical polynomial root finding, 3) computational geometry, 4) curves and surfaces topology, 5) parametric algebraic systems, 6) cylindrical algebraic decompositions, 7) robotics applications.
In particular, it covers our collaborative work with some of our partners, especially the Gamble Project-Team - Inria Nancy Grand Est.
ANR JCJC GALOP (Games through the lens of ALgebra and OPtimization)
Coordinator: Elias Tsigaridas
Duration: 2018 – 2023
GALOP is a Young Researchers (JCJC) project with the purpose of extending the limits of state-of-the-art algebraic tools in computer science, especially in stochastic games. It brings original and innovative algebraic tools, based on symbolic-numeric computing, that exploit the geometry and the structure of the problems and complement the state of the art. We support our theoretical tools with highly efficient open-source software for solving polynomials. Using our algebraic tools, we study the geometry of the central curve of (semi-definite) optimization problems. The algebraic tools and our results from the geometry of optimization pave the way to introduce algorithms and precise bounds for stochastic games.
ANR JCJC SHoCoS (Structure and Homotopy of Configuration Spaces)
Coordinator: Najib Idrissi (Univ. Paris Cité, IMJ-PRG)
Participant: Yves Guiraud
Duration: 2022 – 2026
This is a project of fundamental research in mathematics, specifically algebraic topology, homotopical algebra, and quantum algebra. It is concerned with configuration spaces, which consist of finite sequences of pairwise distinct points in a manifold. Over the past couple of decades, strides have been made in the study and computation of the homotopy types of configuration spaces, i.e., their shape up to continuous deformation. These advances were possible thanks to the rich structure of configuration spaces, which comes from the theory of operads. Moreover, a new theory, factorization homology, allowed the use of configuration spaces to compute topological field theories, topological invariants of manifolds inspired by physics. Our purpose is to exploit the full operadic structure of configuration spaces to obtain new kinds of stabilizations in the homotopy types of configuration spaces, and to use this stability to effectively compute topological field theories from deformation quantization.
LOCUS (non‐Linear geOmetriC compUting at Scale) Inria Exploratory Action
Coordinator: Elias Tsigaridas
Duration: 2022 – 2025
Summary: LOCUS shapes a novel theoretical, algorithmic, and computational framework at the intersection of computational algebra, high dimensional geometric and statistical computing, and optimization. It focuses on sampling and integrating in convex bodies, algorithms for convex optimization, and applications in structural biology. It aims to deliver effective theoretical algorithms and efficient open source software for the problems of interest.
Réal (Réécriture algébrique, i.e. algebraic rewriting) Inria Exploratory Action
Coordinator: Yves Guiraud
Duration: 2022 – 2025
Summary: Rewriting is a branch of computer algebra consisting in transforming mathematical expressions according to admissible rules. Examples range from elementary situations, such as a remarkable identity like (a + b)^2 = a^2 + 2ab + b^2, to more elaborate algebraic settings.
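A toy instance in this spirit (a hypothetical example, not taken from the project itself): the single rule ba → ab is a convergent rewriting system presenting the free commutative monoid on {a, b}; normal forms are the sorted words, and computing them is just repeated rule application.

```python
RULES = [("ba", "ab")]   # presentation of the free commutative monoid on {a, b}

def normal_form(word, rules):
    """Apply rules until no left-hand side occurs. Termination is guaranteed
    here: each rewriting step removes one inversion of the word."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
    return word

assert normal_form("babba", RULES) == "aabbb"
# Two words represent the same monoid element iff they share a normal form.
```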
The Réal project proposes to explore the connections between rewriting and algebra. The aim is to understand the algebraic foundations of rewriting, to integrate similar computation mechanisms known in algebra, and to develop new computational tools with a view to applications in three areas of mathematics: combinatorial and higher algebra, group and representation theory, and the study of algebraic systems and varieties.