Large distributed infrastructures pervade our society. Numerical simulations form the basis of computational sciences, and high-performance computing infrastructures have become scientific instruments with roles similar to those of test tubes or telescopes. Cloud infrastructures are used by companies so intensely that even the shortest outage quickly incurs losses of several million dollars. Every citizen also relies on (and interacts with) such infrastructures via complex wireless mobile embedded devices whose nature is constantly evolving. In this way, the advent of digital miniaturization and interconnection has enabled our homes, power stations, cars and bikes to evolve into smart grids and smart transportation systems that should be optimized to fulfill societal expectations.
Our dependence on, and intense usage of, such gigantic systems leads to very high expectations in terms of performance. Indeed, we strive for low-cost and energy-efficient systems that seamlessly adapt to changing environments that can only be accessed through uncertain measurements. Such digital systems also have to take into account the users' profiles and expectations to share resources efficiently, fairly, and in an online way. Analyzing, designing and provisioning such systems has thus become a real challenge.
Such systems are characterized by their ever-growing size, their intrinsic heterogeneity and distributedness, their user-driven requirements, and an unpredictable variability that renders them essentially stochastic.
In such contexts, many of the former design and analysis
hypotheses (homogeneity, limited hierarchy, omniscient view,
optimization carried out by a single entity, open-loop
optimization, user outside of the picture) have become obsolete, which
calls for radically new approaches. Properly studying such systems
requires a drastic rethinking of fundamental aspects regarding the system's
observation (measure, trace, methodology, design of experiments),
analysis (modeling, simulation, trace analysis and visualization),
and optimization (distributed, online, stochastic).
The goal of the POLARIS project is to contribute to the understanding of the performance of very large scale
distributed systems by applying ideas from diverse research fields and application domains.
We believe that studying all these different aspects at once, without restricting ourselves to specific systems, is the key to pushing forward our understanding of such challenges and to proposing innovative solutions.
This is why we intend to investigate problems arising from application
domains as varied as large computing systems, wireless networks, smart
grids and transportation systems.
The members of the POLARIS project cover a very wide spectrum of expertise in performance evaluation and modeling, distributed optimization, and the analysis of HPC middleware, and they have worked extensively on all of these topics.
AI and learning are now everywhere; let us clarify how our research activities are positioned with respect to this trend.
A first line of research in POLARIS is devoted to the use of statistical learning techniques (Bayesian inference) to model the expected performance of distributed systems, to build aggregated performance views, to feed simulators of such systems, and to detect anomalous behaviors.
In a distributed context, it is also essential to design systems that can seamlessly adapt to the workload and to the evolving behavior of their components (users, resources, network). Obtaining faithful information on the dynamics of the system can be particularly difficult, which is why it is generally more efficient to design systems that dynamically learn the best actions to play through trial and error. A key characteristic of the work in the POLARIS project is to regularly leverage game-theoretic modeling to handle situations where the resources or the decisions are distributed among several agents, or where a centralized decision maker has to adapt to strategic users.
An important research direction in POLARIS is thus centered on reinforcement learning (multi-armed bandits, Q-learning, online learning) and active learning in environments with one or several of the following features: lack of coordination, an uncertain world, information delays, and limited feedback.
As a side effect, many of the insights gained can often be used to dramatically improve the scalability and performance of implementations of more standard machine learning and deep learning techniques on supercomputers.
The POLARIS members are thus particularly interested in the design and analysis of adaptive learning algorithms for multi-agent systems, i.e., agents that seek to progressively improve their performance on a specific task. The resulting algorithms should not only learn an efficient (Nash) equilibrium, but should also be capable of doing so quickly (low regret), even when facing the difficulties associated with a distributed context (lack of coordination, uncertain world, information delays, limited feedback, …).
In the rest of this document, we describe in detail our new results in the above areas.
Evaluating the scalability, robustness, energy consumption and performance of large infrastructures such as exascale platforms and clouds raises severe methodological challenges. The complexity of such platforms mandates empirical evaluation, but direct experimentation via application deployment on a real-world testbed is often limited by the few platforms available at hand, and is sometimes even impossible (cost, access, early stages of the infrastructure design, etc.). Furthermore, such experiments are costly and difficult to control, and therefore difficult to reproduce. Although many of these digital systems have been built by humans, they have reached such a level of complexity that we are no longer able to study them like ordinary artificial systems, and we have to deal with the same kind of experimental issues as the natural sciences. The development of a sound experimental methodology for the evaluation of resource management solutions is among the most important ways to cope with the growing complexity of computing environments. Although computing environments come with their own specific challenges, we believe such observation problems should be addressed by borrowing good practices and techniques developed in many other domains of science, in particular (1) predictive simulation, (2) trace analysis and visualization, and (3) the design of experiments.
Large computing systems are particularly complex to understand because of the interplay between their discrete nature (originating from deterministic computer programs) and their stochastic nature (emerging from the physical world, long distance interactions, and complex hardware and software stacks). A first line of research in POLARIS is devoted to the design of relatively simple statistical models of key components of distributed systems and their exploitation to feed simulators of such systems, to build aggregated performance views, and to detect anomalous behaviors.
Unlike direct experimentation via application deployment on a real-world testbed, simulation enables fully repeatable and configurable experiments that can often be conducted quickly for arbitrary hypothetical scenarios. In spite of these promises, current simulation practice is often not conducive to obtaining scientifically sound results. To date, most simulation results in the parallel and distributed computing literature are obtained with simulators that are ad hoc, unavailable, undocumented, and/or no longer maintained. As a result, most published simulation results build on throw-away (short-lived and non-validated) simulators that are specifically designed for a particular study, which prevents other researchers from building upon them. There is thus a strong need for recognized simulation frameworks with which simulation results can be reproduced, further analyzed and improved.
Many simulators of MPI applications have been developed by renowned HPC groups (e.g., at SDSC 105, BSC 39, UIUC 114, Sandia Nat. Lab. 112, ORNL 40 or ETH Zürich 74), but most of them build on restrictive network and application modeling assumptions that generally prevent them from faithfully predicting execution times, which limits the use of simulation to the indication of gross trends at best.
The SimGrid simulation toolkit, whose development started more than 20 years ago at UCSD, is a renowned project which has gathered more than 1,700 citations and has supported the research of at least 550 articles. The most important contribution of POLARIS to this project in recent years has been to improve the quality of SimGrid to the point where it can be used effectively on a daily basis by practitioners to accurately reproduce the dynamics of real HPC systems.
In particular, SMPI 48, a simulator based on SimGrid that runs unmodified MPI applications written in C/C++ or FORTRAN, has become a unique tool that makes it possible to faithfully study particularly complex scenarios, such as a legacy geophysics application suffering from spatial and temporal load-balancing problems 78, 77, or the HPL benchmark 46, 47. We have shown that the performance (both execution time and energy consumption 73) predicted by our simulations is systematically within a few percent of real experiments, which makes it possible to reliably tune the applications at very low cost. This capability has also been leveraged (through StarPU-SimGrid) to study complex, modern task-based applications running on heterogeneous sets of hybrid (CPUs + GPUs) nodes 92. The phenomena studied through this approach would be particularly difficult to investigate with real experiments, yet they correspond to real problems of these applications. Finally, SimGrid is also heavily used through BatSim, a batch simulator developed in the DATAMOVE team and built on SimGrid, to investigate the performance of machine learning strategies in a batch scheduling context 81, 115.
Many monolithic visualization tools have been developed by renowned HPC groups over the past decades (e.g., BSC 96, Jülich and TU Dresden 91, 42, UIUC 72, 100, 76 and ANL 113), but most of these tools build on the classical information visualization mantra 102: first present an overview of the data, possibly by plotting everything if computing power allows, then let users zoom and filter, providing details on demand. However, in our context, the amount of data comprised in such traces is several orders of magnitude larger than the number of pixels on a screen, and displaying even a small fraction of the trace leads to harmful visualization artifacts. Such traces are typically made of events that occur at very different time and space scales and originate from different sources, which hinders classical approaches, especially when the application structure departs from classical MPI programs with a BSP/SPMD structure. In particular, modern HPC applications that build on a task-based runtime and run on hybrid nodes are particularly challenging to analyze. Indeed, the underlying task graph is dynamically scheduled to avoid spurious synchronizations, which prevents classical visualizations from exploiting and revealing the application structure.
In 56, we explain how modern data analytics tools can be used to build, from heterogeneous information sources, custom, reproducible and insightful visualizations of task-based HPC applications at a very low development cost in the StarVZ framework.
By specifying and validating statistical models of the performance of HPC applications and systems, we manage to identify when their behavior departs from what is expected and to detect performance anomalies. This approach was first applied to state-of-the-art linear algebra libraries in 56 and more recently to a sparse direct solver 89. In both cases, we were able to identify and fix several non-trivial anomalies that had not been noticed even by the application and runtime developers.
Finally, these models not only reveal when applications depart from what is expected, but also summarize the execution by focusing on its most important features, which is particularly useful when comparing two executions.
Part of our work is devoted to the control of experiments on both classical (HPC) and novel (IoT/Fog in a smart home context) infrastructures. To this end, we heavily rely on experimental testbeds such as Grid'5000 and FIT IoT-LAB, which can be well controlled, but real experiments remain quite resource-consuming. Design of experiments has been successfully applied in many fields (e.g., agriculture, chemistry, industrial processes) where experiments are considered expensive. Building on concrete use cases, we explore how Design of Experiments and Reproducible Research techniques can be used to (1) design transparent auto-tuning strategies for scientific computation kernels 41, 101; (2) set up systematic performance non-regression tests on Grid'5000 (450 nodes over 1.5 years) that detected many abnormal events (related to BIOS and system upgrades, cooling, faulty memory and power instability) with a significant effect on the nodes, from subtle performance changes of 1% to much more severe degradations of more than 10%, which had gone unnoticed by both the Grid'5000 technical team and its users; and (3) design and evaluate the performance of service provisioning strategies 50, 49 in Fog infrastructures.
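As an illustration of the statistical idea behind item (2), here is a minimal sketch (with synthetic numbers, not the actual Grid'5000 data or protocol) that flags runs drifting outside a simple control band built from a node's reference history:

```python
import numpy as np

def flag_regressions(reference, new_runs, z=3.0):
    """Flag measurements falling outside a z-sigma control band
    built from reference runs of the same node (control chart)."""
    mu, sigma = np.mean(reference), np.std(reference, ddof=1)
    return [(i, x) for i, x in enumerate(new_runs)
            if abs(x - mu) > z * sigma]

rng = np.random.default_rng(0)
reference = rng.normal(500.0, 1.0, size=60)  # synthetic GFlop/s history
new_runs = [500.2, 499.1, 494.8]             # last run: a ~1% drop
print(flag_regressions(reference, new_runs)) # -> the 494.8 run is flagged
```

The actual tests are of course more elaborate, but even such a simple scheme shows how a 1% drift can be separated from run-to-run noise once enough reference measurements are available.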
Stochastic models often suffer from the curse of dimensionality: their complexity grows exponentially with the number of dimensions of the system. At the same time, very large stochastic systems are sometimes easier to analyze: it can be shown that some classes of stochastic systems simplify as their dimension goes to infinity because of averaging effects such as the law of large numbers, or the central limit theorem. This forms the basis of what is called an asymptotic method, which consists in studying what happens when a system gets large in order to build an approximation that is easier to study or to simulate.
Within the team, the research that we conduct in this axis aims to foster the applicability of these asymptotic methods in new application areas. This leads us to apply classical methods to new problems, but also to develop new approximation methods that take into account special features of the systems we study (e.g., a moderate number of dimensions, transient behavior, random matrices). Typical applications are mean field methods for performance evaluation, distributed optimization, and, more recently, statistical learning. One originality of our work is to precisely quantify the error made by such approximations, which allows us to define refinement terms that lead to more accurate approximations.
Mean field approximation is a well-known technique in statistical physics, originally introduced to study systems composed of a very large number of particles. Nowadays, variants of this technique are widely applied in many domains: in game theory for instance (with the example of mean field games), but also to quantify the performance of distributed algorithms. Mean field approximation is often justified by showing that a system of $n$ interacting objects converges to its mean field approximation as $n$ goes to infinity; a natural question is then how large $n$ must be for the approximation to be accurate.
In 58, we give a partial answer to this question. We show that, for most of the mean field models used for performance evaluation, the error made when using the mean field approximation decreases as $1/n$, where $n$ is the number of objects. This result comes from the use of Stein's method, which allows one to quantify precisely the distance between two stochastic processes. Subsequently, in 61, we show that the constant in front of this $1/n$ term can be computed numerically, which yields a refined mean field approximation that is substantially more accurate.
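To make this concrete, here is a self-contained toy example (not one of the models studied in 58, 61): $N$ objects flip between OFF and ON, the ON rate depending on the current fraction $m$ of ON objects. The empirical steady-state mean approaches the mean field fixed point as $N$ grows, with an error that shrinks roughly like $1/N$:

```python
import numpy as np
rng = np.random.default_rng(42)
a, b = 0.3, 0.5   # OFF->ON rate: a + b*m ; ON->OFF rate: 1

def avg_fraction_on(N, t_end=500.0):
    """Gillespie simulation of the N-object system; returns the
    time-averaged fraction of ON objects after a warm-up period."""
    k, t, area, horizon = N // 2, 0.0, 0.0, 0.0
    while t < t_end:
        m = k / N
        up, down = (N - k) * (a + b * m), float(k)
        dt = rng.exponential(1.0 / (up + down))
        if t > t_end / 5:               # discard warm-up
            area += m * dt; horizon += dt
        t += dt
        k += 1 if rng.random() < up / (up + down) else -1
    return area / horizon

# Mean field fixed point m* solves (1 - m)(a + b m) = m:
m_star = min(r for r in np.roots([-b, b - a - 1, a]) if 0 <= r <= 1)
for N in (10, 100, 1000):
    print(N, abs(avg_fraction_on(N) - m_star))  # error shrinks ~ 1/N
```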
Mean field approximation is widely used in the performance evaluation community to analyze and design distributed control algorithms. Our contribution in this domain has covered mainly two applications: cache replacement algorithms and load balancing algorithms.
Cache replacement algorithms are widely used in content delivery networks. In 44, 65, 64, we show how mean field and refined mean field approximation can be used to evaluate the performance of list-based cache replacement algorithms. In particular, we show that such policies can outperform the classically used LRU algorithm. A methodological contribution of our work is that, when evaluating precisely the behavior of such a policy, the refined mean field approximation is both faster and more accurate than what could be obtained with a stochastic simulator.
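The following toy simulator (a simplified two-list policy, not the exact policies analyzed in 44, 65, 64) illustrates why segmenting the cache into lists can beat plain LRU under a skewed, Zipf-like popularity profile:

```python
import random
from collections import OrderedDict

random.seed(0)
CATALOG, CACHE, REQS = 10_000, 100, 200_000
weights = [1 / (i + 1) for i in range(CATALOG)]   # Zipf(1) popularity

def lru(requests, size):
    cache, hits = OrderedDict(), 0
    for x in requests:
        if x in cache:
            hits += 1; cache.move_to_end(x)
        else:
            cache[x] = True
            if len(cache) > size: cache.popitem(last=False)
    return hits / len(requests)

def two_list(requests, size):
    """Simplified list-based policy: newcomers enter a probationary
    list; a hit there promotes to the protected list (each half-size)."""
    prob, prot, hits, half = OrderedDict(), OrderedDict(), 0, size // 2
    for x in requests:
        if x in prot:
            hits += 1; prot.move_to_end(x)
        elif x in prob:
            hits += 1; del prob[x]; prot[x] = True
            if len(prot) > half: prot.popitem(last=False)
        else:
            prob[x] = True
            if len(prob) > half: prob.popitem(last=False)
    return hits / len(requests)

reqs = random.choices(range(CATALOG), weights=weights, k=REQS)
print("LRU     :", lru(reqs, CACHE))
print("two-list:", two_list(reqs, CACHE))
```

The probationary list filters out one-hit wonders, which is the intuition behind the gains of list-based policies on such workloads.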
Computing resources are often spread across many machines, and their efficient use requires the design of a good load balancing strategy to distribute the load among the available machines. In 37, 38, 36, we study two paradigms for designing asymptotically optimal load balancing policies in which a central broker sends tasks to a set of parallel servers. We show in 37, 36 that combining the classical round-robin allocation with an evaluation of task sizes can yield a policy with zero delay in the large-system limit. This policy is interesting because the broker does not need any feedback from the servers; on the other hand, it needs to estimate or know job durations, which is not always possible. A different approach is used in 38, where we consider a policy that does not need to estimate job durations but uses some feedback from the servers plus a memory of where jobs were sent. We show that this paradigm can also be used to design load balancing policies whose delay vanishes as the system size grows to infinity.
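The dispatching setting can be prototyped in a few lines; the sketch below (illustrative policies only, not those of 37, 38, 36) compares a blind random dispatcher, plain round-robin, and a size-aware policy in which the broker tracks each server's backlog purely from the sizes of the jobs it has dispatched, i.e., without server feedback:

```python
import random
random.seed(1)

def simulate(n_servers, policy, n_jobs=200_000, load=0.9):
    """Central broker dispatching jobs to FCFS servers;
    returns the mean waiting time (queueing delay) per job."""
    free_at = [0.0] * n_servers     # when each server's backlog empties
    t, wait, rr = 0.0, 0.0, 0
    for _ in range(n_jobs):
        t += random.expovariate(load * n_servers)   # Poisson arrivals
        size = random.expovariate(1.0)              # exponential job sizes
        if policy == "random":
            s = random.randrange(n_servers)
        elif policy == "round_robin":               # no size info needed
            s, rr = rr, (rr + 1) % n_servers
        else:  # "least_backlog": broker-side bookkeeping from job sizes
            s = min(range(n_servers), key=lambda i: free_at[i])
        start = max(t, free_at[s])
        wait += start - t
        free_at[s] = start + size
    return wait / n_jobs

for pol in ("random", "round_robin", "least_backlog"):
    print(pol, round(simulate(100, pol), 4))
```

Already at 100 servers, exploiting job sizes drives the mean waiting time close to zero, in line with the zero-delay results mentioned above.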
Various notions of mean field games were introduced in the years 2000-2010 in theoretical economics, engineering and game theory. A mean field game is a game in which an individual tries to maximize its utility while evolving in a population of other individuals whose behavior is not directly affected by that single individual. An equilibrium is a population dynamics under which a selfish individual would behave like the population. In 52, we develop the notion of discrete-space mean field games, which is more amenable to analysis than previously introduced notions of mean field games. This leads to two interesting contributions: mean field games are not always the limits of stochastic games as the number of players grows 51, and mean field games can be used to study how much vaccination should be subsidized to encourage people to adopt a socially optimal behavior 66.
Online learning concerns the study of
repeated decision-making in changing environments.
Of course, depending on the context, the words “learning” and “decision-making” may refer to very different things:
in economics, this could mean predicting how rational agents react to market drifts;
in data networks, it could mean adapting the way packets are routed based on changing traffic conditions;
in machine learning and AI applications, it could mean training a neural network or the guidance system of a self-driving car;
etc.
In particular, the changes in the learner's environment could be
either exogenous (that is, independent of the learner's decisions, such as the weather affecting the time of travel),
or endogenous (i.e., they could depend on the learner's decisions, as in a game of poker),
or any combination thereof.
However, the goal for the learner(s) is always the same:
to make more informed decisions that lead to better rewards over time.
The study of online learning models and algorithms dates back to the seminal work of Robbins, Nash and Bellman in the 1950s, and it has since given rise to a vigorous research field at the interface of game theory, control and optimization, with numerous applications in operations research, machine learning, and data science. In this general context, our team focuses on the asymptotic behavior of online learning and optimization algorithms, both single- and multi-agent: whether they converge, at what speed, and/or what type of non-stationary, off-equilibrium behaviors may arise when they do not.
The focus of POLARIS on game-theoretic and Markovian models of learning covers a set of specific challenges that dovetail in a highly synergistic manner with the work of other learning-oriented teams within Inria (like SCOOL in Lille, SIERRA in Paris, and THOTH in Grenoble), and it is an important component of Inria's activities and contributions in the field (which includes major industrial stakeholders like Google / DeepMind, Facebook, Microsoft, Amazon, and many others).
Our team's work on online learning covers both single- and multi-agent models; in the sequel, we present some highlights of our work structured along these basic axes.
In the single-agent setting, an important problem in the theory of Markov decision processes (i.e., discrete-time control processes with decision-dependent randomness) is the so-called "restless bandit" problem. Here, the learner chooses an action (or "arm") from a finite set, and the mechanism determining the action's reward changes depending on whether the action was chosen or not (in contrast to standard Markov problems, where the activation of an arm does not have this effect). In this general setting, Whittle conjectured, and Weber and Weiss proved, that Whittle's eponymous index policy is asymptotically optimal. However, the result of Weber and Weiss is purely asymptotic, and the rate of this convergence remained elusive for several decades. This gap was finally settled in a series of POLARIS papers 67, where we showed that Whittle indices (as well as other index policies) become optimal at a geometric rate under the same technical conditions used by Weber and Weiss to prove Whittle's conjecture, plus a technical requirement on the non-singularity of the fixed point of the mean field dynamics. We also proposed the first sub-cubic algorithm to compute the Whittle and Gittins indices. As for reinforcement learning in Markovian bandits, we have shown that Bayesian and optimistic approaches do not exploit the structure of Markovian bandits in the same way: while Bayesian learning has both a regret and a computational complexity that scale linearly with the number of arms, optimistic approaches all incur an exponential computation time, at least in their current versions 59.
In the multi-agent setting, our work has focused on the following fundamental question:
Does the concurrent use of (possibly optimal) single-agent learning algorithms
ensure convergence to Nash equilibrium in multi-agent, game-theoretic environments?
Conventional wisdom might suggest a positive answer to this question because of the following “folk theorem”:
under no-regret learning, the agents' empirical frequency of play converges to the game's set of coarse correlated equilibria.
However, the actual implications of this result are quite weak:
First, it concerns the empirical frequency of play and not the day-to-day sequence of actions employed by the players.
Second, it concerns coarse correlated equilibria which may be supported on strictly dominated strategies – and are thus unacceptable in terms of rationalizability.
These realizations prompted us to make a clean break with conventional wisdom on this topic,
ultimately showing that the answer to the above question is, in general, “no”:
specifically, 86, 84 showed that the (optimal) class of "follow-the-regularized-leader" (FTRL) learning algorithms leads to Poincaré recurrence even in simple, two-player zero-sum games (such as Matching Pennies).
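As a quick illustration of this failure mode (a toy sketch, not the analysis of 86, 84), one can run exponential weights (the entropic instance of FTRL) in Matching Pennies and watch the day-to-day strategies orbit the mixed equilibrium instead of converging to it:

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])  # Matching Pennies, player 1 payoffs

def exp_weights(T=50_000, eta=0.05):
    """Both players run exponential weights (entropic FTRL) on their
    cumulative payoffs; returns |x - 1/2| + |y - 1/2| along the run."""
    s1 = np.array([0.5, 0.0])  # slight asymmetry: not exactly at equilibrium
    s2 = np.zeros(2)
    dist = []
    for _ in range(T):
        x = np.exp(eta * (s1 - s1.max())); x /= x.sum()  # logit choice map
        y = np.exp(eta * (s2 - s2.max())); y /= y.sum()
        dist.append(abs(x[0] - 0.5) + abs(y[0] - 0.5))
        s1 += A @ y            # expected-payoff (full information) feedback
        s2 -= A.T @ x          # zero-sum game: player 2's payoff is -A
    return dist

d = exp_weights()
print("distance to (1/2, 1/2), early vs late:", round(d[100], 3), round(d[-1], 3))
# The trajectory keeps cycling around the equilibrium rather than converging.
```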
This negative result generated significant interest in the literature as it contributed in shifting the focus towards identifying which Nash equilibria may arise as stable limit points of FTRL algorithms and dynamics.
Earlier work by POLARIS on the topic 43, 87, 88 suggested that strict Nash equilibria
play an important role in this question.
This suspicion was recently confirmed in a series of papers 55, 71 where we established a sweeping negative result to the effect that mixed Nash equilibria are incompatible with no-regret learning.
Specifically, we showed that any Nash equilibrium which is not strict cannot be stable and attracting under the dynamics of FTRL, especially in the presence of randomness and uncertainty.
This result has significant implications for predicting the outcome of a multi-agent learning process because, combined with 87, it establishes the following far-reaching equivalence:
a state is asymptotically stable under no-regret learning if and only if it is a strict Nash equilibrium.
Going beyond finite games, this further raised the question of what type of non-convergent behaviors can be observed in continuous games – such as the class of stochastic min-max problems that are typically associated to generative adversarial networks (GANs) in machine learning. This question was one of our primary collaboration axes with EPFL, and led to a joint research project focused on the characterization of the convergence properties of zeroth-, first-, and (scalable) second-order methods in non-convex/non-concave problems. In particular, we showed in 75 that these state-of-the-art min-max optimization algorithms may converge with arbitrarily high probability to attractors that are in no way min-max optimal or even stationary – and, in fact, may not even contain a single stationary point (let alone a Nash equilibrium). Spurious convergence phenomena of this type can arise even in two-dimensional problems, a fact which corroborates the empirical evidence surrounding the formidable difficulty of training GANs.
The topics in this axis emerge from current social and economic questions rather than from a fixed set of mathematical methods. Accordingly, we have identified broad trends such as energy efficiency, fairness, privacy, and the growing number of new marketplaces. In addition, COVID has posed new questions that opened new paths of research with strong links to policy making.
Throughout these works, the focus of the team is on modeling aspects of the aforementioned problems, and obtaining strong theoretical results that can give high-level guidelines on the design of markets or of decision-making procedures. Where relevant, we complement those works by measurement studies and audits of existing systems that allow identifying key issues. As this work is driven by topics, rather than methods, it allows for a wide range of collaborations, including with enterprises (e.g., Naverlabs), policy makers, and academics from various fields (economics, policy, epidemiology, etc.).
Other teams at Inria cover some of the societal challenges listed here (e.g., PRIVATICS, COMETE) but rather in isolation. The specificity of POLARIS resides in the breadth of societal topics covered and of the collaborations with non-CS researchers and non-research bodies; as well as in the application of methods such as game theory to those topics.
As algorithmic decision-making became increasingly omnipresent in our daily lives (in domains ranging from credit to advertising, hiring, or medicine), it also became increasingly apparent that the outcome of algorithms can be discriminatory for various reasons. Since 2016, the scientific community working on the problem of algorithmic fairness has grown exponentially. In this context, we first worked on better understanding the extent of the problem through measurements on social networks 104. In particular, we showed that in advertising platforms, discrimination can arise from multiple internal processes that cannot be controlled, and we advocate measuring discrimination on the outcome directly. We then proposed solutions to guarantee fair representation in online public recommendations (aka trending topics on Twitter) 45, an application in which recommendations had been observed to be biased towards some demographic groups; our proposed solution draws an analogy between recommendation and voting and builds on existing works on fair representation in voting. Finally, we recently worked on better understanding the sources of discrimination, in the simple case of selection problems, and the consequences of fixing it. While most works attribute discrimination to implicit bias of the decision maker 80, we identified a fundamentally different source of discrimination: even in the absence of implicit bias in a decision maker's estimates of candidates' quality, the estimates may differ between the groups in their variance, that is, the decision maker's ability to precisely estimate a candidate's quality may depend on the candidate's group 54. We show that this differential variance leads to discrimination for two reasonable baseline decision makers (group-oblivious and Bayesian optimal). We then analyze the consequences of imposing fairness mechanisms such as demographic parity (or its generalization) on the selection utility; in particular, we identify cases where imposing fairness can improve utility. In 53, we also study similar questions in the two-stage setting, and derive the optimal selector and the "price of local fairness" one pays in utility by requiring that the interim stage be fair.
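The core phenomenon of 54 can be reproduced in a few lines (a minimal sketch with arbitrary parameters): two groups with identical true-quality distributions, but noisier estimates for one group, lead a group-oblivious top-k selector to skewed selection rates and qualities:

```python
import numpy as np
rng = np.random.default_rng(0)

N, RATE = 100_000, 0.05                  # candidates per group, selection rate
quality = rng.normal(0.0, 1.0, (2, N))   # identical true-quality distributions
noise_sd = np.array([0.2, 1.0])          # group 1 is estimated less precisely
estimate = quality + rng.normal(0.0, 1.0, (2, N)) * noise_sd[:, None]

k = int(2 * N * RATE)                    # group-oblivious: top-k overall
threshold = np.partition(estimate.ravel(), -k)[-k]
selected = estimate >= threshold
print("selection rate per group :", selected.mean(axis=1).round(3))
print("mean quality of selected :",
      [round(quality[g][selected[g]].mean(), 3) for g in range(2)])
```

With these (arbitrary) numbers, the noisier group is over-represented at the top of the ranking, yet its selected candidates have markedly lower true quality: discrimination arises without any bias in the estimates themselves.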
Online services in general, and social networks in particular, collect massive amounts of data about their users (both online and offline). It is critical that (i) the users' data is protected so that it cannot leak, and (ii) users can know what data the service holds about them and understand how it is used; this is the transparency requirement. In this context, we did two kinds of work. First, we studied social networks through measurement, using Facebook as a use case. We showed that its advertising platform, through the PII-based targeting option (PII: personally identifiable information), allowed attackers to discover some personal data of users 106. We also proposed an alternative design, valid for any system that offers PII-based targeting, and proved that it fixes the problem. We then audited the transparency mechanisms of the Facebook ad platform, specifically the "Ad Preferences" page that shows which interests the platform inferred about a user, and the "Why am I seeing this" button that gives some reasons why the user saw a particular ad. In both cases, we laid the foundations for defining the quality of explanations, and we showed that the explanations given lacked key desirable properties (they were incomplete and misleading; they have since been changed) 35. A follow-up work shed further light on the typical uses of the platform 34. In another work, we proposed an innovative protocol based on randomized withdrawal to protect the deletion privacy of public posts 90. Finally, in 62, we study an alternative data-sharing ecosystem where users can choose the precision of the data they give. We model it as a game and show that, if users are motivated to reveal data by a public-good component of the outcome's precision, then certain basic statistical properties (the optimality of generalized least squares in particular) no longer hold.
Market design operates at the intersection of computer science and economics and has become increasingly important as many markets are redesigned on digital platforms. Studying markets for commodities, in an ongoing project we evaluate how different fee models alter strategic incentives for both buyers and sellers. We identify two general classes of fees: for the first, strategic manipulation becomes infeasible as the market grows large, and agents therefore have no incentive to misreport their true valuation; for the second, strategic manipulation remains possible, and we show that in this case agents aim to maximally shade their bids. This has immediate implications for the design of such markets. By contrast, 85 considers a matching market where buyers and sellers have heterogeneous preferences over each other. Traders arrive at random, and the market maker, having limited information, aims to optimize when to open the market for a clearing event to take place. There is a tradeoff between thickening the market (to achieve better matches) and matching quickly (to reduce the waiting time of traders in the market). The tradeoff is made explicit for a wide range of underlying preferences. These works add to an ongoing effort to better understand and design markets 97, 82.
The COVID-19 pandemic confronted humanity with one of the defining challenges of its generation, and trans-disciplinary efforts have naturally been necessary to support decision making. In a series of articles 99, 95 we proposed Green Zoning: "green zones" (areas where the virus is under control, based on a uniform set of conditions) can progressively return to normal economic and social activity levels, and mobility between them is permitted; by contrast, stricter public health measures are in place in "red zones", and mobility between red and green zones is restricted. France and Spain were among the first countries to introduce green zoning in April 2020. The initial success of this proposal opened the way to a large amount of follow-up work analyzing and proposing various tools to effectively combat the pandemic (e.g., focus-mass testing 98 and a vaccination policy 93). In a joint work with a group of leading economists, public health researchers and sociologists, it was found that countries that opted to eliminate the virus fared better not only for public health, but also for the economy and civil liberties 94. Overall, this work has been characterized by close interactions with policy makers in France, Spain and the European Commission, as well as substantial activity in public discourse (via TV, newspapers and radio).
Our work on energy efficiency spanned multiple areas and applications, such as embedded systems and smart grids. Minimizing the energy consumption of embedded systems with real-time constraints is becoming more important for ecological as well as practical reasons, since batteries are becoming standard power supplies. Dynamically changing the speed of the processor is the most common and efficient way to reduce energy consumption 103. In fact, this is the reason why modern processors are equipped with Dynamic Voltage and Frequency Scaling (DVFS) technology 111. In a stochastic environment, with random job sizes and arrival times, combining hard deadlines and energy minimization via DVFS-based techniques is difficult, because enforcing hard deadlines requires considering worst cases that are hardly compatible with random dynamics. Nevertheless, progress has been made on these problems in a series of papers using constrained Markov decision processes, both on the theoretical side (proving the existence of optimal policies and exhibiting their structure 69, 67, 68) and on the experimental side (showing the gains of optimal policies over classical solutions 70).
In the context of a collaboration with Enedis and Schneider Electric (via the Smart Grid chair of Grenoble-INP), we also study the problem of using smart meters to optimize the behavior of electrical distribution networks. We made three kinds of contributions on this subject: (1) how to design efficient control strategies in such a system 107, 109, 108, (2) how to co-simulate an electrical network and a communication network 79, and (3) what is the performance of the communication protocol (PLC G3) used by the Linky smart meters 83.
Supercomputers typically comprise thousands to millions of multi-core
CPUs with GPU accelerators interconnected by complex interconnection
networks that are typically structured as an intricate hierarchy of
network switches. Capacity planning and management of such systems not only raise challenges in terms of computing efficiency but also in terms of energy consumption. Most legacy (SPMD) applications struggle
to benefit from such infrastructure since the slightest failure or
load imbalance immediately causes the whole program to stop or at best
to waste resources. To scale and handle the stochastic nature of
resources, these applications have to rely on dynamic runtimes that
schedule computations and communications in an opportunistic way. Such
evolution raises challenges not only in terms of programming but also
in terms of observation (complexity and dynamicity prevent experiment reproducibility, intrusiveness hinders large-scale data collection, ...) and analysis (dynamic and flexible application structures make classical visualization and simulation techniques ineffective and require building on ad hoc knowledge of the application structure).
Considerable interest has arisen from the seminal prediction that the use of multiple-input, multiple-output (MIMO) technologies can lead to substantial gains in information throughput in wireless communications, especially when used at a massive level. In particular, by employing multiple inexpensive service antennas, it is possible to exploit spatial multiplexing in the transmission and reception of radio signals, the only physical limit being the number of antennas that can be deployed on a portable device. As a result, the wireless medium can accommodate greater volumes of data traffic without requiring the reallocation (and subsequent re-regulation) of additional frequency bands. In this context, throughput maximization in the presence of interference by neighboring transmitters leads to games with convex action sets (covariance matrices with trace constraints) and individually concave utility functions (each user's Shannon throughput); developing efficient and distributed optimization protocols for such systems is one of the core objectives of the research theme presented in Section 3.3.
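In its textbook form (our notation, not a verbatim formulation from a specific paper of the team), each user $k$ controls its input signal covariance matrix $\mathbf{Q}_k$ and seeks to maximize its Shannon throughput
\[
u_k(\mathbf{Q}_k;\mathbf{Q}_{-k})
  = \log\det\!\left(\mathbf{I} + \mathbf{H}_k^{\dagger}\,\mathbf{W}_{-k}^{-1}\,\mathbf{H}_k\,\mathbf{Q}_k\right),
\qquad
\mathbf{Q}_k \succeq 0,\quad \operatorname{tr}(\mathbf{Q}_k)\le P_k,
\]
where $\mathbf{H}_k$ is the user's channel matrix, $\mathbf{W}_{-k}$ the noise-plus-interference covariance induced by the other users, and $P_k$ the user's power budget. Each utility is concave in the user's own covariance matrix, and the feasible set is a compact convex spectrahedron: precisely the structure exploited by the distributed optimization protocols mentioned above.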
Another major challenge here stems from the fact that the efficient physical-layer optimization of wireless networks relies on perfect (or near-perfect) channel state information (CSI), on both the uplink and the downlink. Due to the vastly increased computational overhead of this feedback, especially in decentralized, small-cell environments, the continued transition to fifth-generation (5G) wireless networks is expected to go hand in hand with distributed learning and optimization methods that can operate reliably in feedback-starved environments. Accordingly, one of POLARIS' application-driven goals will be to leverage the algorithmic output of Theme 5 into a highly adaptive resource allocation framework for next-generation wireless systems that can effectively "learn in the dark", without requiring crippling amounts of feedback.
Smart urban transport systems and smart grids are two examples of collective adaptive systems: they consist of a large number of heterogeneous entities with decentralized control and varying degrees of complex autonomous behavior. We develop analysis tools to help reason about such systems. Our work relies on tools from fluid and mean field approximation to build decentralized algorithms that solve complex optimization problems. We focus on two problems: the decentralized control of electric grids, and capacity planning in vehicle-sharing systems to improve load balancing.
Social computing systems are online digital systems that use the personal data of their users at their core to deliver personalized services directly to those users. They are omnipresent and include, for instance, recommendation systems, social networks, online media and daily apps. Despite their interest and utility for users, these systems pose critical challenges regarding privacy, security, transparency, and the respect of certain ethical constraints such as fairness. Solving these challenges involves a mix of measurement and/or audit, to understand and assess the issues, and of modeling and optimization, to propose and calibrate solutions.
We try to keep the carbon footprint of the team as low as possible through a stricter laptop renewal policy and by reducing air travel (e.g., using videoconferencing, or sometimes refraining from publishing our research in conferences that would take place on the other side of the planet).
Our team does not train heavy ML models requiring substantial processing power, although some of us perform computer science experiments, mostly using the Grid'5000 platform. We keep this usage very reasonable and rely on cheaper alternatives (e.g., simulation) as much as possible.
Digital Transformation DU. He has published several articles on the issue of "usability" of artificial intelligence.
He is also co-creator of the sustainable AI transversal axis of the MIAI project in Grenoble. He connects his professional activity with public action (Lowtechlab de Grenoble, Université Autogérée, Arche des Innovateurs, etc.). Finally, he is a trainer for the "Fresque du Climat" and a member of Adrastia and FNE Isère.
Numérique et Sciences Informatiques, NSI : les fondamentaux MOOC. See section 11.2.1 for more details.
Victor Boone and Panayotis Mertikopoulos received a Spotlight at the NeurIPS conference for their article on the equivalence of dynamic and strategic stability under regularized learning in games 13.
SimGrid is a toolkit that provides core functionalities for the simulation of distributed applications in heterogeneous distributed environments. The simulation engine uses algorithmic and implementation techniques toward the fast simulation of large systems on a single machine. The models are theoretically grounded and experimentally validated. The results are reproducible, enabling better scientific practices.
Its models of networks, CPUs and disks are adapted to (data) grids, P2P systems, clouds, clusters and HPC, allowing multi-domain studies. It can be used either to simulate algorithms and prototypes of applications, to emulate real MPI applications through the virtualization of their communications, or to formally assess algorithms and applications that can run in the framework.
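As a flavor of the programmatic interface, here is a minimal simulation sketch using SimGrid's Python bindings (the platform file and host name are placeholders, and the exact API may vary across SimGrid versions):

```python
import sys
from simgrid import Actor, Engine, Host, this_actor

def worker():
    # Execute 1 Gflop on the host this actor was placed on, then log.
    this_actor.info("starting a 1 Gflop computation")
    this_actor.execute(1e9)
    this_actor.info("computation done")

if __name__ == "__main__":
    e = Engine(sys.argv)
    e.load_platform("platform.xml")                         # placeholder file
    Actor.create("worker", Host.by_name("node-0"), worker)  # placeholder host
    e.run()
```

The simulated duration of the `execute` call is derived from the speed of the host described in the platform file, which is where SimGrid's validated models come into play.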
The formal verification module explores all possible message interleavings in the application, searching for states violating the provided properties. We recently added the ability to assess liveness properties over arbitrary and legacy codes, thanks to a system-level introspection tool that provides a finely detailed view of the running application to the model checker. This can, for example, be leveraged to verify both safety and liveness properties on arbitrary MPI code written in C/C++/Fortran.
There were two major releases in 2023. On the modeling side, we released new plugins simulating the chiller, photovoltaic and battery components of Fog/Edge infrastructures, as well as the disk arrays used in disaggregated infrastructures. We improved the consistency of the simulation core: the new ActivitySet containers now make it easy to wait for the completion of a heterogeneous set of activities (computation, communication, I/O, etc.). The simulation of workflow and dataflow applications was also streamlined, with more examples, more documentation and fewer bugs. A new model of activities mixing disk I/O and network communication was introduced to efficiently simulate accesses to remote disks. In addition, much effort was put into profiling the software, leading to massive performance gains. We also pursued our efforts to improve the overall framework through bug fixes, code refactoring and other software-quality improvements. In particular, interfaces that had been deprecated for almost a decade were removed to ease the maintenance burden on our community.
Many improvements occurred on the model-checker side too. We dropped the old experiments toward stateful verification of liveness properties to focus the development on stateless verification of safety properties. Our tool is simpler internally and usable on all major operating systems. We modernized the reduction algorithms, implementing several recent algorithms from the literature and paving the way for the introduction of new ones. We also introduced a new module that can verify not only distributed applications, but also threaded applications.
marmoteCore is a C++ environment for modeling with Markov chains. It consists of a reduced set of high-level abstractions for constructing state spaces, transition structures and Markov chains (discrete-time and continuous-time). It provides the ability to construct hierarchies of Markov models, from the most general to the most particular, and to equip each level with specifically optimized solution methods.
This software was started within the ANR MARMOTE project: ANR-12-MONU-00019.
The tool accepts three model types:
- homogeneous population processes (HomPP)
- density dependent population processes (DDPPs)
- heterogeneous population models (HetPP)
In particular, it provides a numerical algorithm to compute the constant of the refined mean field approximation provided in the paper "A Refined Mean Field Approximation" by N. Gast and B. Van Houdt, SIGMETRICS 2018, and a framework to compute heterogeneous mean field approximations as proposed in "Mean Field and Refined Mean Field Approximations for Heterogeneous Systems: It Works!" by N. Gast and S. Allmeier, SIGMETRICS 2022.
The new results produced by the team in 2023 can be grouped into the following categories.
Visualization strategies are a valuable tool in the performance evaluation of HPC applications. Although the traditional Gantt chart is a widespread and enlightening strategy, it presents scalability problems and may misguide the analysis by focusing on resource utilization alone. In 16, we propose an overview strategy that indicates nodes of interest for further investigation with classical visualizations like Gantt charts. For this, it uses a progression metric that captures the work done per node, inferred from the task-based structure; a time-step clustering of these metrics to reduce redundant information; and a more scalable visualization technique. We demonstrate with six scenarios and two applications that such a strategy indicates problematic nodes more straightforwardly while using the same visualization space. We also provide examples where it correctly captures application work progression, revealing application problems earlier and offering an easy way to compare nodes, whereas traditional methods can be misleading.
This work completes our previous work on the performance analysis of task-based applications on heterogeneous platforms and is part of the PhD thesis of Lucas Leandro Nesi 27. It will be pursued in WP5 (Performance analysis and prediction) of the ExaSoft pillar (High Performance Computing software and tools) of the PEPR NumPEx (Numérique Hautes Performances pour l'Exascale). The rest of the thesis is more related to performance optimization (through algorithmic and reinforcement-learning techniques) and evaluation (through predictive simulation and real experiments). A particular effort has been devoted to the reproducibility of the results through the release of the data, the code, and the underlying methodology.
Mean field approximation is a powerful technique which has been used in many settings to study large-scale stochastic systems. Some of our latest developments have been transferred to the open source project rmf_tool (see 7.1.6). In the case of two-timescale systems, the approximation is obtained by a combination of scaling arguments and the use of the averaging principle. In 1, we analyze the approximation error of this 'average' mean field model for a two-timescale model, and show how this error decreases as the system size grows.
Finally, the PhD thesis of Thomas Barzola 22 presents a modular approach to compare optimization methods for bike-sharing systems (BSSs), which are nowadays installed in many cities. In such a system, a user can take any available bike and return it wherever there is an available parking spot. The Operations Research literature contains many papers that study optimization questions related to BSSs, in particular how to maximize the availability of bikes where and when users need them. Yet, the optimization methods proposed in these papers are difficult to compare because most papers use their own problem instances and define their own metrics. This thesis aims to fill this gap by building a reproducible research methodology for BSSs, divided into four modules: use of historical data, demand estimation, optimization methods, and performance evaluation. We study each module separately and, in each case, propose a prototype implementation and compare existing solutions when they are available.

The first module handles the use of data from real systems. For many systems, two types of data are usually available: trips made by users, and records of the number of bikes available in each station. We observe that these data are generally inconsistent, and we propose a method to correct them and to detect relocation operations. The second module is demand estimation: to optimize a BSS, it is essential to estimate the demand of the users for whom the system is designed. Most optimization works in the literature use historical demand for this purpose; we experiment with the few existing methods of the literature, along with a newly introduced method to detect censored demand. The third module is bike availability optimization. We implement a published optimization algorithm for this module as an example, and illustrate the challenges of reproducible research by trying to replicate its results: although the original authors made the data about their experiments available, we did not obtain the same quantitative results as the original publication, which highlights the need for better publication standards to produce more reproducible results. The fourth and last module validates the optimization methods implemented in the third one. We advocate that a simulator having all the requirements (user behavior models, demand scenarios, management strategies, etc.) can serve as a validation model, and we use a third-party simulator to illustrate this module.

We observe throughout this thesis that making research reproducible is not always handled with due diligence, while being fundamental to producing valuable knowledge. In this work, we did our best to specify and provide reproducible tools so that researchers can obtain the same results with the same data, and we give links to the data, code, environments and analyses needed to reproduce the experiments.
In 4, we optimize the scheduling of Deep Learning training jobs from the perspective of a Cloud Service Provider running a data center, which efficiently selects resources for the execution of each job to minimize the average energy consumption while satisfying time constraints. To model the problem, we first develop a Mixed-Integer Non-Linear Programming formulation. Unfortunately, the computation of an optimal solution is prohibitively expensive, and to overcome this difficulty, we design a heuristic STochastic Scheduler (STS). Exploiting the probability distribution of early termination, STS determines how to adapt the resource assignment during the execution of the jobs to minimize the expected energy cost while meeting the job due dates. The results of an extensive experimental evaluation show that STS guarantees significantly better results than other methods in the literature, effectively avoiding due date violations and yielding a percentage total cost reduction between 32% and 80% on average. We also prove the applicability of our method in real-world scenarios, as obtaining optimal schedules for systems of up to 100 nodes and 400 concurrent jobs requires less than 5 seconds. Finally, we evaluated the effectiveness of GPU sharing, i.e., running multiple jobs in a single GPU. The obtained results demonstrate that depending on the workload and GPU memory, this further reduces the energy cost by 17-29% on average.
Multi-armed bandits are a fundamental model for problems in which a decision maker iteratively selects one of multiple fixed alternatives (i.e., arms or actions) when the reward of each choice is only partially known at decision time and is learned as the decision maker interacts with the bandit. The regret of a strategy is the expected cumulative reward of the optimal arm (the one with the largest expected reward) minus the expected sum of the rewards collected by the strategy. Markov Decision Processes (MDPs) provide a framework for modeling situations where the state of a system (and its associated reward) evolves partly at random and partly under the control of the decision maker. The reward depends on the current state of the system, and good policies can be computed (e.g., using dynamic programming, although this can be computationally unreasonable) when the system is fully known upfront. We have considered the intermediate situation of restless and restful bandits, where each arm corresponds to an independent Markov chain but neither the chain nor the associated reward is initially known. Each time a particular arm is played, the state of its chain advances to a new one, chosen according to the Markov state-evolution probabilities. In the restless bandit problem, the states of non-played arms can also evolve over time.
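In standard notation, with $\mu^{\star}$ the expected reward of the best arm and $r_t$ the reward collected at step $t$, the regret after $T$ steps thus reads
\[
R(T) \;=\; T\,\mu^{\star} \;-\; \mathbb{E}\Big[\sum_{t=1}^{T} r_t\Big].
\]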
The Whittle index is a generalization of the Gittins index that provides very efficient allocation rules for restless multi-armed bandits. In 5, we develop an algorithm to test the indexability and compute the Whittle indices of any finite-state restless bandit arm. This algorithm works in the discounted and non-discounted cases, and can also compute Gittins indices. It builds on three tools: (1) a careful characterization of the Whittle index that allows one to recursively compute the k-th smallest index from the (k-1)-th smallest and to test indexability, (2) the use of the Sherman-Morrison formula to make this recursive computation efficient, and (3) a sporadic use of fast matrix multiplication, which yields a subcubic complexity.
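For intuition, the sketch below computes discounted Whittle indices of a toy two-state arm by brute force, bisecting on the subsidy of the passive action until the two actions are indifferent (this is the definition at work, not the efficient recursive algorithm of 5; indexability is assumed and all numbers are arbitrary):

```python
import numpy as np

def whittle_index(P0, P1, R0, R1, s, beta=0.9, iters=1000, tol=1e-6):
    """Bisect on the subsidy lam paid for the passive action until,
    in state s, the arm is indifferent between active and passive."""
    def action_gap(lam):
        V = np.zeros(len(R0))
        for _ in range(iters):                    # value iteration
            q_active = R1 + beta * (P1 @ V)
            q_passive = lam + R0 + beta * (P0 @ V)
            V = np.maximum(q_active, q_passive)
        return q_active[s] - q_passive[s]
    lo, hi = -5.0, 5.0                            # assumed bracketing interval
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if action_gap(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

P0 = np.array([[0.9, 0.1], [0.4, 0.6]])     # passive transition matrix
P1 = np.array([[0.5, 0.5], [0.2, 0.8]])     # active transition matrix
R0, R1 = np.zeros(2), np.array([0.0, 1.0])  # reward 1 when active in state 1
print([round(whittle_index(P0, P1, R0, R1, s), 4) for s in (0, 1)])
```

This brute-force approach costs a full value iteration per bisection step, which is precisely the inefficiency that the recursive, subcubic algorithm of 5 avoids.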
This work is part of the PhD thesis of Kimang Khun 25, where it was shown that no learning algorithm can perform uniformly well over the general class of restless bandits, and where several strategies for restful bandits were also studied.
In 6, we evaluate the performance of the Whittle index policy for restless Markovian bandits. It is shown in Weber and Weiss 110 that if the bandit is indexable and the associated deterministic system has a globally attracting fixed point, then the Whittle index policy is asymptotically optimal in the regime where the number of activated arms grows proportionally with the arm population. In this paper, we show that, under the same conditions, the convergence is exponential in the arm population, unless the fixed point is singular, which almost never happens in practice. Our result holds for the continuous-time model of Weber and Weiss (1990) and for a discrete-time model in which all bandits make synchronous transitions. Our proof is based on the nature of the deterministic equation governing the stochastic system: we show that it is a piecewise affine continuous dynamical system inside the simplex of the empirical measure of the arms. Using simulations and numerical solvers, we also investigate the singular cases, as well as how the level of singularity influences the (exponential) convergence rate. We illustrate our theorem on a Markovian fading-channel model.
In 7, we also provide a framework to analyze control policies for the restless Markovian bandit model, under both finite and infinite time horizons. We show that when the population of arms goes to infinity, the value of the optimal control policy converges to the solution of a linear program (LP). We provide necessary and sufficient conditions for a generic control policy to be (i) asymptotically optimal, (ii) asymptotically optimal with square-root convergence rate, or (iii) asymptotically optimal with exponential rate. We then construct the LP-index policy, which is asymptotically optimal with square-root convergence rate on all models, and with exponential rate if the model is non-degenerate in finite horizon and satisfies a uniform global attractor property in infinite horizon. We next define the LP-update policy, which is essentially a repeated LP-index policy that solves a new linear program at each decision epoch. We provide numerical experiments comparing the efficiency of the LP-index and LP-update policies with other heuristics; they demonstrate that the LP-update policy outperforms the LP-index policy in general, and can have a significant advantage when the transition matrices are wrongly estimated.
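Such an LP can be written down directly. The sketch below solves a standard infinite-horizon, average-reward relaxation for the toy arm used above, with occupancy-measure variables y(s, a), stationarity and normalization constraints, and a budget on the average fraction alpha of active arms (a textbook formulation; the finite-horizon LP of 7 is analogous):

```python
import numpy as np
from scipy.optimize import linprog

def lp_relaxation(P0, P1, R0, R1, alpha):
    """Occupancy measures y[a, s] (a=0 passive, a=1 active) maximizing
    the average reward under stationarity, normalization, and an
    activation budget (a fraction alpha of arms active on average)."""
    n = len(R0)
    c = -np.concatenate([R0, R1])          # linprog minimizes, so negate
    A_eq, b_eq = [], []
    for s2 in range(n):                    # stationarity: inflow = outflow
        row = np.zeros(2 * n)
        row[:n] += P0[:, s2]
        row[n:] += P1[:, s2]
        row[s2] -= 1.0
        row[n + s2] -= 1.0
        A_eq.append(row); b_eq.append(0.0)
    A_eq.append(np.ones(2 * n)); b_eq.append(1.0)        # probabilities sum to 1
    budget = np.concatenate([np.zeros(n), np.ones(n)])   # total active mass
    res = linprog(c, A_ub=[budget], b_ub=[alpha],
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun, res.x.reshape(2, n)

P0 = np.array([[0.9, 0.1], [0.4, 0.6]])
P1 = np.array([[0.5, 0.5], [0.2, 0.8]])
R0, R1 = np.zeros(2), np.array([0.0, 1.0])
gain, y = lp_relaxation(P0, P1, R0, R1, alpha=0.4)
print("LP upper bound on the optimal gain per arm:", round(gain, 4))
```

The solution y gives the target state-action frequencies from which index- and update-style policies can then be derived.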
Although regret is a common objective in reinforcement learning, other criteria are relevant and make it possible to better understand or discriminate between algorithms.
The first contribution of 12 is the introduction of a new performance measure for an RL algorithm that is more discriminating than the regret, which we call the regret of exploration: it measures the asymptotic cost of exploration. The second contribution is a new performance test (PT) to end episodes in optimistic RL algorithms. This test is based on the performance of the current policy with respect to the best policy over the current confidence set, in contrast with all existing RL algorithms, whose episode lengths are based only on the number of visits to the states. This modification does not harm the regret and brings an additional property: we show that while all current episodic RL algorithms have a linear regret of exploration, our method achieves a sublinear one.
In 11, we investigate a new learning problem, the identification of Blackwell optimal policies on deterministic MDPs (DMDPs): a learner has to return a Blackwell optimal policy with fixed confidence using a minimal number of queries. First, we characterize the maximal set of DMDPs for which identification is possible. Then, we focus on the analysis of algorithms based on product-form confidence regions. We minimize the number of queries by visiting state-action pairs in an order adapted to the shape of the confidence sets, and these confidence sets are themselves optimized to achieve better performance. The performance of our method matches the lower bound up to a multiplicative factor.
In 14, we propose the first model-free algorithm that achieves low regret for decentralized learning in two-player zero-sum tabular stochastic games with the infinite-horizon average-reward objective. In decentralized learning, the learning agent controls only one player and tries to achieve low regret against an arbitrary opponent. This contrasts with centralized learning, where the agent tries to approximate the Nash equilibrium by controlling both players. In our infinite-horizon undiscounted setting, additional structural assumptions are needed for learning processes to behave well: here we assume that, for every strategy of the opponent, the agent can reach any state from any other state. This assumption is the analogue of the "communicating" assumption in the MDP setting. We show that our Decentralized Optimistic Nash Q-Learning (DONQ-learning) algorithm achieves both a sublinear high-probability regret of order T^{3/4} and a sublinear expected regret of order T^{2/3}. Moreover, our algorithm enjoys low computational complexity and low memory requirements compared to previous works in the same setting.
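A useful building block for seeing what the decentralized approach avoids: in centralized, Nash-Q-style learning, every state requires computing the value of a zero-sum matrix game, which is a linear program. A minimal sketch of that standard subroutine (a textbook construction, not code from 14):

    import numpy as np
    from scipy.optimize import linprog

    def game_value(A):
        """Value and maximin strategy of max_x min_y x^T A y over the simplices."""
        m, n = A.shape
        # Variables (v, x): maximize v subject to A^T x >= v 1 and x in the simplex.
        c = np.zeros(m + 1); c[0] = -1.0              # linprog minimizes, so use -v
        A_ub = np.hstack([np.ones((n, 1)), -A.T])     # v - (A^T x)_j <= 0 for all j
        b_ub = np.zeros(n)
        A_eq = np.hstack([np.zeros((1, 1)), np.ones((1, m))])
        b_eq = np.ones(1)
        bounds = [(None, None)] + [(0, None)] * m
        res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=bounds)
        return -res.fun, res.x[1:]

    v, x = game_value(np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]]))
    print(v, x)    # rock-paper-scissors: value 0, uniform maximin strategy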
Finally, in 30, we present an efficient reinforcement learning algorithm that learns the optimal admission control policy in a partially observable queueing network. Specifically, only the arrival and departure times from the network are observable, and optimality refers to the average holding/rejection cost in infinite horizon. While reinforcement learning in Partially Observable Markov Decision Processes (POMDP) is prohibitively expensive in general, we show that our algorithm has a regret that depends only sub-linearly on the maximal number of jobs in the network, rather than on the diameter of the underlying Markov Decision Process (MDP), which in most queueing systems is at least exponential in the maximal number of jobs.
This work is part of the PhD thesis of Louis-Sebastien Rebuffi 29; it yields reinforcement learning algorithms for controlled queueing systems whose regret depends only weakly on the size of the state space, in contrast with the bounds obtained in the general case.
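As a much-simplified illustration of the control problem behind 30 (a single M/M/1 queue rather than a partially observable network; rates and costs are placeholders), the sketch below estimates by simulation the average cost of threshold admission policies, the natural policy class here:

    import numpy as np

    rng = np.random.default_rng(2)
    lam, mu, h, c = 0.8, 1.0, 1.0, 5.0   # arrival/service rates, holding/rejection costs

    def avg_cost(threshold, horizon=50000.0):
        q, t, cost = 0, 0.0, 0.0
        while t < horizon:
            rate = lam + (mu if q > 0 else 0.0)
            dt = rng.exponential(1.0 / rate)       # time to the next event
            cost += h * q * dt                     # holding cost accrues continuously
            t += dt
            if rng.random() < lam / rate:          # the event is an arrival
                if q >= threshold:
                    cost += c                      # admission refused: rejection cost
                else:
                    q += 1
            else:                                  # the event is a departure
                q -= 1
        return cost / t

    for k in (1, 2, 5, 10):
        print("threshold", k, "average cost", round(avg_cost(k), 3))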
Learning in games naturally occurs in situations where the resources or the decisions are distributed among several agents, or where a centralised decision maker has to adapt to strategic users. Yet it is considerably more difficult than in classical minimization problems, as the resulting equilibria may or may not be attracting, and the dynamics often exhibit cyclic behavior.
A wide array of modern machine learning applications – from adversarial models to multi-agent reinforcement learning – can be formulated as non-cooperative games whose Nash equilibria represent the system's desired operational states. Despite having a highly non-convex loss landscape, many cases of interest possess a latent convex structure that could potentially be leveraged to yield convergence to an equilibrium. Driven by this observation, we propose in 20 a flexible first-order method that successfully exploits such "hidden structures" and achieves convergence under minimal assumptions for the transformation connecting the players' control variables to the game's latent, convex-structured layer. The proposed method – which we call preconditioned hidden gradient descent (PHGD) – hinges on a judiciously chosen gradient preconditioning scheme related to natural gradient methods. Importantly, we make no separability assumptions for the game's hidden structure, and we provide explicit convergence rate guarantees for both deterministic and stochastic environments.
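The sketch below illustrates the general idea on a toy two-player game, not the method of 20 itself: the latent payoff L(u, v) = uv + (u^2 - v^2)/2 is strongly convex-concave, but players act through the reparametrization u = f(x), v = f(y) with f(z) = z + z^3. Plain gradient descent-ascent can blow up where f' is large, while preconditioning each gradient by 1/f'(x)^2 (a scalar stand-in for a natural-gradient-style preconditioner) recovers the well-behaved latent dynamics:

    import numpy as np

    f  = lambda z: z + z ** 3            # strictly increasing reparametrization
    df = lambda z: 1.0 + 3.0 * z ** 2    # its derivative: f'(z) >= 1

    def run(precondition, eta=0.1, T=500):
        x, y = 2.0, -1.0
        for _ in range(T):
            u, v = f(x), f(y)
            gu, gv = v + u, u - v                    # dL/du and dL/dv
            gx, gy = gu * df(x), gv * df(y)          # chain rule to the control variables
            if precondition:
                gx, gy = gx / df(x) ** 2, gy / df(y) ** 2
            x, y = x - eta * gx, y + eta * gy        # player 1 descends, player 2 ascends
            if abs(x) > 1e6 or abs(y) > 1e6:         # un-preconditioned play can blow up
                return float("nan"), float("nan")
        return f(x), f(y)                            # the latent equilibrium is (0, 0)

    print("plain GDA:         ", run(False))   # typically diverges (nan) from this start
    print("preconditioned GDA:", run(True))    # converges to the latent equilibrium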
In 13, we show the equivalence of dynamic and strategic stability under regularized learning in games by examining the long-run behavior of regularized, no-regret learning in finite games. A well-known result in the field states that the empirical frequencies of no-regret play converge to the game's set of coarse correlated equilibria; however, our understanding of how the players' actual strategies evolve over time is much more limited - and, in many cases, non-existent. This issue is exacerbated further by a series of recent results showing that only strict Nash equilibria are stable and attracting under regularized learning, thus making the relation between learning and pointwise solution concepts particularly elusive. In lieu of this, we take a more general approach and instead seek to characterize the setwise rationality properties of the players' day-to-day play. To that end, we focus on one of the most stringent criteria of setwise strategic stability, namely that any unilateral deviation from the set in question incurs a cost to the deviator - a property known as closedness under better replies (club). In so doing, we obtain a far-reaching equivalence between strategic and dynamic stability: a product of pure strategies is closed under better replies if and only if its span is stable and attracting under regularized learning. In addition, we estimate the rate of convergence to such sets, and we show that methods based on entropic regularization (like the exponential weights algorithm) converge at a geometric rate, while projection-based methods converge within a finite number of iterations, even with bandit, payoff-based feedback.
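For a self-contained illustration of the geometric rate attained by entropic methods, consider exponential weights in a toy 2x2 game in which the first action of each player is strictly dominant, so that (0, 0) is a strict Nash equilibrium (the payoff matrix is a placeholder, not an example from 13):

    import numpy as np

    A = np.array([[4.0, 1.0], [3.0, 0.0]])     # row player's payoffs; action 0 dominates
    B = A.T                                     # symmetric game: same payoffs for player 2

    eta, y1, y2 = 0.1, np.zeros(2), np.zeros(2)
    for t in range(301):
        x1 = np.exp(y1 - y1.max()); x1 /= x1.sum()   # logit / exponential-weights map
        x2 = np.exp(y2 - y2.max()); x2 /= x2.sum()
        if t % 60 == 0:
            print(t, 1.0 - x1[0], 1.0 - x2[0])       # gap to the strict equilibrium
        y1 += eta * A @ x2                           # expected payoffs of pure actions
        y2 += eta * B @ x1

The printed gap decays like exp(-c t): the score difference between the two actions grows linearly (the payoff gap is at least 1), so the mass on the dominated action vanishes geometrically.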
In 3, we examine the long-run behavior of multi-agent online learning in games that evolve over time. Specifically, we focus on a wide class of policies based on mirror descent, and we show that the induced sequence of play (a) converges to Nash equilibrium in time-varying games that stabilize in the long run to a strictly monotone limit; and (b) it stays asymptotically close to the evolving equilibrium of the sequence of stage games (assuming they are strongly monotone). Our results apply to both gradient-based and payoff-based feedback – i.e., when players only get to observe the payoffs of their chosen actions.
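A minimal sketch of claim (b) on a toy instance (two players with quadratic costs whose joint equilibrium drifts along a slowly moving target; not one of the paper's models): online gradient play stays within a small distance of the slowly drifting equilibrium.

    import numpy as np

    Q = np.array([[2.0, 0.5], [0.5, 2.0]])   # positive definite: strongly monotone game
    eta, x = 0.2, np.zeros(2)
    for t in range(501):
        b = np.array([np.sin(0.01 * t), np.cos(0.01 * t)])   # slowly drifting targets
        if t % 100 == 0:
            eq = np.linalg.solve(Q, 2.0 * b)   # equilibrium of the current stage game
            print(t, np.linalg.norm(x - eq))   # tracking error stays small
        # Player i minimizes (x_i - b_i)^2 + 0.5 * x_i * x_other:
        grad = 2.0 * (x - b) + 0.5 * x[::-1]
        x -= eta * grad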
In 9, we develop a flexible stochastic approximation framework for analyzing the long-run behavior of learning in games (both continuous and finite). The proposed analysis template incorporates a wide array of popular learning algorithms, including gradient-based methods, the exponential / multiplicative weights algorithm for learning in finite games, optimistic and bandit variants of the above, etc. In addition to providing an integrated view of these algorithms, our framework further allows us to obtain several new convergence results, both asymptotic and in finite time, in both continuous and finite games. Specifically, we provide a range of criteria for identifying classes of Nash equilibria and sets of action profiles that are attracting with high probability, and we also introduce the notion of coherence, a game-theoretic property that includes strict and sharp equilibria, and which leads to convergence in finite time. Importantly, our analysis applies to both oracle-based and bandit, payoff-based methods – that is, when players only observe their realized payoffs.
This work is part of the PhD thesis of Yu Guan Hsieh 23, entitled "Decision-making in multi-agent systems: delays, adaptivity, and learning in games", which investigates two critical aspects of multi-agent systems: the impact of delays, and the interactions among agents with non-aligned interests.
Although the games we generally consider for learning have nothing to do with the quantum world, they often involve probabilities (to account for the uncertainty of the agents or of nature) and semi-definite programming (e.g., when dealing with the optimization of MIMO antennas). Quantum games have thus been a natural target for which we have proposed several contributions.
Recent developments in domains such as non-local games, quantum interactive proofs, and quantum generative adversarial networks have renewed interest in quantum game theory and, specifically, quantum zero-sum games. Central to classical game theory is the efficient algorithmic computation of Nash equilibria, which represent optimal strategies for both players. In 2008, Jain and Watrous proposed the first classical algorithm for computing equilibria in quantum zero-sum games, using the Matrix Multiplicative Weight Updates (MMWU) method to compute an ε-approximate equilibrium in O(1/ε²) iterations.
In 17, we study the problem of learning in quantum games, and other classes of semidefinite games, with scalar, payoff-based feedback. For concreteness, we focus on the widely used matrix multiplicative weights (MMW) algorithm and, instead of requiring players to have full knowledge of the game (and/or of each other's chosen states), we introduce a suite of minimal-information matrix multiplicative weights (3MW) methods tailored to different information frameworks. The main difficulty to attaining convergence in this setting is that, in contrast to classical finite games, quantum games have an infinite continuum of pure states (the quantum equivalent of pure strategies), so standard importance-weighting techniques for estimating payoff vectors cannot be employed. Instead, we borrow ideas from bandit convex optimization and design a zeroth-order gradient sampler adapted to the semidefinite geometry of the problem at hand. As a first result, we show that the 3MW method with deterministic payoff feedback retains the O(1/√T) convergence rate of the vanilla, full-information MMW algorithm.
Finally, in 18, we study the equilibrium convergence and stability properties of the widely used matrix multiplicative weights (MMW) dynamics for learning in general quantum games. A key difficulty in this endeavor is that the induced quantum state dynamics decompose naturally into (i) a classical, commutative component which governs the dynamics of the system's eigenvalues in a way analogous to the evolution of mixed strategies under the classical replicator dynamics; and (ii) a non-commutative component for the system's eigenvectors. This non-commutative component has no classical counterpart and, as a result, requires the introduction of novel notions of (asymptotic) stability to account for the nonlinear geometry of the game's quantum space. In this general context, we show that (i) only pure quantum equilibria can be stable and attracting under the MMW dynamics; and (ii) as a partial converse, pure quantum states that satisfy a certain "variational stability" condition are always attracting. This allows us to fully characterize the structure of quantum Nash equilibria that are stable and attracting under the MMW dynamics, a fact with significant implications for predicting the outcome of a multi-agent quantum learning process.
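A single-agent sketch of the MMW update itself, with a fixed, randomly drawn Hermitian payoff gradient G standing in for the game feedback (illustration only, not the games studied in 18): iterating X ∝ exp(η Σ G) drives the state to the projector on the top eigenspace of G, i.e., to a pure state, echoing the stability picture above.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(3)
    M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    G = (M + M.conj().T) / 2.0                 # Hermitian payoff gradient

    eta, Y = 0.2, np.zeros((3, 3), dtype=complex)
    X = np.eye(3) / 3.0                        # maximally mixed initial state
    for t in range(200):
        Y += eta * G                           # accumulate (here: constant) gradients
        X = expm(Y)
        X /= np.trace(X).real                  # normalize back to a density matrix
    print("purity tr(X^2) =", np.trace(X @ X).real)   # approaches 1: a pure state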
Variational inequalities – and, in particular, stochastic variational inequalities – have recently attracted considerable attention in machine learning and learning theory as a flexible paradigm for "optimization beyond minimization", i.e., for problems where finding an optimal solution does not necessarily involve minimizing a loss function.
Many modern machine learning applications – from online principal component analysis to covariance matrix identification and dictionary learning – can be formulated as minimization problems on Riemannian manifolds, and are typically solved with a Riemannian stochastic gradient method (or some variant thereof). However, in many cases of interest, the resulting minimization problem is not geodesically convex, so the convergence of the chosen solver to a desirable solution – i.e., a local minimizer – is by no means guaranteed. In 15, we study precisely this question, that is, whether stochastic Riemannian optimization algorithms are guaranteed to avoid saddle points with probability 1. For generality, we study a family of retraction-based methods which, in addition to having a potentially much lower per-iteration cost relative to Riemannian gradient descent, include other widely used algorithms, such as natural policy gradient methods and mirror descent in ordinary convex spaces. In this general setting, we show that, under mild assumptions for the ambient manifold and the oracle providing gradient information, the policies under study avoid strict saddle points / submanifolds with probability 1, from any initial condition. This result provides an important sanity check for the use of gradient methods on manifolds as it shows that, almost always, the limit state of a stochastic Riemannian algorithm can only be a local minimizer.
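A minimal sketch of such a retraction-based method on the unit sphere (a toy Rayleigh-quotient problem, not an example from 15): minimizing x^T A x over the sphere has the eigenvectors of A as critical points, all of which are strict saddles except the bottom one. Started exactly at a saddle, the noisy iterates escape and settle at a minimizer:

    import numpy as np

    rng = np.random.default_rng(4)
    A = np.diag([1.0, 2.0, 3.0])
    x = np.array([0.0, 1.0, 0.0])       # an eigenvector of A: a strict saddle

    eta = 0.05
    for _ in range(2000):
        g = 2.0 * A @ x + 0.01 * rng.normal(size=3)   # noisy Euclidean gradient
        rg = g - (g @ x) * x                          # Riemannian gradient (tangent part)
        x = x - eta * rg                              # gradient step in the tangent space
        x /= np.linalg.norm(x)                        # retraction: pull back to the sphere
    print(x)    # close to (+/-1, 0, 0), the bottom eigenvector (the minimizer)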
In 31, we examine the last-iterate convergence rate of Bregman proximal methods - from mirror descent to mirror-prox and its optimistic variants - as a function of the local geometry induced by the prox-mapping defining the method. For generality, we focus on local solutions of constrained, non-monotone variational inequalities, and we show that the convergence rate of a given method depends sharply on its associated Legendre exponent, a notion that measures the growth rate of the underlying Bregman function (Euclidean, entropic, or other) near a solution. In particular, we show that boundary solutions exhibit a stark separation of regimes between methods with a zero and non-zero Legendre exponent: the former converge at a linear rate, while the latter converge, in general, sublinearly. This dichotomy becomes even more pronounced in linearly constrained problems where methods with entropic regularization achieve a linear convergence rate along sharp directions, compared to convergence in a finite number of steps under Euclidean regularization.
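A minimal sketch of this dichotomy in its simplest boundary case (minimizing a linear loss over the simplex, whose solution is the vertex e_1 and is sharp): the Euclidean prox (projected gradient) reaches the vertex exactly after finitely many steps, while the entropic prox (multiplicative weights) only approaches it at a geometric (linear) rate.

    import numpy as np

    c = np.array([0.0, 1.0, 2.0])           # linear loss; the minimizer is e_1

    def proj_simplex(v):                    # Euclidean projection via the sorting method
        u = np.sort(v)[::-1]
        css = np.cumsum(u) - 1.0
        rho = np.nonzero(u - css / (np.arange(len(v)) + 1.0) > 0)[0][-1]
        return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

    x_euc = np.ones(3) / 3.0
    x_ent = np.ones(3) / 3.0
    for t in range(41):
        if t % 10 == 0:
            print(t, 1.0 - x_euc[0], 1.0 - x_ent[0])     # distance to the vertex
        x_euc = proj_simplex(x_euc - 0.2 * c)            # Euclidean (projected) step
        w = x_ent * np.exp(-0.2 * c)                     # entropic (multiplicative) step
        x_ent = w / w.sum()

After a handful of iterations the Euclidean gap is exactly 0, while the entropic gap keeps decaying like exp(-0.2 t).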
Random matrix theory has recently proven to be a very effective tool to understand Machine Learning challenges. In particular, concentration results can be used to derive more efficient and frugal algorithms.
The PhD thesis of Minh-Toan Nguyen 28 provides a nice overview, with a deep perspective, of the replica method and asymptotic equivalence. The replica method is a favorite tool of physicists for studying large disordered systems. Although the method is highly non-rigorous, it can solve difficult problems across various domains: random matrix theory, convex optimization, combinatorial optimization, Bayesian inference, etc. The method has been successfully used to analyze theoretical models in wireless communication and machine learning. The rigorous alternatives to the replica method include the method of deterministic equivalents in random matrix theory, the objective method in combinatorial optimization, and the CGMT (convex Gaussian min-max theorem) in random convex optimization. Although these methods work in different domains, they offer one common insight: asymptotic equivalence, which tells us that the large system under study is equivalent to a simpler system. As a result, many difficult computations on the original system can be carried out more easily on the equivalent system. With the replica method, in contrast, the insights come after the calculations: we start by writing down what we want to compute and proceed to get the answer at the end; after calculating various quantities related to the system, with some observations and good intuition, we may uncover the equivalent system. In this thesis, we show that the asymptotic equivalent of a disordered system can be obtained directly through the replica formalism by paying attention to the large-deviation computations lurking behind the replica computations. In other words, we develop a version of the replica method that can directly compute the asymptotic equivalent of a disordered system. This version of the replica method, which fits into the same framework of deterministic equivalence as the rigorous methods above, can compute the deterministic equivalents of random matrices, formally derive the CGMT, and solve problems in high-dimensional Bayesian statistics. Moreover, it can derive results on the Sherrington-Kirkpatrick model in a clear and simple manner. In this version of the replica method, each disordered system is associated with an object called "the replica density". By de Finetti's theorem, a disordered system can be recovered from its replica density. To compute the asymptotic equivalent of a disordered system, we compute the equivalent of its replica density, using a result that we derive from the fundamental Gibbs principle. We thus obtain another replica density, which corresponds to another disordered system, and this system is asymptotically equivalent to the original one.
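For reference, the identity at the root of the replica formalism (standard background, not a contribution of the thesis) is

    \[
      \mathbb{E}[\log Z] \;=\; \lim_{n \to 0} \frac{1}{n} \log \mathbb{E}[Z^{n}],
    \]

where Z is the partition function of the disordered system: the moments E[Z^n] are computed for integer n by introducing n interchangeable copies (the "replicas") of the system, and the resulting formula is then continued, non-rigorously, to n → 0.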
The widespread deployment of machine-learning systems to guide strategic decisions, in domains ranging from security to recommendation and advertising, leads to an interesting line of research from a game-theoretic perspective. In this context, fairness, discrimination, and privacy are particularly important issues.
In 32, we study statistical discrimination in matching, where multiple decision-makers are simultaneously facing selection problems from the same pool of candidates. We propose a model where decision-makers observe different, but correlated estimates of each candidate's quality. The candidate population consists of several groups that represent gender, ethnicity, or other attributes. The correlation differs across groups and may, for example, result from noisy estimates of candidates' latent qualities, a weighting of common and decision-maker specific evaluations, or different admission criteria of each decision maker. We show that lower correlation (e.g., resulting from higher estimation noise) for one of the groups worsens the outcome for all groups, thus leading to efficiency loss. Further, the probability that a candidate is assigned to their first choice is independent of their group. In contrast, the probability that a candidate is assigned at all depends on their group, and — against common intuition — the group that is subjected to lower correlation is better off. The resulting inequality reveals a novel source of statistical discrimination.
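The following Monte Carlo sketch gives a feel for the mechanism, in a deliberately crude serial version of the model (the paper studies a simultaneous matching; the group noise levels, pool sizes and capacities below are placeholders). Each decision-maker ranks candidates by its own noisy estimate of their latent quality; noisier estimates mean lower correlation between the two decision-makers' rankings, and one can inspect how assignment rates differ across groups:

    import numpy as np

    rng = np.random.default_rng(5)
    n, slots, trials = 40, 10, 2000
    sigma = {"A": 0.2, "B": 1.0}       # group B: noisier estimates, lower correlation

    assigned = {"A": 0, "B": 0}
    groups = np.array(["A"] * (n // 2) + ["B"] * (n // 2))
    for _ in range(trials):
        q = rng.normal(size=n)                              # latent qualities
        noise = rng.normal(size=(2, n))
        picked = set()
        for d in range(2):                                  # two decision-makers in turn
            est = q + noise[d] * np.array([sigma[g] for g in groups])
            chosen = [i for i in np.argsort(-est) if i not in picked][:slots]
            picked.update(chosen)
        for i in picked:
            assigned[groups[i]] += 1

    for g in ("A", "B"):
        print(g, "assignment rate:", assigned[g] / (trials * n / 2))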
In 8, we conducted a large number of controlled continuous double auction experiments to reproduce and stress-test the phenomenon of convergence to competitive equilibrium under private information with decentralized trading feedback. Our main finding is that, across a total of 104 markets (involving over 1,700 subjects), convergence occurs after a handful of trading periods. Initially, however, there is an inherent asymmetry that favors buyers, typically resulting in prices below equilibrium levels. Analysis of over 80,000 observations of individual bids and asks helps identify the empirical ingredients contributing to the observed phenomena, including initially higher levels of aggressiveness among buyers than among sellers.
This work is part of the PhD thesis of Simon Jantschgi 24 on market design for double auctions.
Individual behavior such as choice of fashion, adoption of new products, and selection of means of transport is influenced by taking account of others' actions. In 10, we study social influence in a heterogeneous population and analyze the behavior of the dynamic processes. We distinguish between two information regimes: (i) agents are influenced by the adoption ratio, (ii) agents are influenced by the usage history. We identify the stable equilibria and long-run frequencies of the dynamics. We then show that the two processes generate qualitatively different dynamics, leaving characteristic 'footprints'. In particular, (ii) favors more extreme outcomes than (i).
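A minimal sketch contrasting the two regimes with toy dynamics (chosen for illustration; these are not the paper's exact processes): in regime (i), agents respond to the current adoption ratio through a smoothed response with a stable interior mix, while in regime (ii), choices reinforce the cumulative usage history, a Polya-urn-like update whose long-run share is random, hence the more extreme outcomes:

    import numpy as np

    rng = np.random.default_rng(6)

    def final_share(regime, T=5000):
        x = 0.5                          # regime (i) state: current adoption ratio
        a, n = 1.0, 2.0                  # regime (ii) state: usage counts (A, total)
        for t in range(T):
            if regime == "ratio":
                p = 0.5 + 0.3 * (x - 0.5)           # smoothed response to the ratio
                x += (float(rng.random() < p) - x) / (t + 2)
            else:
                p = a / n                            # reinforcement of usage history
                a += float(rng.random() < p)
                n += 1.0
        return x if regime == "ratio" else a / n

    for regime in ("ratio", "history"):
        shares = [final_share(regime) for _ in range(300)]
        print(regime, "spread of long-run shares:", round(float(np.std(shares)), 3))

Under (i) the share concentrates at the stable mix 1/2; under (ii) it settles at a random, trial-dependent limit, so the spread across runs remains large.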
In 19, we consider the problem of online allocation subject to a long-term fairness penalty. Contrary to existing works, however, we do not assume that the decision-maker observes the protected attributes, which is often unrealistic in practice. Instead, the decision-maker can purchase data of varying quality that help estimate these attributes, and hence reduce the fairness penalty at some cost. We model this problem as a multi-armed bandit problem where each arm corresponds to the choice of a data source, coupled with the online allocation problem. We propose an algorithm that jointly solves both problems and show that it enjoys a sublinear regret bound.
Finally, fairness has also been studied in the PhD thesis of Till Kletti 26, in the context of multi-stakeholder recommendation platforms. The object of study of this thesis is the ranking of potentially relevant objects in response to an information request, for example when using a search engine or in the case of online content recommendation. Such a ranking brings together two groups: users searching for relevant information, and content producers, whose goal is to make the produced information visible. For example, when searching for restaurants, the user is interested in seeing good restaurants, while the interest of the restaurant owners is to be seen by many people, in order to attract customers. The objects to be ranked are thus competing with each other, and it is in the interest of the platform generating the rankings to ensure that the exposure allocated to the objects is fairly distributed. Obviously, there are many possibilities for defining what fair means, and none of them will be unanimously agreed upon. Therefore, in this thesis, the definition of fairness is taken as a parameter, represented by a vector of merits, which determines the proportions in which visibility should be distributed amongst the items. This makes the method applicable to a wide range of possible definitions. Two things then become apparent. First, there does not, in general, exist a single ranking that is fair in the sense of proportionality of exposure to merit. It is therefore necessary to produce several rankings that compensate each other in order to give, on average, fair exposures to the items. Secondly, these rankings do not generally give maximum utility to the user: to guarantee fairness, less relevant objects may have to be shown. These two objectives, fairness and utility, are thus not simultaneously optimizable. The contribution of this thesis is to develop methods to determine Pareto-optimal ranking sequences, i.e., sequences such that it is not possible to improve one of the two objectives without deteriorating the other. The idea is that this makes it possible for a qualified decision maker to make an informed choice about an adequate trade-off between user utility and fairness amongst items. The determination of these optimal sequences is accomplished via the introduction of a geometric object, a polytope named the expohedron. This polytope expresses the set of average exposures attainable with ranking sequences and is therefore a good decision space for both fairness and utility. The expohedron makes it possible to compute these optimal ranking sequences using only mathematically exact geometric constructions inside it, and this in a significantly faster way than previous methods based on linear programming. Moreover, the proposed method is applicable to two large classes of exposure models, including the Position-Based Model (PBM) and Dynamic Bayesian Network (DBN) models to which linear programming is not applicable.
Patrick Loiseau has a Cifre contract with Naver Labs (2020-2023) on "Fairness in multi-stakeholder recommendation platforms", which supports the PhD student Till Kletti.
Nicolas Gast obtained a grant from Enedis to evaluate the performance of the PLC-G3 protocol. This grant supported the post-doc of Henry-Joseph Audeoud.
Adaptive Learning for Interactive Agents and Systems [284K€]
Partners: Singapore University of Technology and Design (SUTD).
ALIAS is a bilateral PRCI (collaboration internationale) project joint with Singapore University of Technology and Design (SUTD), coordinated by Bary Pradelski (PI) and involving P. Mertikopoulos and P. Loiseau. The Singapore team consists of G. Piliouras and G. Panageas. The goal of the project is to provide a unified answer to the question of stability in multi-agent systems: for systems that can be controlled (such as programmable machine learning models), prescriptive learning algorithms can steer the system towards an optimum configuration; for systems that cannot (e.g., online assignment markets), a predictive learning analysis can determine whether stability can arise in the long run. We aim to identify the fundamental limits of learning in multi-agent systems and to design novel, robust algorithms that achieve convergence in cases where conventional online learning methods fail.
Refined Mean Field Optimization [250K€]
REFINO is an ANR starting grant (JCJC) coordinated by Nicolas Gast. The main objective of this project is to provide an innovative framework for the optimal control of stochastic distributed agents. Restless bandit allocation is one particular example, where the control that can be sent to each arm is restricted to an on/off signal. The originality of this framework is the use of refined mean field approximation to develop control heuristics that are asymptotically optimal as the number of arms goes to infinity, and that also perform better than existing heuristics for a moderate number of arms. As an example, we will use this framework in the context of smart grids, to develop control policies for distributed electric appliances.
Fair Algorithms via Game Theory and Sequential Learning [245K€]
FAIRPLAY is an ANR starting grant (JCJC) coordinated by Patrick Loiseau. Machine learning algorithms are increasingly used to optimize decision making in various areas, but this can result in unacceptable discrimination. The main objective of this project is to propose an innovative framework for the development of learning algorithms that respect fairness constraints. While the literature mostly focuses on idealized settings, the originality of this framework, and the central focus of this project, is the use of game theory and sequential learning to account for constraints that appear in practical applications: the strategic and decentralized aspects of the decisions and of the data provided, and the absence of knowledge of certain parameters that are key to the fairness definition.
We have contributed to a French-speaking teacher training center dedicated to computer science education in high school.