Neo is an Inria project-team whose members are located in Sophia Antipolis (S. Alouf, K. Avrachenkov,
G. Neglia, and S. M. Perlaza), in Avignon (E. Altman) at Lia (Lab. of Informatics of Avignon) and in Montpellier (A. Jean-Marie).
E. Altman is also with the LINCS (Lab. for Information, Networking and Communication Sciences).
S. M. Perlaza is also with the ECE department at Princeton Univ., N.J. USA; and the Mathematics Department of the Univ. de la Polynésie française (Laboratoire GAATI), Faaa, Tahiti.
The team is positioned at the intersection of Operations Research and Network Science. By using the tools of Stochastic Operations Research, we model situations arising in several application domains, all involving networking in one way or another. The aim is to understand the rules and the effects in order to influence and control them so as to engineer the creation and the evolution of complex networks.
The problems studied in Neo generally involve optimization, dynamic systems, or randomness, and often all three at once. The techniques we use to tackle these problems are those of Stochastic Operations Research, Applied Probability, and Information Theory.
Stochastic Operations Research is a collection of modeling, optimization and numerical computation techniques, aimed at assessing the behavior of man-made systems driven by random phenomena, and at helping to make decisions in such a context.
The discipline is based on applied probability and focuses on
effective computations and algorithms. Its core theory is that of
Markov chains over discrete state spaces. This family of stochastic
processes offers, at the same time, a very large modeling capability and
the potential for efficient solutions. By “solution” is meant the
calculation of some performance metric, usually the
distribution of some random variable of interest, or its average,
variance, etc. This solution is obtained either through exact
“analytic” formulas, or numerically through linear algebra
methods. Even when not analytically or numerically tractable,
Markovian models are always amenable to “Monte-Carlo” simulations
with which the metrics can be statistically measured.
An example of this is the success of classical Queueing Theory,
with its numerous analytical formulas. Another important derived
theory is that of Markov Decision Processes, which makes it possible to
formalize optimal decision problems in a random environment.
This theory characterizes the optimal decisions and provides
algorithms for calculating them.
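As a toy illustration of the linear-algebra route to a “solution” mentioned above, the following sketch computes the stationary distribution of a small, hypothetical discrete-time Markov chain; the transition matrix and the solver are made up for the example and do not reflect any specific model studied by the team.

    import numpy as np

    # Hypothetical 3-state discrete-time Markov chain (rows sum to 1).
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.7, 0.2],
                  [0.2, 0.3, 0.5]])

    # The stationary distribution pi satisfies pi P = pi and sum(pi) = 1.
    # Solve the over-determined linear system [P^T - I; 1...1] pi = [0; 1].
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones((1, n))])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)  # long-run fraction of time spent in each state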
Strong trends in Operations Research are: a) the increasing importance of multi-criteria, multi-agent optimization, and the correlated introduction of Game Theory into the standard methodology; b) an increasing concern of (deterministic) Operations Research with randomness and risk, and the consequent introduction of topics like Chance-Constrained Programming and Stochastic Optimization. Data analysis is also increasingly present in Operations Research: techniques from statistics, such as filtering and estimation, or from Artificial Intelligence, such as clustering, are coupled with modeling in Machine Learning techniques like Q-Learning.
Network Science is a multidisciplinary body of knowledge, principally concerned with the emergence of global properties in a network of individual agents. These global properties emerge from “local” properties of the network, namely, the way agents interact with each other. The central model of “networks” is the graph (of Graph Theory/Operations Research). Nodes represent the different entities managing information and taking decisions, whereas links represent whether or not entities interact. Links are usually equipped with a “weight” that measures the intensity of the interaction. Adding evolution rules to this quite elementary representation leads to dynamic network models, the properties of which Network Science tries to analyze.
A classical example of properties sought in networks is the famous “six degrees of separation” (or “small world”) property: how and why does it happen so frequently? Another ubiquitous property of real-life networks is the Zipf or “scale-free” distribution for degrees. Some of these properties, when properly exploited, lead to successful business opportunities: just consider the PageRank algorithm of Google, which miraculously connects the relevance of some Web information with the relevance of the other information that points to it.
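To make the PageRank remark concrete, here is a minimal power-iteration sketch on a toy directed graph; the damping factor, the handling of dangling nodes, and the graph itself are illustrative assumptions rather than a description of the production algorithm.

    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-10):
        """Power iteration on the random-surfer chain of a small adjacency matrix."""
        n = adj.shape[0]
        out_deg = adj.sum(axis=1)
        # Row-normalize; dangling nodes (no out-links) jump uniformly at random.
        P = np.where(out_deg[:, None] > 0,
                     adj / np.maximum(out_deg[:, None], 1), 1.0 / n)
        r = np.full(n, 1.0 / n)
        while True:
            r_new = (1 - damping) / n + damping * (r @ P)
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    # Toy web graph: 0 -> 1, 1 -> 2, 2 -> 0 and 2 -> 1.
    adj = np.array([[0, 1, 0], [0, 0, 1], [1, 1, 0]], dtype=float)
    print(pagerank(adj))  # pages pointed to by highly ranked pages score higher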
In its primary sense, Network Science involves little or no
engineering: phenomena are assumed to be “natural” and emerge
without external intervention. However, the idea of intervening in
order to modify the outcome of these phenomena quickly arises.
This is where Neo is positioned.
Beyond the mostly descriptive approach of Network Science, we aim at
using the techniques of Operations Research so as to engineer complex
networks.
To quote two examples: controlling the spread of diseases through a “network” of people is of primary interest for mankind. Similarly, controlling the spread of information or reputation through a social network is of great interest in the Internet. Indeed, given the impact of web visibility on business income, it is tempting (and quite common) to manipulate the graph of the web by adding links so as to drive the PageRank algorithm to a desired outcome.
Another interesting example is the engineering of community structures.
Recently, thousands of papers have been written on the community
detection problem.
In most of these works, researchers propose methods,
most of the time heuristics, for detecting communities or dense subgraphs
inside a large network. Much less effort has been put into understanding
the community formation process, and even less has been
dedicated to the question of how one can influence the process of community
formation, e.g., in order to increase overlap among communities and reverse
the fragmentation of society.
Our ambition for the medium term is to reach an understanding of the behavior of complex networks that will make us capable of influencing or producing a certain property in a given network. For this purpose, we will develop families of models to capture the essential structure, dynamics, and uncertainty of complex networks. The “solution” of these models will provide the correspondence between metrics of interest and model parameters, thus opening the way to the synthesis of effective control techniques.
In the process of tackling real, very large networks, we increasingly deal with large graph data analysis and the development of decision techniques with low algorithmic complexity, able to provide answers from large datasets in reasonable time.
K. Avrachenkov was invited by the Australian Mathematical Sciences Institute (AMSI) and the Australia and New Zealand Industrial and Applied Mathematics (ANZIAM) society for an AMSI-ANZIAM lecture tour, giving a keynote at the ANZIAM Conference and a series of lectures at Australian universities, February 5-23, 2023.
G. Neglia was recognized as a TPC (Technical Program Committee) Distinguished Member for the IEEE International Conference on Computer Communications (INFOCOM) and as a top reviewer for the Thirty-seventh Annual Conference on Neural Information Processing Systems (NeurIPS).
S. Perlaza was re-appointed “Visiting Research Collaborator” in the Department of Electrical and Computer Engineering at Princeton University for the academic year 2023-2024. He was also re-appointed “Associate Researcher” in the Laboratory of Algebraic Geometry and Applications to Information Theory (GAATI) at the Université de la Polynésie Française for the academic year 2023-2024.
marmote is a C++ library for modeling with Markov chains. It consists of a reduced set of high-level abstractions for constructing state spaces, transition structures, and Markov chains (discrete-time and continuous-time). It provides the ability to construct hierarchies of Markov models, from the most general to the most particular, and to equip each level with specifically optimized solution methods. The current release features the library marmoteMDP for modeling Markov Decision Processes and solving them.
This software was started within the ANR MARMOTE project (ANR-12-MONU-00019) under the name marmoteCore. Within the marmote project, the code conforms to the latest C++ standards and the library is available on multiple platforms via a conda distribution.
In 56, S. Perlaza, A. Jean-Marie and K. Sun studied classical zero-sum games under the following assumptions:
S. Perlaza, A. Jean-Marie, and E. Athanasakos are currently extending these results to more general classes of channels, in particular Gaussian channels.
In 54, E. Altman, S. Perlaza, and A. Krishnan K.S. considered different pricing models for a platform-based rental system, such as Airbnb. A linear model is assumed for the demand response to price, and existence and uniqueness conditions for Nash equilibria are obtained. The Stackelberg equilibrium prices for the game are also obtained, and an iterative scheme is provided, which converges to the Nash equilibrium. Different cooperative pricing schemes are studied, and splitting of revenues based on the Shapley value is discussed. It is shown that a division of revenue based on the Shapley value gives the platform a revenue proportional to its control of the market. The demand response function is modified to include user response to quality of service. It is shown that when the cost to provide quality of service is low, both the renter and the platform will agree to maximize the quality of service. However, if this cost is high, they may not always be able to agree on what quality of service to provide. This work was presented at a conference in 32.
In 42, R. Taisant (INOCS), M. Datar, H. Le Cadre (INOCS), and E. Altman consider a peer-to-peer electricity market modeled as a network game, where End Users (EUs) minimize their cost by computing their demand and generation while satisfying a set of local and coupling constraints. The nominal demand of EUs constitutes sensitive information that EUs might want to keep private. The authors of 42 prove that the network game admits a unique Variational Equilibrium, which depends on the private information of all the EUs. A data aggregator is introduced, which aims to learn the EUs’ private information. The EUs might have incentives to report biased and noisy readings to preserve their privacy, which creates shifts in their strategies. Relying on performative prediction, the authors define a decision-dependent game G
In 13, K. Avrachenkov in collaboration with E. Morozov and R. Nekrasova (Karelian Institute of Applied Mathematical Research, Russia) establish a stability criterion for a two-class retrial system with Poisson inputs, general class-dependent service times, and class-dependent constant retrial rates. They also characterize an interesting phenomenon of partial stability, when one orbit is tight but the other orbit goes to infinity in probability. All theoretical results are illustrated by numerical experiments.
Motivated primarily by electric vehicles (EVs) queueing at charging stations, B.R. Vinay Kumar studies multiple-server queues on a Euclidean space. He considers overloaded servers. He evaluates the expected fraction of overloaded servers in the system for the one-dimensional case (
Similarity caching allows requests for an item to be served by a similar item. Applications include recommendation systems, multimedia retrieval, and machine learning. Recently, many similarity caching policies have been proposed, like SIM-LRU (Similarity Least Recently Used) and its generalization RND-LRU (Random Least Recently Used), but the performance analysis of their hit ratio is still wanting. Y. Ben Mazziane, S. Alouf, and G. Neglia, together with D. S. Menasche (Federal Univ. of Rio de Janeiro, Brazil) are pursuing their effort to estimate the hit ratio of the similarity caching policy RND-LRU. They extend the popular time-to-live approximation in classic caching to similarity caching. They introduce the RND-TTL (Random Time-to-Live) approximation and the RND-TTL cache model and tune the model's parameters in such a way as to mimic the behavior of RND-LRU. The parameter tuning involves solving a fixed point system of equations for which they provide an algorithm for numerical resolution and sufficient conditions for its convergence.
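For readers unfamiliar with time-to-live approximations, the sketch below solves the classic fixed point for an LRU cache under independent Poisson requests (the so-called characteristic-time or “Che” approximation); it only illustrates the kind of fixed-point system involved, and the popularity law and capacity are made-up values, not the RND-TTL model of the paper.

    import numpy as np

    def characteristic_time(rates, capacity, lo=1e-9, hi=1e9, iters=200):
        """Find T such that sum_i (1 - exp(-rate_i * T)) = capacity (bisection)."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if np.sum(1.0 - np.exp(-rates * mid)) < capacity:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    rates = 1.0 / np.arange(1, 101) ** 0.8          # Zipf-like popularities
    T = characteristic_time(rates, capacity=10)
    hit_ratio = np.sum(rates * (1 - np.exp(-rates * T))) / np.sum(rates)
    print(T, hit_ratio)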
In 17, R. Dhounchak and V. Kavitha (IIT Mumbai) in collaboration with E. Altman consider the inherent timeline structure of the appearance of content in online social networks (OSNs) while studying content propagation. They model the propagation of a post/content of interest by a multi-type branching process. The latter allows one to predict the emergence of global macro properties (e.g., the spread of a post in the network) from the laws and parameters that determine local interactions. The local interactions largely depend upon the timeline structure (an inverse stack capable of holding many posts, one dedicated to each user), the number of friends (i.e., connections) of users, etc. They explore the use of multi-type branching processes to analyze the viral properties of the post, e.g., to derive the expected number of shares, the probability of virality of the content, etc. In OSNs, new posts push down the existing content in timelines, which can greatly influence content propagation; their analysis takes this influence into account. They find that ignoring the timeline (TL) structure leads to incorrect conclusions: one cannot capture some interesting paradigm shifts/phase transitions; for example, virality chances are not monotone in the network activity parameter, as shown by the analysis including the TL influence.
Classical problems such as classification, pattern recognition, regression, and density estimation can be posed as special cases of the ERM problem. Unfortunately, ERM is prone to training data memorization, a phenomenon also known as overfitting. For this reason, ERM is often regularized in order to provide generalization guarantees, that is, to identify models using available training datasets that induce low empirical risk with respect to unseen datasets. At Neo, special attention is paid to the study of the statistical properties of the solutions to ERM problems subject to particular regularizations. The main feature of this research effort is that, contrary to the mainstream in the community, the analysis is made for fixed training datasets, which provides new and insightful mathematical tools for the analysis of the generalization capabilities of machine learning algorithms.
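Schematically, with illustrative notation (L the empirical risk induced by the training dataset z, Q a reference measure on the model space, and lambda > 0 a regularization factor), the relative-entropy-regularized ERM problem studied in the works below penalizes the expected empirical risk by the relative entropy with respect to Q, and its solution is known to be a Gibbs probability measure:

    % illustrative notation, not that of the cited papers
    \min_{P}\; \int \mathsf{L}(\theta, z)\,\mathrm{d}P(\theta) \;+\; \lambda\, D(P \,\|\, Q),
    \qquad
    \frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta) \;\propto\; \exp\!\Big(-\tfrac{1}{\lambda}\,\mathsf{L}(\theta, z)\Big).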
In 53, S. Perlaza and F. Daunas, together with I. Esnaola (Univ. of Sheffield) and H.V. Poor (Princeton Univ.) present the solution to the empirical risk minimization with Neo in 20 and 28.
In 20, S. Perlaza and A. Jean-Marie together with G. Bisson (Univ. de la Polynésie française), I. Esnaola (Univ. of Sheffield), and S. Rini (National Chiao Tung Univ.) have continued the study of the ERM problem with relative entropy regularization (ERM-RER) under the assumption that the reference measure is a
In 52, S. Perlaza and F. Daunas, together with I. Esnaola (Univ. of Sheffield) and H.V. Poor (Princeton Univ.) study the effect of the relative entropy asymmetry in the empirical risk minimization with relative entropy regularization (ERM-RER) problem. A novel regularization is introduced, coined Type-II regularization, that allows for solutions to the ERM-RER problem with a support that extends outside the support of the reference measure. The solution to the new ERM-RER Type-II problem is analytically characterized in terms of the Radon-Nikodym derivative of the reference measure with respect to the solution. The analysis of the solution unveils the following properties of relative entropy when it acts as a regularizer in the ERM-RER problem: i) relative entropy forces the support of the Type-II solution to collapse into the support of the reference measure, which introduces a strong inductive bias that dominates the evidence provided by the training data; ii) Type-II regularization is equivalent to classical relative entropy regularization with an appropriate transformation of the empirical risk function. Closed-form expressions of the expected empirical risk as a function of the regularization parameters are provided. This work was presented as a conference in 28.
The expected generalization error (GE) is a central performance metric for the analysis of the generalization capabilities of machine learning algorithms. In a nutshell, the GE characterizes the ability of a learning algorithm to correctly find patterns in datasets that are not available during the training stage. Specifically, it is defined, for a fixed training dataset and a specific model instance, as the difference between the population risk induced by the model and the empirical risk with respect to the training dataset. At Neo, our research focuses on the search for closed-form expressions of the GE for specific algorithms, under the assumption that datasets follow a specific probability distribution that is consistent with the training dataset.
In 58, S. Perlaza, E. Altman, and X. Zou, together with I. Esnaola (Univ. of Sheffield) have introduced the worst-case probability measure over the data as a tool for characterizing the generalization capabilities of machine learning algorithms. More specifically, the worst-case probability measure is a Gibbs probability measure and the unique solution to the maximization of the expected loss under a relative entropy constraint with respect to a reference probability measure. Fundamental generalization metrics, such as the sensitivity of the expected loss, the sensitivity of the empirical risk, and the generalization gap are shown to have closed-form expressions involving the worst-case data-generating probability measure. Existing results for the Gibbs algorithm, such as characterizing the generalization gap as a sum of mutual information and lautum information, up to a constant factor, are recovered. A novel parallel is established between the worst-case data-generating probability measure and the Gibbs algorithm. Specifically, the Gibbs probability measure is identified as a fundamental commonality of the model space and the data space for machine learning algorithms.
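In the same illustrative notation, the worst-case data-generating measure described above solves a maximization of the expected loss under a relative entropy budget with respect to a reference measure on the data space, and is again of Gibbs type (the symbols below are placeholders, not those of the paper):

    \max_{P:\; D(P \,\|\, Q) \le c}\; \int \ell(\theta, z)\,\mathrm{d}P(z),
    \qquad
    \frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(z) \;\propto\; \exp\!\big(\beta\, \ell(\theta, z)\big),
    \quad \beta > 0 \text{ chosen to meet the constraint.}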
In 36, S. Perlaza and A. Jean-Marie together with G. Bisson (Univ. de la Polynésie française), I. Esnaola (Univ. of Sheffield), and H. V. Poor (Princeton Univ.) analytically characterized the dependence on training data of the Gibbs algorithm (GA). By adopting the expected empirical risk as the performance metric, the sensitivity of the GA is obtained in closed form. In this case, sensitivity is the performance difference with respect to an arbitrary alternative algorithm. This description enables the development of explicit expressions involving the training errors and test errors of GAs trained with different datasets. Using these tools, dataset aggregation is studied and different figures of merit to evaluate the generalization capabilities of GAs are introduced. For particular sizes of such datasets and parameters of the GAs, a connection between Jeffrey's divergence, training and test errors is established.
Online learning algorithms have been successfully used to design caching policies with regret guarantees. Existing algorithms assume that the cache knows the exact request sequence, but this may not be feasible in high-load and/or memory-constrained scenarios, where the cache may have access only to sampled requests or to approximate request counters. In 27, Y. Ben Mazziane, F. Faticanti, G. Neglia, and S. Alouf propose the Noisy-Follow-the-Perturbed-Leader (NFPL) algorithm, a variant of the classic Follow-the-Perturbed-Leader (FPL) for when request estimates are noisy, and they show that the proposed solution has sublinear regret under specific conditions on the request estimator. The experimental evaluation compares the proposed solution against classic caching policies and validates the proposed approach under both synthetic and real request traces.
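The following sketch illustrates the general follow-the-perturbed-leader template for caching with noisy counters, i.e., perturb the cumulative (estimated) request counts and cache the top items; the perturbation law, noise model, and parameters are placeholders and not the exact NFPL construction analyzed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_items, cache_size, horizon = 50, 5, 10_000
    counts = np.zeros(n_items)        # noisy cumulative request estimates
    hits = 0

    for t in range(horizon):
        # Perturb the counters and keep the top-k items in the cache.
        perturbed = counts + rng.normal(scale=np.sqrt(t + 1), size=n_items)
        cache = np.argpartition(-perturbed, cache_size)[:cache_size]
        req = rng.zipf(1.3) % n_items              # synthetic Zipf-like request
        hits += int(req in cache)
        counts[req] += 1 + rng.normal(scale=0.1)   # noisy counter update
    print("hit ratio:", hits / horizon)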
In 29, F. Faticanti and G. Neglia study a batched version of optimistic online algorithms for caching. The approach consists in updating the cache state less frequently than traditional caching algorithms, which update the cache state after every single new request. Given the high computational complexity of online algorithms based on Follow-The-Regularized-Leader (FTRL), the new approach accumulates a batch of requests before updating the cache. No-regret results are shown in this new setting, and experimental results show that the batched versions of the online algorithms outperform traditional caching policies on both synthetic and real traces.
The abstract of the paper “Enabling Long-term Fairness in Dynamic Resource Allocation” has been published in the Proceedings of the 2023 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems 40. The corresponding research activity is described in the NEO activity report for 2022.
In 35 K. Avrachenkov in collaboration with T. Pagare and V. Borkar (IIT Bombay, India) extend the provably convergent Full Gradient DQN (Deep Q-Network) algorithm for discounted reward Markov decision processes from Avrachenkov et al. (2021) to average reward problems. They experimentally compare widely used RVI (Relative Value Iteration) Q-learning with recently proposed Differential Q-learning in the neural function approximation setting with Full Gradient DQN and DQN. They also extend this to learn Whittle indices for Markovian restless multi-armed bandits and observe a better convergence rate of the proposed Full Gradient variant across different tasks.
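For context, the tabular form of RVI Q-learning for average-reward problems is sketched below; the environment interface, step size, and exploration scheme are illustrative assumptions, and the paper's contribution concerns its neural-network Full Gradient DQN counterpart.

    import numpy as np

    def rvi_q_learning(sample_step, n_states, n_actions,
                       ref=(0, 0), steps=100_000, alpha=0.05):
        """Tabular RVI Q-learning; sample_step(s, a) -> (reward, next_state)."""
        Q = np.zeros((n_states, n_actions))
        s = 0
        for _ in range(steps):
            a = np.random.randint(n_actions)     # purely exploratory behavior policy
            r, s_next = sample_step(s, a)
            # Subtracting Q at a fixed reference pair acts as the (unknown)
            # average-reward offset in the Bellman update.
            target = r - Q[ref] + Q[s_next].max()
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
        return Q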
K. Avrachenkov in collaboration with V. Borkar and J. Nair (IIT Bombay, India) have organized and edited a special volume of the Dynamic Games and Applications journal on the topic “Multi-Agent Dynamic Decision Making and Learning” 45. This field interests multiple communities, such as dynamic games, control theory, and machine learning, especially reinforcement learning. Fourteen papers have been accepted and published in this special volume.
In 37, A. Rodio, F. Faticanti, O. Marfoq, and G. Neglia in collaboration with E. Leonardi (Polytechnic Univ. of Turin, Italy) provide the first convergence result for Federated Learning algorithms under heterogeneous and correlated client availability. Their analysis shows the negative impact of correlation on the algorithms' convergence rate and highlights a trade-off between optimization error (related to convergence speed) and bias error (indicative of model quality). Their proposed Correlation-Aware FL (CA-Fed) algorithm effectively balances convergence speed and model quality by adjusting client aggregation weights and selectively excluding highly correlated, low-availability clients. In simulations, CA-Fed achieves higher time-average accuracy and reduced standard deviation compared to existing methods on both synthetic and real datasets.
Subsequently, in 21, A. Rodio, F. Faticanti, O. Marfoq, and G. Neglia in collaboration with E. Leonardi (Polytechnic Univ. of Turin, Italy) further enhance their CA-Fed algorithm by introducing a new hyper-parameter to optimize the trade-off between convergence speed and model quality. A sensitivity analysis confirms this parameter's impact on convergence. In contrast to 37, which relied on prior knowledge of clients' availability and correlation, they propose a Bayesian estimator with a beta prior, requiring only a limited amount of observations to outperform existing methods. Additionally, they address a gap in 37 by evaluating CA-Fed in spatially correlated scenarios, where the availability patterns are correlated among clients.
In 38, A. Rodio and G. Neglia in collaboration with F. Busacca and S. Palazzo (Univ. of Catania, Italy), S. Mangione and I. Tinnirello (Univ. of Palermo, Italy), and F. Restuccia (Northeastern Univ., USA) address training Federated Learning algorithms in wireless networks with packet losses. Contrary to conventional approaches that focus on mitigating packet losses through retransmission or error correction, they show that FL algorithms can effectively learn in asymmetric lossy channels while maintaining the same computational and communication efficiency. Their proposed algorithm, UPGA-PL (Unbiased Pseudo-Gradient Aggregation with Packet Losses), employs a pseudo-gradient step rather than model averaging, and adjusts the aggregation weights for heterogeneous packet losses. In experimental evaluation, UPGA-PL outperforms existing methods in lossy environments and matches Federated Learning algorithms in lossless scenarios within a limited number of communication rounds.
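The key server-side idea, reweighting the received pseudo-gradients by the clients' transmission success probabilities so that the aggregate remains unbiased, can be sketched as follows; the function name, weights, and learning rate are assumptions, not the exact UPGA-PL specification.

    import numpy as np

    def aggregate(global_model, client_models, received, p_success, weights, lr=1.0):
        """One unbiased aggregation round under packet losses (sketch).

        client_models[i]: model returned by client i after local training
        received[i]:      True if client i's update reached the server
        p_success[i]:     probability that client i's transmission succeeds
        weights[i]:       aggregation weight (e.g., proportional to data size)
        """
        pseudo_grad = np.zeros_like(global_model)
        for model, ok, p, w in zip(client_models, received, p_success, weights):
            if ok:
                # Dividing by the success probability keeps the estimator unbiased.
                pseudo_grad += (w / p) * (model - global_model)
        return global_model + lr * pseudo_grad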
Most work on federated learning assumes that clients operate on static datasets collected before training starts. This approach may be inefficient because 1) it ignores new samples clients collect during training, and 2) it may require a potentially long preparatory phase for clients to collect enough data. Moreover, learning on static datasets may be simply impossible in scenarios with small aggregate storage across devices. It is, therefore, necessary to design federated algorithms able to learn from data streams. In 34, O. Marfoq and G. Neglia, together with L. Kameni and R. Vidal (Accenture Labs, France), formulate and study the problem of federated learning for data streams. They propose a general FL algorithm to learn from data streams through an appropriately weighted empirical risk minimization. Their theoretical analysis provides insights to configure such an algorithm, and they evaluate its performance on a wide range of machine learning tasks.
In 39, C. Kaplan in collaboration with A.S. de Oliveira (SAP Labs France), Khawla Mallat (SAP Labs France), and Tanmay Chakraborty (SAP Labs France, Eurecom) investigate the impact of differential privacy on fairness notions for tabular data. They empirically analyze how different fairness notions, belonging to distinct classes of statistical fairness criteria, are impacted when one selects a model architecture suitable for DP-SGD (differentially private stochastic gradient descent), optimized for utility. Using standard datasets from ML fairness literature, they show that by selecting the optimal model architecture for DP-SGD, the differences across groups concerning the relevant fairness metrics more often decrease or are negatively impacted, compared to the non-private baseline, for which the optimal model architecture has also been selected to maximize utility. These findings challenge the understanding that differential privacy will necessarily exacerbate unfairness in deep learning models trained on biased datasets.
Since 2020, S. Perlaza in collaboration with X. Ye, I. Esnaola, and R. Harrison (Univ. of Sheffield) have studied sparse stealth attack constructions that minimize the mutual information between the state variables and the observations. In 25, the attack construction is formulated as the design of a multivariate Gaussian distribution that aims to minimize the mutual information while limiting the Kullback-Leibler divergence between the distribution of the observations under attack and the distribution of the observations without attack. The sparsity constraint is incorporated as a support constraint of the attack distribution. Two heuristic greedy algorithms for the attack construction are proposed. The first algorithm assumes that the attack vector consists of independent entries, and therefore, requires no communication between different attacked locations. The second algorithm considers correlation between the attack vector entries and achieves a better disruption to stealth tradeoff at the cost of requiring communication between different locations. Numerical evaluations show that it is feasible to construct stealth attacks that generate significant disruption with a low number of compromised sensors.
In 26 K. Avrachenkov and B.R. Vinay Kumar in collaboration with K. Alaluusua and L. Leskelä (Aalto University, Finland) consider the community recovery problem on a multilayer variant of the hypergraph stochastic block model (HSBM). Each layer is associated with an independent realization of a
In 31, 60 K. Avrachenkov and L. Hauseux in collaboration with J. Zerubia (Ayana team) propose an original density estimator built from a cloud of points
In 33 Vijith Kumar K. P. together with B. Kumar Rai and T. Jacob (IIT Guwahati, India) consider the
In 18 F. Faticanti in collaboration with M. Savi (Univ. Milano-Bicocca, Italy), F. De Pellegrini (Univ. Avignon) and D. Siracusa (Fondazione Bruno Kessler, Italy) propose a new approach for the deployment of microservice-based applications in a Federated Fog Computing scenario under locality constraints for a subset of microservices of the application. The approach is based on a Breadth-First-Search (BFS) visit of the search space for the deployment of applications where the main objective is to minimize the deployment cost towards external fog domains with respect to the main provider. Experiments show that the proposed approach outperforms traditional deployment methods based on Depth-First-Search visits of the search space.
Modern portable devices can execute increasingly sophisticated AI models on sensed data. The complexity of such processing tasks is data-dependent and has a significant energy cost. In 30, A. Fox, F. De Pellegrini (Univ. Avignon), and E. Altman develop an Age of Information Markovian model for a system where multiple battery-operated devices perform data processing and energy harvesting in parallel. Part of their computational burden is offloaded to an edge server which polls devices at a given rate. The structural properties of the optimal policy for a single device-server system are derived. They permit the derivation of a new model-free reinforcement learning method specialized for monotone policies, namely Ordered Q-Learning, providing a fast procedure to learn the optimal policy. The method is oblivious to the devices’ battery capacities, to the cost and value of data batch processing, and to the dynamics of the energy harvesting process. Finally, the polling strategy of the server is optimized by combining such policy improvement techniques with stochastic approximation methods. Extensive numerical results provide insight into the system properties and demonstrate that the proposed learning algorithms outperform existing baselines.
We have been extending our research on games in networks where users compete over resources to the case in which competition also arises between the users and the supplier of services, and further to the case where the suppliers compete with each other through the prices of their services. This bundling of services, called Network Slicing (NS), is a key technology in 5G that enables the customization and efficient sharing of network resources to support the diverse requirements of next-generation services.
In 15, M. Datar, E. Altman, and H. Le Cadre (INOCS) consider a marketplace in the context of 5G network slicing, where Application Service Providers (ASPs), i.e., slice tenants providing heterogeneous services, compete for access to the virtualized network resource owned by a Network Slice Provider (NSP), who relies on network slicing. They model the interactions between the end users (followers) and the ASPs (leaders) as a Stackelberg game. They prove that the competition between the ASPs results in a multi-resource Tullock rent-seeking game. To determine resource pricing and allocation, they devise two innovative market mechanisms.
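In its simplest single-resource form (with illustrative notation and unit exponent), a Tullock rent-seeking game allocates to each contender a share of the contested resource proportional to its bid: an ASP i bidding b_i for a resource of capacity C obtains

    x_i \;=\; C\,\frac{b_i}{\sum_{j} b_j},

which is the basic allocation rule that the multi-resource game mentioned above generalizes.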
In 16, S. Dhamal, W. Ben-Ameur, and T. Chahed (Télécom SudParis), E. Altman, A. Sunny (IIT Palakkad), and S. Poojary (Qualcomm India) study a distributed computing setting wherein a central entity seeks power from computational providers by offering a certain reward in return. The computational providers are classified into long-term stakeholders, which invest a constant amount of power over time, and players, which can strategize on their computational investment. In this paper, they model and analyze a stochastic game where players arrive and depart over time. They prove that, in Markov perfect equilibrium, only players with cost parameters in a relatively low range, which collectively satisfy a certain constraint in a given state, invest. They infer that players need not have knowledge about the system state and other players’ parameters if the total power received by the central entity is communicated to the players as part of the system’s protocol. If players are homogeneous and the system consists of a reasonably large number of players, they observe that the total power received by the central entity is proportional to the offered reward and does not vary significantly despite the players’ arrivals and departures, thus resulting in a robust and reliable system. They study, by simulations and mean-field approximation, how the players’ utilities are influenced by the system parameters.
In 51, M. Datar (currently at Orange Innovation), N. Modina (CNAM), R. El Azouzi (Univ. Avignon), and E. Altman propose an allocation scheme for network slicing based on the Fisher-market model and the Trading-post mechanism. The scheme aims to achieve efficient resource utilization while ensuring multi-level fairness under dynamic load conditions and protecting the service level agreements (SLAs) of slice tenants. In the proposed scheme, each service provider (SP) is allocated a budget representing its infrastructure share or purchasing power in the market.
The nature of fishing activities is such that marine habitats can be deteriorated when destructive fishing gear is employed. This makes the determination of sustainable fishing policies even more complex and has led some authors to propose dynamic models which take this habitat degradation into account. In 19, A. Jean-Marie and M. Tidball (INRAE) analyze in detail one of these models, an extension of the single-species Gordon–Schaefer model to two interrelated state variables: the stock of fish and the habitat. The model assumes that stock and carrying capacity are positively linked, and that the fishing activity has a direct and negative impact on the carrying capacity. The authors extend and characterize Clark's most rapid approach optimal solution to this case.
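For reference, the classical single-species Gordon–Schaefer dynamics that this model extends read (with illustrative symbols: x the stock, K the carrying capacity, r the intrinsic growth rate, E the fishing effort, and q the catchability coefficient)

    \dot{x}(t) \;=\; r\,x(t)\Big(1 - \frac{x(t)}{K}\Big) \;-\; q\,E(t)\,x(t),

the extension studied in 19 treating the carrying capacity as a second state variable, negatively affected by the fishing activity.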
Neo has contracts with
Accenture (see §8.1.1 and §8.1.2),
NSP SmartProfile (see §8.1.3),
Orange Labs (see §8.1.4),
QITI (see §8.1.6),
and SAP (see §8.1.5).
IoT applications will become one of the main sources of data to train data-greedy machine learning models. Until now, IoT applications were mostly about collecting data from the physical world and sending them to the Cloud. Google’s federated learning already enables mobile phones, or other devices with limited computing capabilities, to collaboratively learn a machine learning model while keeping all training data locally, decoupling the ability to do machine learning from the need to store the data in the cloud. While Google envisions only users’ devices, it is possible that part of the computation is executed at other intermediate elements in the network. This new paradigm is sometimes referred to as Edge Computing or Fog Computing. Model training as well as serving (providing machine learning predictions) are going to be distributed between IoT devices, cloud services, and other intermediate computing elements like servers close to base stations, as envisaged by the Multi-Access Edge Computing framework. The goal of this project is to propose distributed learning schemes for the IoT scenario, taking into account in particular its communication constraints. O. Marfoq is funded by this project. A first 12-month pre-PhD contract has been followed by a PhD grant.
Deep neural networks have enabled impressive accuracy improvements across many machine learning tasks. Often the highest scores are obtained by the most computationally-hungry models. As a result, training a state-of-the-art model now requires substantial computational resources which demand considerable energy, along with the associated economic and environmental costs. Research and development of new models multiply these costs by thousands of times due to the need to try different model architectures and different hyper-parameters. In this project, we investigate a more algorithmic/system-level approach to reduce energy consumption for distributed ML training over the Internet. The postdoc of C. Rodriguez is funded by this project.
SmartProfile is a marketing platform that allows one to collect, enhance, and analyze marketing data. Digital marketing campaigns continue to expand across all digital channels and media. The 'mass marketing' strategies implemented by most companies show limits in terms of performance and acceptance by clients, as well as in terms of their impact on the environment. In contrast to these practices, we believe that current technologies, particularly in terms of Artificial Intelligence (AI), should make marketing interactions more efficient and virtuous. Through this research project, we want to create an alternative solution to mass marketing by switching to an intelligent, automated, and eco-responsible system, which will support the heterogeneity of data and the diversity of sectors, and whose purpose is to recommend the best content by determining the most relevant target and taking into account the communication constraints. This contract complements the Cifre thesis of Ibtihal El Mimouni.
A Reconfigurable Intelligent Surface (RIS) is a programmable surface structure that allows one to control the reflection of electromagnetic (EM) waves by changing the electric and magnetic properties of the surface. In the absence of an RIS, short-wavelength signals, as in 5G, are subject to huge attenuation when there is no direct line-of-sight channel. Within our collaboration we shall evaluate and optimize the position of the RIS.
This contract complements the Cifre thesis of J. Santos.
There are increasing concerns among scholars and the public about bias, discrimination, and fairness in AI and machine learning. Decision support systems may present biases, leading to unfair treatment of some categories of individuals, for instance, systematically assigning a high risk of recidivism in a criminal offense analysis system. Essentially, the analysis of whether an algorithm’s output is fair (e.g., does not disadvantage a group with respect to others) depends on substantial contextual information that often requires human intervention. Several fairness metrics have nonetheless been developed, which may rely on collecting additional sensitive attributes (e.g., ethnicity), thus requiring strong privacy guarantees before they can be used in any situation. It is known that differential privacy has the effect of hiding outliers from the data analysis, perhaps compounding existing bias in certain situations. This project encompasses the search for a mitigating strategy. This contract complements the Cifre thesis of C. Kaplan.
Qiti is a start-up created in Nice in 2021 which, among other things, develops a Conversational Recommender System (CRS) for insurance holders and insurers. The CRS should reduce the load on the insurers' workers and simplify the process of insurance establishment and modification. The goal of the present cooperation is to test and improve various Reinforcement Learning schemes for the CRS. The post-doc of H. Manjunath is funded by this contract.
In many use-cases of Machine Learning (ML), data is naturally decentralized: medical data is collected and stored by different hospitals, crowdsensed data is generated by personal devices, etc. Federated Learning (FL) has recently emerged as a novel paradigm where a set of entities with local datasets collaboratively train ML models while keeping their data decentralized.
FedMalin is a research project that spans 10 Inria research teams and aims to push FL research and concrete use-cases through a multidisciplinary consortium involving expertise in ML, distributed systems, privacy and security, networks, and medicine. We propose to address a number of challenges that arise when FL is deployed over the Internet, including privacy and fairness, energy consumption, personalization, and location/time dependencies. FedMalin will also contribute to the development of open-source tools for FL experimentation and real-world deployments, and use them for concrete applications in medicine and crowdsensing. The FedMalin Inria Challenge is supported by Groupe La Poste, sponsor of the Inria Foundation.
K. Avrachenkov in collaboration with V. Borkar and J. Nair (IIT Bombay, India) have organized and edited a special volume of Dynamic Games and Applications journal on the topic “Multi-Agent Dynamic Decision Making and Learning” 45. See Section 7.3.4.
NEO members regularly perform reviews for journals such as IEEE/ACM Transactions on Networking, IEEE Transactions on Information Theory, IEEE Transactions on Wireless Communications, IEEE Transactions on Communications, IEEE Transactions on Network and Service Management, IEEE Transactions on Network Science and Engineering, Performance Evaluation, Elsevier Computer Communications, Elsevier Computer Networks.
S. Alouf was invited to write a technical perspective on a paper appearing in the Research Highlights section of the Communications of the ACM 59.