The statify team focuses on statistics.
Statistics can be defined as a science of variation where the main question is how to acquire knowledge in the face of variation.
In the past, statistics was seen as an opportunity to play in various backyards. Today, statisticians see their own backyard invaded by data scientists, machine learners and other computer scientists of all kinds. Everyone wants to do data analysis and some (but not all) do it very well.
Generally, data analysis algorithms and the associated network architectures are validated empirically on domain-specific datasets and data challenges. While winning such challenges is certainly rewarding, statistical validation rests on more fundamental grounds and yields interesting theoretical, algorithmic and practical insights.
Statistical questions can be converted to probability questions by the use of probability models. Once certain assumptions about the mechanisms generating the data are made, statistical questions can be answered using probability theory. However, the proper formulation and checking of these probability models is just as important, or even more important, than the subsequent analysis of the problem using these models. The first question is then how to formulate and evaluate probabilistic models for the problem at hand. The second question is how to obtain answers after a certain model has been assumed. This latter task is more a matter of applied probability theory and, in practice, involves optimization and numerical analysis.
The statify team aims to bring its strengths to bear at a time when the number of solicitations received by statisticians is increasing considerably with the successive waves of big data, data science and deep learning. The difficulty is to back up our approaches with reliable mathematics, while what we have is often only empirical observations that we are not able to explain. Guiding data analysis with statistical justification is a challenge in itself.
statify has the ambition to play a role in this task and to provide answers to questions about the appropriate usage of statistics.
Often statistical assumptions do not hold. Under what conditions then can we use statistical methods to obtain reliable knowledge? These conditions are rarely the natural state of complex systems. The central motivation of statify is to establish the conditions under which statistical assumptions and associated inference procedures approximately hold and become reliable.
However, as George Box said "Statisticians and artists both suffer from being too easily in love with their models". To moderate this risk, we choose to develop, in the team, expertise from different statistical domains to offer different solutions to attack a variety of problems. This is possible because these domains share the same mathematical food chain, from probability and measure theory to statistical modeling, inference and data analysis.
Our goal is to exploit methodological resources from statistics and machine learning to develop models that handle variability and that scale to high dimensional data while maintaining our ability to assess their correctness, typically the uncertainty associated with the provided solutions. To reach this goal, the team offers a unique range of expertise in statistics, combining probabilistic graphical models and mixture models to analyze structured data, Bayesian analysis to model knowledge and regularize ill-posed problems, non-parametric statistics, risk modeling and extreme value theory to face the lack, or impossibility, of precise modeling information and data. In the team, this expertise is organized to target five key challenges:
The first two challenges address sources of complexity coming from the data themselves, namely the fact that observations can be: 1) high dimensional, collected from multiple sensors in varying conditions, i.e. multimodal and heterogeneous, and 2) inter-dependent, with a known structure between variables or with unknown interactions to be discovered. The other three challenges focus on providing reliable and interpretable models: 3) making the Bayesian approach scalable to handle large and complex data; 4) quantifying the information-processing properties of machine learning methods; and 5) drawing reliable conclusions from datasets that are too small to be used for training machine/deep learning methods.
These challenges are addressed through our four research axes, described in the following sections.
In terms of applied work, we will target high-impact applications in neuroimaging, environmental and earth sciences.
Graphs arise naturally as versatile structures for capturing the intrinsic organization of complex datasets. The literature on graphical modeling is growing rapidly and covers a wide range of applications, from bioinformatics to document modeling, image analysis, social network analysis, etc. When faced with multivariate, possibly high dimensional, data acquired at different sites (or nodes) and structured according to an underlying network (or graph),
the objective is generally to understand the dependencies or associations present in the data so as to provide a more accurate statistical analysis and a better understanding of the phenomenon under consideration.
A first task is the inference of the dependences that exist between variables from observed samples. The limits of recovering graph edges from sample correlations between nodes are well known. We have investigated alternative approaches, both Bayesian and frequentist: the former were used to account for constraints on the structure, while with the latter we focused on robust modeling and estimation in the presence of outliers. In the PhD thesis of T. Rahier with Schneider Electric, we proposed a fast Bayesian structure learning method based on the pre-screening of categorical variables.
In the continuous variable case, we studied the design of tractable estimators and algorithms that can provide robust estimation of covariance structures. Many covariance estimation methods rely on the Gaussian graphical model, but a viable model for data contaminated by outliers requires more robust and complex procedures and is therefore more challenging to build. The problem of robust structure learning is especially acute in the high-dimensional setting, in which the number of variables can exceed the number of observations.
Once the structure is identified, the following questions concern comparing the discovered graph structures with each other, or with respect to a reference graph. If the structure is not itself the object of interest, the goal is usually to account for it in a subsequent analysis. Except for simple graphs (chains or trees), this is problematic because mainstream statistical models and algorithms rely on independence assumptions and become intractable for even moderate graph sizes. The analysis of graphs as the objects of interest, with the design of tools to model and compare them, was studied in the PhD of L. Carboni. We proposed new mathematical tools based on an equivalence relation between graph statistics, in order to take into account the spatial location of the nodes.

To account for dependences in a tractable way we often rely on Markov modelling and variational inference. When dependence in time is considered, Gaussian processes are an interesting tractable tool. With the PhD of A. Constantin, we investigated them in the context of a collaboration with INRAE and CNES in Toulouse, for the classification and reconstruction of irregularly sampled satellite image time series. The proposed approach deals with irregular temporal sampling and missing data directly in the classification process. It is based on Gaussian processes and performs jointly the classification of the pixel labels and the reconstruction of the pixel time series. The method complexity scales linearly with the number of pixels, making it amenable to large-scale scenarios. In a different context, we have developed hidden semi-Markov models for the analysis of eye movements, in particular with the PhD of B. Olivier in collaboration with A. Guérin-Dugué (GIPSA-lab) and B. Lemaire (Laboratoire de Psychologie et Neurocognition). New coupling methods for hidden semi-Markov models driven by several underlying state processes have been proposed.
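To give a concrete flavour of the Gaussian process ingredient mentioned above, the following minimal sketch reconstructs one irregularly sampled pixel time series with a standard GP regressor; the kernel, acquisition dates and reflectance values are illustrative assumptions, not the model developed in A. Constantin's thesis.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Irregular acquisition dates (in days) and noisy reflectance values for one pixel
t_obs = np.sort(rng.uniform(0, 365, size=25))[:, None]
y_obs = np.sin(2 * np.pi * t_obs.ravel() / 365) + 0.1 * rng.standard_normal(25)

# Smooth kernel plus a noise term; hyperparameters are fitted by marginal likelihood
kernel = 1.0 * RBF(length_scale=60.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_obs, y_obs)

# Reconstruct the series on a regular grid, with pointwise uncertainty
t_grid = np.linspace(0, 365, 200)[:, None]
mean, std = gp.predict(t_grid, return_std=True)
print(mean[:3].round(2), std[:3].round(2))
```

The posterior standard deviation quantifies the reconstruction uncertainty at unobserved dates, which is what makes the GP a convenient building block for handling missing data.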
The vast majority of deep learning architectures for medical image analysis are based on supervised models requiring the collection of large datasets of annotated examples. Building such annotated datasets, which requires skilled medical experts, is time consuming and hardly achievable, especially for some specific tasks, such as the detection of small and subtle lesions that are sometimes impossible to detect visually and thus to outline manually. This critical aspect significantly impairs the performance of supervised models and hampers their deployment in clinical neuroimaging applications, especially for brain pathologies that require the detection of small lesions (e.g. multiple sclerosis, microbleeds) or subtle structural or morphological changes (e.g. Parkinson's disease).
We have developed unsupervised anomaly detection methods based on generalized Student mixture models and on a deep statistical unsupervised learning model for the detection of early forms of Parkinson's disease.
We have also compared parametric mixture approaches to non-parametric machine learning techniques for change detection, in the context of time series analysis of glycemic curves for diabetes.
Extracting information from raw data is a complex task, all the more so as this information is measured in a high dimensional space. Fortunately, this information usually lives in a subspace of smaller size. Identifying this subspace is crucial but difficult. One approach is to perform appropriate changes of representation that facilitate the identification and characterization of the desired subspace. Latent random variables are a key concept to encode in a structured way representations that are easier to handle and capture the essential features of the data.
Methods adapted to high dimensions include inverse regression methods, e.g. sliced inverse regression (SIR) and partial least squares (PLS), and approaches based on mixtures of regressions with different variants, e.g. Gaussian locally linear mapping (GLLiM) and its extensions, mixtures of experts, cluster weighted models, etc.
SIR-like methods are flexible in that they reduce the dimension in a way that is optimal for the subsequent regression task, which can itself be carried out by any desired regression tool. In that sense these methods are said to be non-parametric or semi-parametric, and they have the potential to provide robust procedures. We have also proposed a new approach, called Extreme-PLS, for dimension reduction in conditional extreme value settings, where the goal is to best explain the extreme values of the response variable.
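As a reminder of what inverse regression methods compute in practice, here is a textbook-style sketch of SIR (not the team's code); the slicing scheme and the synthetic single-index example are assumptions chosen only for illustration.

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_components=2):
    """Textbook sliced inverse regression: estimate e.d.r. directions."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # Whitening via the inverse square root of the sample covariance
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = Xc @ W
    # Slice on the response and average the standardized predictors per slice
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    M = np.zeros((p, p))
    for idx in slices:
        m = Z[idx].mean(axis=0)
        M += (len(idx) / n) * np.outer(m, m)
    # Leading eigenvectors of M, mapped back to the original scale
    _, vecs = np.linalg.eigh(M)
    return W @ vecs[:, -n_components:]

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 6))
y = (X[:, 0] + 2 * X[:, 1]) ** 2 + 0.1 * rng.standard_normal(500)
print(sir_directions(X, y).shape)  # (6, 2) estimated directions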
To account for uncertainty in a principled manner, we also considered Bayesian inversion techniques.
We investigated the use of learning approaches to handle Bayesian inverse problems in a computationally efficient way when the observations to be inverted have a moderately high number of dimensions and are available in large number. We proposed tractable inverse regression approaches, based on GLLiM and normalizing flows, which have the advantage of producing full probability distributions as approximations of the target posterior distributions. These distributions have several interesting features. They provide confidence indices on the predictions and can be combined with importance sampling or approximate Bayesian computation (ABC) schemes for a better exploration when multiple equivalent solutions exist. They generalise easily to variants that can handle non-Gaussian data and dependent or missing observations. The relevance of the proposed approach has been illustrated on synthetic examples and on two real data applications, in the context of planetary remote sensing and neuroimaging.
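The following deliberately simplified sketch illustrates the inverse regression principle behind GLLiM-type approaches: fit a joint Gaussian mixture on simulated (parameter, observation) pairs, then condition on a new observation to obtain a posterior approximation in closed form as a Gaussian mixture. The toy forward model and the number of components are assumptions; the team's actual models are richer.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
theta = rng.uniform(-3, 3, size=(5000, 1))                  # parameters
y = np.sin(theta) + 0.05 * rng.standard_normal((5000, 1))   # toy forward model + noise

# Fit a joint Gaussian mixture on (theta, y); each component acts as a local linear model
gmm = GaussianMixture(n_components=20, covariance_type="full", random_state=0)
gmm.fit(np.hstack([theta, y]))

def posterior_mixture(y_star, d_theta=1):
    """Gaussian-mixture approximation of p(theta | y_star) by Gaussian conditioning."""
    weights, means, covs = [], [], []
    for pi_k, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_t, mu_y = mu[:d_theta], mu[d_theta:]
        S_tt, S_ty = S[:d_theta, :d_theta], S[:d_theta, d_theta:]
        S_yt, S_yy = S[d_theta:, :d_theta], S[d_theta:, d_theta:]
        gain = S_ty @ np.linalg.inv(S_yy)
        means.append(mu_t + gain @ (y_star - mu_y))
        covs.append(S_tt - gain @ S_yt)
        weights.append(pi_k * multivariate_normal.pdf(y_star, mu_y, S_yy))
    weights = np.array(weights) / np.sum(weights)
    return weights, np.array(means), np.array(covs)

w, m, C = posterior_mixture(np.array([0.5]))
print("posterior mean of theta:", float(w @ m[:, 0]))
```

Because the approximation is a full mixture, multiple equivalent solutions show up as several posterior modes, and the mixture can serve as a proposal for importance sampling or ABC refinements.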
In addition, we addressed the issue of model selection for some of the GLLiM models, i.e. Mixture of experts (MoE) models and contributed to a number of theoretical results.
Most simulation-based inference (SBI) methods scale poorly when the number of observations is large, which makes them unsuitable for modern data, which are often acquired in real time, arrive incrementally, and are often available in large volume.
Computation of inferential quantities in an incremental manner may be forcibly imposed by the nature of data acquisition (e.g. streaming and sequential data) but may also be seen as a solution to handle larger data volumes in a more resource friendly way, with respect to memory, energy, and time consumption.
To produce feasible and practical online algorithms for streaming data and complex models, we have investigated the family of stochastic approximation (SA) algorithms combined with
the class of majorization-minimization (MM) and expectation-maximization (EM) algorithms for a certain class of models, e.g., exponential family distributions and their mixtures.
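As an illustration of this SA/EM combination, the sketch below runs an online EM on a stream of draws from a two-component Gaussian mixture, updating running sufficient statistics with a decreasing step size; the model, initialization and step-size schedule are illustrative choices, not the team's algorithms.

```python
import numpy as np

rng = np.random.default_rng(3)
K = 2
# Streaming data from a two-component univariate Gaussian mixture
stream = np.where(rng.random(20000) < 0.4,
                  rng.normal(-2.0, 0.5, 20000), rng.normal(1.0, 1.0, 20000))

# Initial parameters and running sufficient statistics (weights, means, variances)
w, mu, var = np.full(K, 1 / K), np.array([-1.0, 0.5]), np.ones(K)
s0, s1, s2 = w.copy(), w * mu, w * (var + mu ** 2)

for t, x in enumerate(stream, start=1):
    # E-step on the current point: responsibilities under the current parameters
    logp = -0.5 * ((x - mu) ** 2 / var + np.log(2 * np.pi * var)) + np.log(w)
    r = np.exp(logp - logp.max()); r /= r.sum()
    # Stochastic-approximation update of the expected sufficient statistics
    rho = (t + 10) ** -0.6                       # step size with exponent in (1/2, 1]
    s0 = (1 - rho) * s0 + rho * r
    s1 = (1 - rho) * s1 + rho * r * x
    s2 = (1 - rho) * s2 + rho * r * x ** 2
    # M-step: parameters as closed-form functions of the running statistics
    w = s0 / s0.sum()
    mu = s1 / s0
    var = np.maximum(s2 / s0 - mu ** 2, 1e-6)

print(np.round(w, 2), np.round(mu, 2), np.round(np.sqrt(var), 2))
```

Only the low-dimensional sufficient statistics are stored, which is what makes this family of algorithms memory- and energy-friendly for streaming data.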
Bayesian methods have become a central tool to model the uncertainty inherent in statistical models. Bayesian models and methods are already used in all of our other axes, whenever the Bayesian choice provides interesting features, e.g. for model selection, dependence modeling (copulas), inverse problems, etc.
This axis emphasizes more specifically our theoretical and methodological research in Bayesian learning. In particular, we will focus on techniques referred to as Bayesian nonparametrics (BNP).
We have proposed Bayesian nonparametric priors for hidden Markov random fields, first for continuous Gaussian observations, with an illustration in image segmentation, and second for discrete observations typically arising from counts, e.g. Poisson distributed observations, with an illustration on a risk mapping model. The inference was carried out by Variational Bayesian Expectation Maximization (VBEM).
A common way to assess a Bayesian procedure is to study the asymptotic behavior of posterior distributions, that is their ability to estimate a true distribution when the number of observations grows. Mixture models have attracted a lot of attention in the last decade due to some negative results regarding the number of clusters. More specifically, it was shown that Bayesian nonparametric mixture models are inconsistent for some choices of priors. We proposed ways to compute the prior distribution of the number of clusters. This is a notoriously difficult task, and we proposed approximations in order to enable such computations for real-world applications. We studied and justified BNP models based on their asymptotic properties. We showed that mixture models based on many different BNP processes are inconsistent in the number of clusters and discuss possible solutions. Notably, we showed that a post-processing algorithm introduced for the simplest process (Dirichlet process) extends to more general models and provides a consistent method to estimate the number of components.
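For the Dirichlet process specifically, the prior on the number of clusters can at least be explored directly: its mean is available in closed form and its distribution can be simulated through the Chinese restaurant representation. The sketch below is this generic computation, not the approximations proposed in our work.

```python
import numpy as np

def dp_expected_clusters(alpha, n):
    """Closed-form prior mean of the number of clusters under a DP(alpha)."""
    return np.sum(alpha / (alpha + np.arange(n)))

def crp_num_clusters(alpha, n, rng):
    """One draw of the number of clusters via the Chinese restaurant process."""
    counts = []
    for i in range(n):
        probs = np.append(counts, alpha) / (alpha + i)   # existing tables, then a new one
        k = rng.choice(len(probs), p=probs)
        if k == len(counts):
            counts.append(1)
        else:
            counts[k] += 1
    return len(counts)

rng = np.random.default_rng(4)
alpha, n = 1.0, 200
draws = [crp_num_clusters(alpha, n, rng) for _ in range(1000)]
print("exact prior mean:", round(dp_expected_clusters(alpha, n), 2),
      "| Monte Carlo:", round(np.mean(draws), 2))
```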
Approximate Bayesian computation (ABC) has become an essential part of the Bayesian toolbox for addressing problems in which the likelihood is prohibitively expensive or entirely unknown. A key ingredient in ABC is the choice of a discrepancy that describes how different the simulated and observed data are, often based on a set of summary statistics when the data cannot be compared directly. The choice of the appropriate discrepancies is an active research topic, which has mainly considered data discrepancies requiring samples of observations or distances between summary statistics. We have first investigated sample-based discrepancies and established new asymptotic results using so-called energy-based distances. We have then considered a summary-based approach and proposed a new ABC procedure that can be seen as an extension of the semi-automatic ABC framework to a functional summary statistics setting and can also be used as an alternative to sample-based approaches. The resulting ABC approach also exhibits amortization properties via the use of the GLLiM inverse regression model.
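A minimal sample-based ABC illustration with an energy distance discrepancy is given below; the toy Gaussian model, prior and acceptance rule are assumptions chosen only to keep the example self-contained.

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between two univariate samples."""
    xy = np.abs(x[:, None] - y[None, :]).mean()
    xx = np.abs(x[:, None] - x[None, :]).mean()
    yy = np.abs(y[:, None] - y[None, :]).mean()
    return 2 * xy - xx - yy

rng = np.random.default_rng(5)
y_obs = rng.normal(1.5, 1.0, size=100)            # observed sample, true mean 1.5

# Rejection ABC: simulate from the prior, keep the simulations closest to the data
n_sim = 2000
theta = rng.normal(0.0, 3.0, size=n_sim)          # prior on the unknown mean
dists = np.array([energy_distance(rng.normal(t, 1.0, 100), y_obs) for t in theta])
keep = dists <= np.quantile(dists, 0.02)          # 2% acceptance rate
print("ABC posterior mean:", round(theta[keep].mean(), 2))
```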
The connection between Bayesian neural networks and Gaussian processes has gained a lot of attention in the last few years, with the flagship result that hidden units converge to a Gaussian process limit when the layer width tends to infinity. Underpinning this result is the fact that hidden units become independent in the infinite-width limit. Our aim is to shed some light on the dependence properties of hidden units in practical finite-width Bayesian neural networks. In addition to theoretical results, we assessed empirically the impact of depth and width on these dependence properties. Moreover, recent work has suggested that finite Bayesian neural networks may outperform their infinite counterparts because they adapt their internal representations flexibly. To establish solid ground for future research on finite-width neural networks, our goal is to study the prior induced on hidden units. Our main result is an accurate description of hidden unit tails, showing that unit priors become heavier-tailed going deeper, thanks to the introduced notion of generalized Weibull-tail distributions. This finding sheds light on the behavior of hidden units in finite Bayesian neural networks.
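The heavier-tails-with-depth behaviour can be observed with a short simulation: draw finite-width networks from a Gaussian prior, propagate a fixed input, and track the excess kurtosis of one pre-activation per layer across prior draws (zero for a Gaussian). The architecture, ReLU activation and kurtosis proxy are illustrative assumptions, not the setting of our theoretical results.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(6)
width, depth, n_draws = 50, 5, 5000
x = rng.standard_normal(width)                    # one fixed network input

# For each prior draw of the weights, record one pre-activation per layer
units = np.zeros((n_draws, depth))
for d in range(n_draws):
    h = x
    for layer in range(depth):
        W = rng.standard_normal((width, width)) / np.sqrt(width)  # Gaussian prior
        pre = W @ h                               # pre-activations of this layer
        units[d, layer] = pre[0]                  # track the first unit
        h = np.maximum(pre, 0.0)                  # ReLU

# Excess kurtosis across prior draws, per layer; it tends to grow with depth
print(np.round(kurtosis(units, axis=0), 2))
```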
Extreme events have a major impact on a wide variety of domains, from environmental sciences (heat waves, flooding) and reliability to finance and insurance (financial crashes, reinsurance). While usual statistical approaches focus on modeling the bulk of the distribution, extreme-value analysis, a relatively recent domain in statistics, aims at building models adapted to distribution tails, where observations are by nature rare.
One of the most popular risk measures is the Value-at-Risk (VaR) introduced in the 1990's. In statistical terms, the VaR at level α in (0, 1) corresponds to the quantile of order α of the distribution of the quantity of interest.
A simple way to assess the (environmental, industrial or financial) risk is to compute a measure linked to
the value of the phenomena of interest (rainfall height, wind speed, river flow). Candidate measures include quantiles (which correspond to traditional Value at Risk or return levels),
expectiles, tail conditional moments, spectral risk measures, distortion risk measures, etc.
We have mainly focused on the first two measures, quantiles and expectiles, and investigated estimation procedures for extensions of these measures.
The main drawback of quantiles is that they do not provide a coherent risk measure. Two distributions may have the same extreme quantile but very different tail behaviors. Moreover, standard estimators do not use the most extreme values of the
sample and consequently induce a loss of information. Our strategy was to adapt the definition of quantiles to take into account the whole distribution tail.
We have introduced new measures of extreme risk based on Lp-quantiles, which encompass both quantiles and expectiles as special cases.
A second challenge was to extend this concept to the regression framework where the variable of interest depends on a set of covariates. When the number of covariates is large, two research directions have been explored to overcome the curse of dimensionality: 1) we designed a dimension reduction method for the extreme-value context, 2) we also considered semi-parametric models to reduce the complexity of the fitted model.
Another challenge with expectiles is that their
sample versions do not benefit from a simple explicit form, making their analysis significantly harder
than that of quantiles and order statistics. This difficulty is compounded when one wishes to integrate
auxiliary information about the phenomenon of interest through a finite-dimensional covariate, in which
case the problem becomes the estimation of conditional expectiles.
We exploited the fact that the expectiles of a distribution are in fact the quantiles of another distribution explicitly linked to the former one, in order to construct nonparametric kernel estimators of extreme conditional expectiles. We analyzed the asymptotic properties of our estimators in the context of conditional heavy-tailed distributions. The extension to functional covariates was also investigated.
Since quantiles and expectiles belong to the wider family of Lp-quantiles, the estimation of extreme Lp-quantiles has also been considered.
We built a general theory for the estimation of extreme conditional expectiles in heteroscedastic regression models with heavy-tailed noise. Our approach is supported by general results of independent interest on residual-based extreme value estimators in heavy-tailed regression models, and is intended to cope with covariates having a large but fixed dimension. We demonstrated how our results could be applied to a wide class of important examples, among which linear models, single-index models as well as ARMA and GARCH time series models.
A more recent collaboration with E. Gobet from CMAP addresses the generation of extreme values with neural networks. Feedforward neural networks based on Rectified Linear Units (ReLU) cannot efficiently approximate quantile functions that are not bounded, especially in the case of heavy-tailed distributions. We have thus proposed a new parametrization for the generator of a Generative Adversarial Network (GAN) adapted to this framework, based on extreme-value theory. We provided an analysis of the uniform error between the extreme quantile and its GAN approximation. It appears that the rate of convergence of the error is mainly driven by the second-order parameter of the data distribution. A similar investigation has been conducted to simulate fractional Brownian motion with ReLU neural networks.
As regards applications, several areas of image analysis can be covered using the tools developed in the team. More specifically, in collaboration with the Perception team, we address various issues in computer vision involving Bayesian modelling and probabilistic clustering techniques. Other applications in medical imaging are natural. We work more specifically on MRI and functional MRI data, in collaboration with the Grenoble Institute of Neuroscience (GIN). We also consider other statistical 2D fields coming from other domains such as remote sensing, in collaboration with the Institut de Planétologie et d'Astrophysique de Grenoble (IPAG) and the Centre National d'Etudes Spatiales (CNES). In this context, we worked on hyperspectral and/or multitemporal images. In the context of the "pôle de compétitivité" project I-VP, we worked on images of PC boards.
A third domain of applications concerns biology and medicine. We considered the use of mixture models to identify biomarkers. We also investigated statistical tools for the analysis of fluorescence signals in molecular biology. Applications in neurosciences are also considered. In the environmental domain, we considered the modelling of high-impact weather events and the use of hyperspectral data as a new tool for quantitative ecology.
The footprint of our research activities has not been assessed yet. Most of the team members have validated the “charte d'éco-responsabilité” written by a working group from Laboratoire Jean Kuntzmann, which should have practical implications in the near future.
A lot of our developments are motivated by and target applications in medicine and environmental sciences. As such they have a social impact with a better handling and treatment of patients, in particular with brain diseases or disorders. On the environmental side, our work has an impact on geoscience-related decision making with e.g. extreme events risk analysis, planetary science studies and tools to assess biodiversity markers. However, how to truly measure and report this impact in practice is another question we have not really addressed yet.
Joint work with: Senan Doyle from Pixyl.
The volume of a brain lesion (e.g. infarct or tumor) is a powerful indicator of patient prognosis and can be used to guide the therapeutic strategy. Lesion volume estimation is usually performed by segmentation with deep convolutional neural networks (CNN), currently the state-of-the-art approach. However, to date, little work has been done to equip volume segmentation tools with adequate quantitative predictive intervals, which can hinder their usefulness and acceptance in clinical practice. In this work, we propose TriadNet, a segmentation approach relying on a multi-head CNN architecture, which provides both the lesion volumes and the associated predictive intervals simultaneously, in less than a second. We demonstrate its superiority over other solutions on BraTS 2021, a large-scale MRI glioblastoma image database.
Joint work with: Senan Doyle from Pixyl.
Deep Learning models are easily disturbed by variations in the input images that were not observed during the training stage, resulting in unpredictable predictions. Detecting such Out-of-Distribution (OOD) images is particularly crucial in the context of medical image analysis, where the range of possible abnormalities is extremely wide. Recently, a new category of methods has emerged, based on the analysis of the intermediate features of a trained model. These methods can be divided into two groups: single-layer methods, which consider the feature map obtained at a fixed, carefully chosen layer, and multi-layer methods, which consider the ensemble of the feature maps generated by the model. While promising, a proper comparison of these algorithms was still lacking. In this work, we compared various feature-based OOD detection methods on a large spectrum of OOD conditions (20 types), representing approximately 7800 3D MRIs. Our experiments shed light on two phenomena. First, multi-layer methods consistently outperform single-layer approaches, which tend to have inconsistent behaviour depending on the type of anomaly. Second, the OOD detection performance highly depends on the architecture of the underlying neural network.
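To make "single-layer, feature-based" concrete, the sketch below scores test samples by their Mahalanobis distance to a Gaussian fitted on the training features of one layer, a common baseline in this family; the feature vectors here are synthetic stand-ins rather than real network activations.

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-ins for intermediate feature vectors extracted from a trained model
feats_train = rng.multivariate_normal(np.zeros(16), np.eye(16), size=1000)
feats_test_in = rng.multivariate_normal(np.zeros(16), np.eye(16), size=5)
feats_test_ood = rng.multivariate_normal(np.full(16, 3.0), np.eye(16), size=5)

# Fit a Gaussian to the in-distribution features of one chosen layer
mu = feats_train.mean(axis=0)
prec = np.linalg.inv(np.cov(feats_train, rowvar=False))

def ood_score(f):
    """Mahalanobis distance to the training feature distribution."""
    d = f - mu
    return float(np.sqrt(d @ prec @ d))

print([round(ood_score(f), 1) for f in feats_test_in])   # small scores
print([round(ood_score(f), 1) for f in feats_test_ood])  # large scores
```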
Joint work with: Senan Doyle, Pauline Roca from Pixyl.
The burden of liver tumors is heavy: they rank as the fourth leading cause of cancer mortality. In the case of hepatocellular carcinoma (HCC), the delineation of the liver and tumors on contrast-enhanced magnetic resonance imaging (CE-MRI) is performed to guide the treatment strategy. As this task is time-consuming, requires high expertise and can be subject to inter-observer variability, there is a strong need for automatic tools. However, challenges arise from the lack of available training data, as well as the high variability in terms of image resolution and MRI sequence. In this work we compare two different pipelines based on anisotropic models to obtain the segmentation of the liver and tumors. The first pipeline corresponds to a baseline multi-class model that performs the simultaneous segmentation of the liver and tumor classes. In the second approach, we train two distinct binary models, one segmenting the liver only and the other the tumors. Our results show that both pipelines exhibit different strengths and weaknesses. Moreover, we propose an uncertainty quantification strategy allowing the identification of potential false positive tumor lesions. Both solutions were submitted to the MICCAI 2023 Atlas challenge on liver and tumor segmentation.
Joint work with: Senan Doyle, Alan Tucholka from Pixyl.
Deep Learning (DL) models are presently the gold standard for medical image segmentation. However, their performance may drastically drop in the presence of characteristics in test images not present in the training set. The automatic detection of these Out-Of-Distribution (OOD) inputs is the key to preventing the silent failure of DL models, especially when the visual inspection of the input is not systematically carried out. For MRI segmentation, a wide range of covariables can perturb a DL model: noise, artifacts or MR sequence parameters. Deterministic Uncertainty Methods (DUM) are novel and promising techniques for OOD detection. They propose to analyze the intermediate activations of a trained segmentation DL model to detect OOD inputs. In a previous study, we demonstrated that DUM achieved high OOD detection performance on a task of Multiple Sclerosis lesion segmentation in T2-weighted FLAIR MRI. To evaluate the generalization capability of this technique, we propose to evaluate DUM in the context of automatic subcortical structure segmentation. We focus our results on the segmentation of the hippocampus and thalamus structures from T1-weighted MR brain scans of healthy subjects.
Joint work with: Michel Dojat from Grenoble Institute of Neurosciences, Carole Lartizien, Nicolas Pinon, Robin Trombetta from Creatis.
Neural network-based anomaly detection remains challenging in clinical applications with little or no supervised information and subtle anomalies such as hardly visible brain lesions. Among unsupervised methods, patch-based auto-encoders with their efficient representation power provided by their latent space, have shown good results for visible lesion detection. However, the commonly used reconstruction error criterion may limit their performance when facing less obvious lesions. In this work, we design two alternative detection criteria. They are derived from multivariate analysis and can more directly capture information from latent space representations.
Their performance compares favorably with two additional supervised learning methods, on a difficult de novo Parkinson Disease (PD) classification task.
Joint work with: Grégoire Vincent, IRD, AMAP, Montpellier, France.
Covering just 7% of the Earth's land surface, tropical forests play a disproportionate role in the biosphere: they store about 25% of the terrestrial carbon and contribute to over a third of the global terrestrial productivity. They also recycle about a third of the precipitation through evapotranspiration and thus contribute to generating and maintaining a humid climate regionally, with positive effects extending well beyond the tropics. However, the seasonal variability in fluxes between tropical rainforests and the atmosphere is still poorly understood. Better understanding the processes underlying flux seasonality in tropical forests is thus critical to improve our predictive ability on global biogeochemical cycles. Leaf area, one key variable controlling water efflux and carbon influx, is poorly characterized. To monitor the evolution of biomass, leaf area density (LAD) or gas exchange, aerial and terrestrial laser scanning (LiDAR) measurements have been frequently used.
The principle is, for different LiDAR shots assumed to be independent, to measure the portions of beam lengths between successive hits. Possible censoring comes from beams not being intercepted within a given voxel. Current approaches aim at connecting LAD to the distribution of beam lengths through some statistical model. Such a simplified model does not currently take into account several effects that may bias LAD estimators or lessen their accuracy: heterogeneity and dependencies in the vegetation properties of different voxels, the nature of the hit material (wood vs. leaves), unknown leaf angles, and under-detection of vegetal elements due to gradual loss of laser power (inducing censoring in the data sets).
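A minimal sketch of the kind of simplified statistical model referred to above: assuming independent beams and homogeneous vegetation within a voxel, free path lengths before interception are exponential with a rate proportional to LAD, beams leaving the voxel without a hit are right-censored, and the rate then has a closed-form maximum likelihood estimator. The numbers and the homogeneity assumption are illustrative, and the biases listed above are precisely what this naive estimator ignores.

```python
import numpy as np

rng = np.random.default_rng(8)
true_rate = 0.8                       # attenuation rate, proportional to LAD
voxel_depth = 2.0                     # maximum path length inside the voxel (m)

# Simulate beam free paths; beams leaving the voxel without a hit are censored
paths = rng.exponential(1 / true_rate, size=2000)
hit = paths <= voxel_depth
observed = np.minimum(paths, voxel_depth)

# Censored-exponential MLE: number of hits divided by total traversed beam length
rate_hat = hit.sum() / observed.sum()
print("estimated attenuation rate:", round(rate_hat, 3))
```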
This collaboration, supported by Y. Bai's PhD work, aims at developing machine learning methods to address these issues. Semantic segmentation of hits into wood vs. leaves was addressed by neural networks, and extensions to PointNet++ were developed 38 to cope with local data sparsity and severe class imbalance. Current work focuses on assessing the robustness of estimators to deviations from different assumptions using simulated data sets.
The field of Tiny Machine Learning (TinyML) has gained significant attention due to its potential to enable intelligent applications on resource-constrained devices. The review 73 provides an in-depth analysis of the advancements in efficient neural networks and the deployment of deep learning models on ultra-low power microcontrollers (MCUs) for TinyML applications. It begins by introducing neural networks and discussing their architectures and resource requirements. It then explores MEMS-based applications on ultra-low power MCUs, highlighting their potential for enabling TinyML on resource-constrained devices. The core of the review centres on efficient neural networks for TinyML. It covers techniques such as model compression, quantization, and low-rank factorization, which optimize neural network architectures for minimal resource utilization on MCUs. The paper then delves into the deployment of deep learning models on ultra-low power MCUs, addressing challenges such as limited computational capabilities and memory resources. Techniques like model pruning, hardware acceleration, and algorithm-architecture co-design are discussed as strategies to enable efficient deployment. Lastly, the review provides an overview of current limitations in the field, including the trade-off between model complexity and resource constraints. Overall, this review paper presents a comprehensive analysis of efficient neural networks and deployment strategies for TinyML on ultra-low-power MCUs. It identifies future research directions for unlocking the full potential of TinyML applications on resource-constrained devices.
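Among the compression techniques surveyed, uniform post-training quantization is easy to illustrate: map float32 weights to int8 with a single scale factor and dequantize to measure the induced error. The symmetric per-tensor scheme below is only one common choice among several.

```python
import numpy as np

rng = np.random.default_rng(9)
w = rng.normal(0, 0.2, size=(64, 64)).astype(np.float32)   # toy layer weights

# Symmetric per-tensor int8 quantization: one scale, zero-point fixed at 0
scale = np.abs(w).max() / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale

print("max abs quantization error:", float(np.abs(w - w_dequant).max()))
print("memory ratio (int8 vs float32):", w_int8.nbytes / w.nbytes)
```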
Joint work with: Michel Dojat from GIN, Univ. Grenoble Alpes
In a recently accepted publication 18, we worked on the notion of graph comparison. Explaining node roles in complex networks is difficult, yet crucial in application domains such as social science, neuroscience or computer science. Many efforts have been made to quantify hubs, i.e. to reveal particular nodes in a network using a given structural property. Yet, in several applications, when multiple instances of networks are available and several structural properties appear to be relevant, the identification of node roles remains largely unexplored. Inspired by the automorphic equivalence relation between nodes, we define an equivalence relation on a graph's nodes associated with any collection of nodal statistics (i.e. any functions on the node set). This allows us to define new global graph measures, the power coefficient and the orthogonality score, to evaluate the parsimony and heterogeneity of a given collection of nodal statistics. In addition, we introduce a new method based on structural patterns to compare graphs that share the same vertex set. This method assigns a value to each node to determine its role distinctiveness in a graph family. Extensive numerical experiments are conducted on both generative graph models and real data concerning human brain functional connectivity. The differences in nodal statistics are shown to depend on the underlying graph structure. Comparisons between generative models and real networks, combining two different nodal statistics, reveal the complexity of human brain functional connectivity, with differences at both global and nodal levels. Using a group of 200 healthy controls' connectivity networks, our method is able to compute high correspondence scores across the whole population, to detect homotopy, and finally to quantify differences between comatose patients and healthy controls.
Joint work with: Jonas Richiardi from CHUV, Lausanne, Pierre Lafaye de Micheaux from Université Montpellier, Jean-Francois Coeurjolly from LJK Univ. Grenoble Alpes.
Functional magnetic resonance imaging (fMRI) functional connectivity between brain regions is often computed using parcellations defined by functional or structural atlases. Typically, some kind of voxel averaging is performed to obtain a single temporal correlation estimate per region pair. However, several estimators can be defined for this task, with various assumptions and degrees of robustness to local noise, global noise, and region size. In this paper 11, we systematically present and study the properties of 9 different functional connectivity estimators taking into account the spatial structure of fMRI data, based on a simple fMRI data spatial model. These include 3 existing estimators and 6 novel estimators. We demonstrate the empirical properties of the estimators using synthetic, animal, and human data, in terms of graph structure, repeatability and reproducibility, discriminability, dependence on region size, as well as local and global noise robustness.
Joint work with: Alex Petersen, Brigham Young University, US and Wendy Meiring, University Santa Barbara California, US
A novel non-parametric estimator of the correlation between regions, or groups of arbitrarily dependent variables, is proposed in the presence of noise. The challenge resides in the fact that both noise and low intra-regional correlation lead to inconsistent inter-regional correlation estimation using classical approaches. While some existing methods handle one of these issues or the other, none tackle both at the same time. To address this problem, we propose a trade-off between two approaches: correlating regional averages, which is not robust to low average intra-regional correlation, and averaging pairwise inter-regional correlations, which is not robust to high noise. To that end, we project the data onto a space where the Euclidean distance can be used as a proxy for the sample correlation. We then leverage hierarchical clustering to gather together highly correlated variables within each region prior to averaging. We prove our estimator is consistent for an appropriate cut-off height of the dendrogram. We also empirically show our approach surpasses popular estimators in terms of quality and provide illustrations on real-world datasets that further demonstrate its usefulness. 22
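A rough sketch of the underlying idea, not the estimator of the paper: within each region, cluster variables with a correlation-based distance, average the most correlated block, and then correlate the two regional summaries. The linkage, cut height and synthetic two-region data are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def regional_signal(X, cut=0.7):
    """Average the largest block of mutually correlated variables in one region."""
    corr = np.corrcoef(X, rowvar=False)
    dist = squareform(1.0 - corr, checks=False)   # correlation-based distance
    labels = fcluster(linkage(dist, method="average"), t=cut, criterion="distance")
    best = np.bincount(labels).argmax()           # keep the biggest cluster
    return X[:, labels == best].mean(axis=1)

rng = np.random.default_rng(10)
signal_a = rng.standard_normal(300)
signal_b = 0.8 * signal_a + 0.6 * rng.standard_normal(300)   # true correlation ~0.8

def make_region(signal):
    """Five noisy copies of the regional signal plus five pure-noise variables."""
    good = [signal + 0.5 * rng.standard_normal(300) for _ in range(5)]
    noise = [rng.standard_normal(300) for _ in range(5)]
    return np.column_stack(good + noise)

r = np.corrcoef(regional_signal(make_region(signal_a)),
                regional_signal(make_region(signal_b)))[0, 1]
print("estimated inter-regional correlation:", round(r, 2))
```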
Joint work with: Irène Gannaz, Univ. Grenoble Alpes
In the general setting of long-memory multivariate time series, the long-memory characteristics are defined by two components: the long-memory parameters describe the autocorrelation of each time series, while the long-run covariance measures the coupling between time series, with general phase parameters. It is of interest to estimate the long-memory, long-run covariance and general phase parameters of time series generated by this wide class of models, although they are not necessarily Gaussian nor stationary. This estimation is thus not directly possible using real wavelet decompositions or Fourier analysis. Our purpose in the paper 12 is to define an inference approach based on a representation using quasi-analytic wavelets. We first show that the covariance of the wavelet coefficients provides an adequate estimator of the covariance structure, including the phase term. Consistent estimators based on a local Whittle approximation are then proposed. Simulations highlight a satisfactory behavior of the estimation on finite samples, on linear time series and on multivariate fractional Brownian motions. An application to a real neuroscience dataset is presented, where long-memory and brain connectivity are inferred.
The main objective of 68 is to find a control of the modulus of continuity of the standard Brownian motion in the spirit of what appears in (Kurtz, 1978). By letting the modulus depend on the time horizon, we are able to get a control that is uniform in time, in the sense that it is valid for the whole trajectory from 0 to infinity. Moreover, a stability inequality for diffusion processes is then derived and applied to two simple frameworks.
Joint work with: Guillaume Becq from GIPSA-lab, Univ. Grenoble Alpes, Emmanuel Barbier from GIN, University Grenoble Alpes, Joanes Grandjean and collaborators from Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Centre, 6525 AJ, Nijmegen, the Netherlands.
Task-free functional connectivity in animal models provides an experimental framework to examine connectivity phenomena under controlled conditions and allows for comparisons with data modalities collected under invasive or terminal procedures. Currently, animal acquisitions are performed with varying protocols and analyses that hamper result comparison and integration. Here we introduce StandardRat, a consensus rat functional magnetic resonance imaging acquisition protocol tested across 20 centers. To develop this protocol with optimized acquisition and processing parameters, we initially aggregated 65 functional imaging datasets acquired from rats across 46 centers. We developed a reproducible pipeline for analyzing rat data acquired with diverse protocols and determined experimental and processing parameters associated with the robust detection of functional connectivity across centers. We show that the standardized protocol enhances biologically plausible functional connectivity patterns relative to previous acquisitions. The protocol and processing pipeline described here is openly shared with the neuroimaging community to promote interoperability and cooperation toward tackling the most important challenges in neuroscience 20.
Joint work with: Hien Nguyen, University of Queensland, Brisbane Australia, Gersende Fort, IMT Toulouse.
To extend the applicability of Majorization-Minimization (MM) algorithms in a stochastic optimization context, we propose to combine MM with Sample Average Approximation (SAA). In doing so, we avoid the tuning of step sizes required by stochastic approximation approaches, while augmenting SAA with the possibility of considering smaller samples of increasing sizes. In addition, SAA does not require assuming uniqueness of the solution or quasi-convexity of the majorizers.
Joint work with: Hien Nguyen, University of Queensland, Brisbane Australia.
We extend Bayesian Synthetic Likelihood (BSL) methods to non-Gaussian approximations of the likelihood function. In this setting, we introduce Mixture of Experts (MoEs), a class of neural network models, as surrogate likelihoods that exhibit desirable approximation theoretic properties. Moreover, MoEs can be estimated using Expectation–Maximization algorithm-based approaches, such as the Gaussian Locally Linear Mapping model estimators that we implement. Further, we provide theoretical evidence towards the ability of our procedure to estimate and approximate a wide range of likelihood functions. Through simulations, we demonstrate the superiority of our approach over existing BSL variants in terms of both posterior approximation accuracy and computational efficiency.
Joint work with: Carole Lartizien, Creatis Lyon.
Anomaly detection in medical imaging is a challenging task in contexts where abnormalities are not annotated. This problem can be addressed through unsupervised anomaly detection (UAD) methods, which identify features that do not match with a reference model of normal profiles. Artificial neural networks have been extensively used for UAD but they do not generally achieve an optimal trade-off between accuracy and computational demand. As an alternative, we investigate mixtures of probability distributions whose versatility has been widely recognized for a variety of data and tasks, while not requiring excessive design effort or tuning. Their expressivity makes them good candidates to account for complex multivariate reference models. Their much smaller number of parameters makes them more amenable to interpretation and efficient learning. However, standard estimation procedures, such as the Expectation-Maximization algorithm, do not scale well to large data volumes as they require high memory usage. To address this issue, we propose to incrementally compute inferential quantities. This online approach is illustrated on the challenging detection of subtle abnormalities in MR brain scans for the follow-up of newly diagnosed Parkinsonian patients. The identified structural abnormalities are consistent with the disease progression, as accounted by the Hoehn and Yahr scale.
Joint work with: Wilfried Thuiller, LECA - Laboratoire d'Ecologie Alpine.
We investigate the modelling of species distributions over space and time, which is one of the major research topics in both ecology and conservation biology. Joint Species Distribution models (JSDMs) have recently been introduced as a tool to better model community data, by inferring a residual covariance matrix between species, after accounting for species' response to the environment. However, these models are computationally demanding, even when latent factors, a common tool for dimension reduction, are used. To address this issue, previous research proposed to use a Dirichlet process, a Bayesian nonparametric prior, to further reduce model dimension by clustering species in the residual covariance matrix. Here, we built on this approach to include prior knowledge on the potential number of clusters, and instead used a Pitman-Yor process to address some critical limitations of the Dirichlet process. We therefore propose a framework that includes prior knowledge in the residual covariance matrix, providing a tool to analyze clusters of species that share the same residual associations with respect to other species. We applied our methodology to a case study of plant communities in a protected area of the French Alps (the Bauges Regional Park), and demonstrated that our extensions improve dimension reduction and reveal additional information from the residual covariance matrix, notably showing how the estimated clusters are compatible with plant traits, endorsing their importance in shaping communities. A book chapter describing latent factor models as a tool for dimension reduction in joint species distribution models is also available.
Joint work with: Hien Nguyen, University of Queensland, Brisbane Australia.
We study the large sample behaviors of approximate Bayesian computation (ABC) posterior measures in situations when the data generating process is dependent on non-identifiable parameters. In particular, we establish the concentration of posterior measures on sets of arbitrarily small size that contain the equivalence set of the data generative parameter, when the sample size tends to infinity. Our theory also makes weak assumptions regarding the measurement of discrepancy between the data set and simulations, and in particular, does not require the use of summary statistics and is applicable to a broad class of kernelized ABC algorithms. We provide useful illustrations and demonstrations of our theory in practice, and offer a comprehensive assessment of the nature in which our findings complement other results in the literature.
Joint work with: Guillaume Kon Kam King (INRAE).
Bayesian nonparametric mixture models are common for modeling complex data. While these models are well-suited for density estimation, their application for clustering has some limitations. Recent results proved posterior inconsistency of the number of clusters when the true number of clusters is finite for the Dirichlet process and Pitman–Yor process mixture models. In 62, we extend these results to additional Bayesian nonparametric priors such as Gibbs-type processes and finite-dimensional representations thereof. The latter include the Dirichlet multinomial process, the recently proposed Pitman–Yor, and normalized generalized gamma multinomial processes. We show that mixture models based on these processes are also inconsistent in the number of clusters and discuss possible solutions. Notably, we show that a post-processing algorithm introduced for the Dirichlet process can be extended to more general models and provides a consistent method to estimate the number of components.
Joint work with: A. Dutfoy (EDF R&D).
Diagnosing convergence of Markov chain Monte Carlo (MCMC) is crucial in Bayesian analysis. Among the most popular diagnostics, the potential scale reduction factor (commonly named R-hat) compares the between-chain and within-chain variances of several parallel chains to assess convergence.
Joint work with: Hong-Phuong Dang, Clement Elvira, Cédric Herzet, Zacharie Naulet, Mariia Vladimirova.
The study of feature propagation at initialization in neural networks lies at the root of numerous initialization
designs. An assumption very commonly made in the field states that the pre-activations are Gaussian.
Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks.
The major contribution of this work is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks.
In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian
pre-activations.
Additionally, we provide a critical review of the claims of the Edge of Chaos line of work and build an exact Edge of Chaos analysis.
We also propose a unified view on pre-activations propagation, encompassing the framework of several well-known
initialization procedures.
Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of
a neural network whose pre-activations are ensured to be Gaussian?
We also investigate the cold posterior effect through the lens of PAC-Bayes generalization bounds. We argue that in the non-asymptotic setting, when the number of training samples is (relatively) small, discussions of the cold posterior effect should take into account that approximate Bayesian inference does not readily provide guarantees of performance on out-of-sample data. Instead, out-of-sample error is better described through a generalization bound. In this context, we explore the connections between the ELBO objective from variational inference and PAC-Bayes objectives. We note that, while the ELBO and PAC-Bayes objectives are similar, the latter naturally contain a temperature parameter, which is not restricted to be equal to one.
15 summarizes some recent works and associated challenges in the field of Bayesian statistics that were presented during the Journées MAS 2020. The goal of the session was to give an overview of the many aspects of Bayesian statistics investigated by young researchers of the community.
Joint work with: Pierre Alliez, Inria Titane and Christophe Heinkele, Cerema Strasbourg.
We propose a new procedure for Bayesian experimental design that performs sequential design optimization by simultaneously providing accurate estimates of successive posterior distributions for parameter inference. The sequential design process is carried out via a contrastive estimation principle, using stochastic optimization and Sequential Monte Carlo (SMC) samplers to maximise the Expected Information Gain (EIG). As larger information gains are obtained for larger distances between successive posterior distributions, this EIG objective may worsen classical SMC performance. To handle this issue, tempering is proposed to achieve both a large information gain and an accurate SMC sampling, which we show is crucial for performance. This novel combination of stochastic optimization and tempered SMC makes it possible to jointly handle design optimization and parameter inference. We provide a proof that the obtained optimal design estimators benefit from a consistency property. Numerical experiments confirm the potential of the approach, which outperforms other recent existing procedures.
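For orientation, the expected information gain being maximised can be estimated, for a toy one-parameter linear-Gaussian design problem, with the standard naive nested Monte Carlo scheme sketched below; this is a baseline illustration, not the contrastive SMC procedure of the paper.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def eig(design, n_outer=500, n_inner=2000, noise=1.0):
    """Naive nested Monte Carlo estimate of the expected information gain."""
    theta = rng.standard_normal(n_outer)                  # prior draws (outer loop)
    y = design * theta + noise * rng.standard_normal(n_outer)
    log_lik = norm.logpdf(y, loc=design * theta, scale=noise)
    theta_in = rng.standard_normal(n_inner)               # fresh prior draws (inner loop)
    marg = norm.pdf(y[:, None], loc=design * theta_in[None, :], scale=noise).mean(axis=1)
    return float(np.mean(log_lik - np.log(marg)))

# Exact EIG for this linear-Gaussian toy model is 0.5 * log(1 + design**2 / noise**2)
for d in [0.5, 1.0, 2.0]:
    print(f"design={d}: nested MC EIG ~ {eig(d):.2f}, exact {0.5 * np.log(1 + d**2):.2f}")
```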
Joint work with: Hien Duy Nguyen, University of Queensland, Brisbane, Australia.
A wide class of problems can be formulated as inverse problems, where the goal is to find parameter values that best explain some observed measures. Typical constraints in practice are that the relationships between parameters and observations are highly nonlinear, with high-dimensional observations and multi-dimensional correlated parameters. To handle these constraints, we consider probabilistic mixtures of locally linear models, which can be seen as particular instances of mixtures of experts (MoE). We have shown in previous studies that such models have good approximation abilities provided the number of experts is large enough. The present contribution proposes a general scheme to design a tractable Bayesian nonparametric (BNP) MoE model, so as to avoid any commitment to an arbitrary number of experts. A tractable estimation algorithm is designed using a variational approximation, and theoretical properties are derived on the predictive distribution and the number of components. Illustrations on simulated and real data show good results in terms of selection and computing time compared to more traditional model selection procedures.
Joint work with: Thomas Moreau and Alexandre Gramfort from Inria Saclay and Gilles Louppe from Université de Liège.
Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method. A particularly challenging setting is when the model is strongly indeterminate, i.e. when distinct sets of parameters yield identical observations. This arises in many practical situations, such as when inferring the distance and power of a radio source (is the source close and weak or far and strong?) or when estimating the amplifier gain and underlying brain activity of an electrophysiological experiment. In a recent work, we have proposed the hierarchical neural posterior estimation (HNPE), a novel method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters. This method extends recent developments in simulation-based inference (SBI) based on normalizing flows to Bayesian hierarchical models. We validated HNPE quantitatively on a motivating example amenable to analytical solutions and then applied it to invert a well known non-linear model from computational neuroscience, using both simulated and real EEG data.
Joint work with: Nicholas Tolley and Stephanie Jones from Brown University, Alexandre Gramfort from Inria Saclay
The Human Neocortical Neurosolver (HNN) is a framework whose foundation is a cortical column model with cell and circuit level detail designed to connect macroscale signals to meso/microcircuit level phenomena. We apply this model to study the cellular and circuit mechanisms of beta generation using local field potential (LFP) recordings from the non-human primate (NHP) motor cortex. To characterize beta producing mechanisms, we employ simulation based inference (SBI) in the HNN modeling tool. This framework leverages machine learning techniques and neural density estimators to characterize the relationship between a large space of model parameters and simulation output. In this setting, Bayesian inference can be applied to models with intractable likelihood functions (Gonçalves 2020, Papamakarios 2021). The main goal of this project is to provide a set of guidelines for scientists that wish to apply simulation-based inference to their neuroscience studies with a large-scale simulator such as HNN. This involves developing new methods for extracting summary features, checking the quality of the posterior approximation, etc. This work is mostly carried out by the Ph.D. student Nicholas Tolley from Brown University.
Joint work with: M. Allouche and E. Gobet (CMAP, Ecole Poytechnique).
In 14, we propose new parametrizations for neural networks in order to estimate extreme quantiles in both non-conditional and conditional heavy-tailed settings. All proposed neural network estimators feature a bias correction based on an extension of the usual second-order condition to an arbitrary order. The convergence rate of the uniform error between extreme log-quantiles and their neural network approximation is established. The finite sample performances of the non-conditional neural network estimator are compared to other bias-reduced extreme-value competitors on simulated data. It is shown that our method outperforms them in difficult heavy-tailed situations where other estimators almost all fail. Finally, the conditional neural network estimators are implemented to investigate the behaviour of extreme rainfalls as functions of their geographical location in the southern part of France.
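For context, the classical semi-parametric competitors alluded to above combine the Hill estimator of the tail index with Weissman's extrapolation formula; the sketch below applies this textbook estimator to simulated Pareto data, with the number of top order statistics k fixed arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(12)
gamma_true = 0.5                                   # tail index of a Pareto sample
x = rng.pareto(1 / gamma_true, size=2000) + 1.0    # P(X > t) = t**(-1/gamma) for t >= 1

def weissman_quantile(sample, level, k=200):
    """Hill tail-index estimate plus Weissman extrapolation to an extreme level."""
    xs = np.sort(sample)[::-1]                     # descending order statistics
    n = len(xs)
    gamma_hat = np.mean(np.log(xs[:k]) - np.log(xs[k]))   # Hill estimator
    # Extrapolate from the intermediate quantile of order 1 - k/n
    return xs[k] * (k / (n * (1 - level))) ** gamma_hat, gamma_hat

q_hat, g_hat = weissman_quantile(x, level=0.999)
q_true = (1 - 0.999) ** -gamma_true               # true extreme quantile of the Pareto
print(f"gamma_hat={g_hat:.2f}, q_hat={q_hat:.1f}, q_true={q_true:.1f}")
```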
Joint work with: M. Allouche (CMAP, Ecole Polytechnique).
One of the most popular risk measures is the Value-at-Risk (VaR) introduced in the 1990's. In statistical terms, the VaR at level α in (0, 1) corresponds to the quantile of order α of the distribution of the quantity of interest.
Joint work with: H. Nguyen (University of Queensland, Brisbane, Australia), T. Opitz (INRAe Avignon) and A. Usseglio-Carleve (Univ. Avignon).
Expectiles form a family of risk measures that have recently gained interest over the more common value-at-risk or return levels, primarily due to their capability to be determined by the probabilities of tail values and magnitudes of realisations at once. However, a prevalent and ongoing challenge of expectile inference is the problem of uncertainty quantification, which is especially critical in sensitive applications, such as in medical, environmental or engineering tasks. In 16, we address this issue by developing a novel distribution, termed the multivariate expectile-based distribution (MED), that possesses an expectile as a closed-form parameter. Desirable properties of the distribution, such as log-concavity, make it an excellent fitting distribution in multivariate applications. Maximum likelihood estimation and Bayesian inference algorithms are described. Simulated examples and applications to expectile and mode estimation illustrate the usefulness of the MED for uncertainty quantification.
Analysis of variance (ANOVA) is commonly employed to assess differences in the means of independent samples. However, it is unsuitable for evaluating differences in tail behaviour, especially when means do not exist or empirical estimation of moments is inconsistent due to heavy-tailed distributions. Here, we propose an ANOVA-like decomposition to analyse tail variability, allowing for flexible representation of heavy tails through a set of user-defined extreme quantiles, possibly located outside the range of observations. Building on the assumption of regular variation, we introduce a test for significant tail differences among multiple independent samples and derive its asymptotic distribution. We investigate the theoretical behaviour of the test statistics for the case of two samples, each following a Pareto distribution, and explore strategies for setting hyperparameters in the test procedure. To demonstrate the finite-sample performance, we conduct simulations that highlight generally reliable test behaviour for a wide range of situations. The test is applied to identify clusters of financial stock indices with similar extreme log-returns and to detect temporal changes in daily precipitation extremes at rain gauges in Germany. The results are submitted for publication 72.
Joint work with: G. Enjolras (CERAG).
In the context of the PhD thesis of Meryem Bousebata, we proposed a new approach, called Extreme-PLS (EPLS), for dimension reduction in regression and adapted to distribution tails. The objective is to find linear combinations of predictors that best explain the extreme values of the response variable in a non-linear inverse regression model. The asymptotic normality of the EPLS estimator is established in the single-index framework and under mild assumptions. The performance of the method is assessed on simulated data. A statistical analysis of French farm income data, considering extreme cereal yields, is provided as an illustration 17.
Further, a novel interpretation of EPLS directions as maximum likelihood estimators is introduced in 66, utilizing the von Mises-Fisher distribution applied to hyperballs. The dimension reduction process is enhanced through the Bayesian paradigm, enabling the incorporation of prior information into the projection direction estimation. The maximum a posteriori estimator is derived in two specific cases, elucidating it as a regularization or shrinkage of the EPLS estimator. We also establish its asymptotic behavior as the sample size approaches infinity. A simulation study is conducted to assess the practical utility of the proposed method, clearly demonstrating its effectiveness even with moderate amounts of data in high-dimensional settings. Furthermore, we provide an illustrative example of the method's applicability using French farm income data, highlighting its efficacy in real-world scenarios. The results are submitted for publication.
Joint work with: A. Dutfoy (EDF R&D).
Combining extreme value theory with Bayesian methods offers several advantages, such as a quantification of uncertainty on parameter estimation or the ability to study irregular models that cannot be handled by frequentist statistics. However, it comes with many options that are left to the user concerning model building, computational algorithms, and even inference itself. Among them, the parameterization of the model induces a geometry that can alter the efficiency of computational algorithms, in addition to making calculations involved. In 25, we focus on the Poisson process characterization of extremes and outline two key benefits of an orthogonal parameterization addressing both issues. First, several diagnostics show that Markov chain Monte Carlo convergence is improved compared with the original parameterization. Second, orthogonalization also helps deriving Jeffreys and penalized complexity priors, and establishing posterior propriety. The analysis is supported by simulations, and our framework is then applied to extreme level estimation on river flow data.
MSTGA and AIGM INRAE (French National Institute for Agricultural Research) networks: F. Forbes and J.B. Durand have been members of the INRAE network AIGM (formerly MSTGA), on Algorithmic Issues for Inference in Graphical Models, since 2006. It is funded by INRAE MIA and RNSC/ISC Paris. This network gathers researchers from different disciplines. Statify co-organized and hosted two of the network meetings, in 2008 and 2015, in Grenoble.