Mathematical optimization is the key to solving many problems in science, based on the observation that physical systems obey a general principle of least action. While some problems can be solved analytically, many more can only be solved via numerical algorithms. Research in this domain has been steadily ongoing for decades.
In addition, many fields such as medicine continue to benefit from considerable improvements in data acquisition technology, based on sophisticated tools from optics and physics (e.g., new laser sources in microscopy, multi-coil systems in MRI, novel X-ray schemes in mammography, etc.). This evolution is expected to yield significant improvements in data resolution, making the interpretation and analysis of the results easier and more accurate for the practitioner. The large amounts of generated data must be analyzed with sophisticated optimization tools; as a result, in recent years, optimization has become a main driving force behind significant advances in data processing. Previously hidden or hard-to-extract information can be pried from massive datasets by modern recovery and data mining methods. At the same time, automated decision making and computer-aided diagnosis are made possible through optimal learning approaches.
However, major bottlenecks still exist. Recent advances in instrumentation techniques come with the need to minimize functions involving an increasingly large number of variables (at least one billion variables in 3D digital tomography), and with increasingly complex mathematical structure. The computational load for solving these problems may be too high for even state-of-the-art algorithms. New algorithms must be designed with computational scalability, robustness, and versatility in mind. In particular, the following severe requirements must be fulfilled: (i) ability to tackle high-dimensional problems in a reasonable computation time; (ii) low requirements in terms of memory usage; (iii) robustness to incomplete or unreliable information; (iv) adaptivity to statistically varying environments; (v) resilience to latency issues arising in architectures involving multiple computing units.
These difficulties are compounded in the medical and biomedical areas. In these contexts, datasets are not easily available due to patient confidentiality and/or instrument limitations. Moreover, high-level expertise is necessary to interpret the data which can be of very high dimension. Finally, the developed analysis methods must be reliable and interpretable by the medical/biomedical community.
The objective of the OPIS project is to design advanced optimization methods for the analysis and processing of large and complex data. Applications to inverse problems and machine learning tasks in biomedical imaging are major outcomes of this research project. We seek optimization methods able to tackle data with both a large sample-size (“big
More specifically, three main research avenues are explored, namely:
In summary, the specificity of OPIS is to address problems involving high-dimensional biomedical data, e.g. 3D CT, PET, ultrasound images, and MRI, by making use of advanced computational optimization methods.
Variational problems requiring the estimation of a huge number of variables have now to be tackled, especially in the field of 3D reconstruction/restoration (e.g.
Graphs and hypergraphs are rich data structures for capturing complex, possibly irregular, dependencies in multidimensional data. Coupled with Markov models, they constitute the backbone of many techniques used in computer vision. Optimization is omnipresent in graph processing. First, it allows the structure of the underlying graph to be inferred from the observed data when the former is hidden. Second, it makes it possible to develop graphical models based on the prior definition of a meaningful cost function. This leads to powerful nonlinear estimates of variables corresponding to unknown weights on the vertices and/or the edges of the graph. Tasks such as partitioning the graph into subgraphs corresponding to different clusters (e.g., communities in social networks) or graph matching can be performed effectively within this framework. Finally, graphs themselves offer flexible structures for formulating and solving optimization problems in an efficient distributed manner. On all these topics, our group has acquired long-term expertise that we plan to strengthen further. In terms of applications, novel graph mining methods are proposed for gene regulatory and brain network analysis. For example, we plan to develop sophisticated methods for better understanding the gene regulatory networks of various microscopic fungi, in order to improve the efficiency of biofuel production (collaboration with IFP Energies Nouvelles).
Nowadays, deep learning techniques efficiently solve supervised classification or regression tasks by exploiting large amounts of labeled data and the powerful high-level features they learn from the input data. Their good performance has caught the attention of the optimization community, since these methods currently offer virtually no guarantee of convergence, stability, or generalization. Deep neural networks are optimized through a computationally intensive engineering process via methods based on stochastic gradient descent. These methods are slow and may not lead to relevant local minima. More effort must therefore be devoted to improving the training of deep neural networks by proposing better optimization algorithms applicable to large-scale datasets. Beyond optimization, incorporating some structure into deep neural networks permits more advanced regularization than current methods. This should reduce their complexity and allow us to derive generalization bounds. For example, many signal processing models (e.g., those based on multiscale decompositions) exhibit strong correspondences with deep learning architectures, yet they do not require as many parameters. One can thus think of introducing some supervision into these models in order to improve their performance on standard benchmarks. A better mathematical understanding of these methods makes it possible to improve them, but also to propose new models and representations for high-dimensional data. This is particularly interesting in settings such as the diagnosis or prevention of diseases from medical images, which are critical applications where the decisions made are crucial and need to be interpretable. One of the main applications of this work is to propose robust models for the prediction of the outcome of cancer immunotherapy treatments from multiple and complementary sources of information: images, gene expression data, patient profile, etc. (collaboration with Institut Gustave Roussy).
Participants:
One of the main challenges faced today by companies like Thales or Schneider Electric designing advanced industrial systems is to ensure the safety of new generations of products based on the use of neural networks. Since 2013, neural networks have been shown to be sensitive to adversarial perturbations. Deep neural networks can thus be fooled, either intentionally (security issue) or unintentionally (safety issue), which raises a major robustness concern for safety-critical systems that need to be certified by an independent certification authority prior to any entry into production/operation. Techniques based on mathematical proofs of robustness are generally preferred by industrial safety experts since they enable a safe-by-design approach that is more efficient than a robustness verification activity done a posteriori with a necessarily bounded effort. Among the possible mathematical approaches, we focus on those relying on the analysis of the Lipschitz properties of neural networks 26. Such properties play a fundamental role in the understanding of the internal mechanisms governing these complex nonlinear systems. Besides, they make few assumptions on the type of nonlinearities used and are thus valid for a wide range of networks.
Participants:
The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy. However, like classification models, segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical-decision systems like healthcare or autonomous driving. Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees. However, this method exhibits a trade-off between the amount of added noise and the level of certification achieved. In this topic, we address the problem of certifying segmentation predictions using a combination of randomized smoothing and diffusion models. We evaluate our methods on both general computer vision and medical imaging datasets.
The response of patients with cancer to immunotherapy can vary considerably; innovative predictors of response to treatment are therefore needed to improve treatment outcomes. We aimed to develop and independently validate radiomics-based biomarkers of tumour-infiltrating cells in patients included in trials of the two most common, recent immunotherapy treatments: anti-programmed cell death protein (PD)-1 or anti-programmed cell death ligand 1 (PD-L1) monotherapy. We also aimed to evaluate the association between the biomarker, tumour immune phenotype, and the clinical outcomes of these patients.
However, sometimes, not only do patients respond poorly, but immunotherapy seems to make things worse. Some patients see their tumour burden increase significantly faster after immunotherapy is started. These patients are called “hyper-progressors”. One of our projects has been to clearly define and detect this class of patients. The very notion of a hyperprogressive patient was still controversial when our work was published, but is now accepted.
In this axis we investigate powerful representations for radiological and pathological data that could be associated with interesting and important clinical questions.
Participants:
The core focus of our research revolves around scrutinizing cancer through the utilization of digital slide images resulting from biopsies or surgical resection. Our exploration stands at the intersection of cutting-edge AI technology and precision medicine, more particularly the diagnosis and treatment of liver cancer (hepatocellular carcinoma and intrahepatic cholangiocarcinoma). The challenges to be solved are related to the limited amount of available annotated data and the large size of whole slide images (WSIs).
Participants:
In March 2020, the PRISM institute of Gustave-Roussy was launched. The aim of this project, funded for 5 years, is to develop targeted treatments that are more likely to work on specific patients.
The mission of this “second-generation” precision medicine centre will be to model cancer on an individual scale by creating numerical avatars of tumours. The aim is to identify patients with the most aggressive cancers very early in the disease, without waiting for relapses, in order to offer them the most appropriate treatment from the start of treatment, using the huge volume of clinical, biological and molecular data and their analysis by artificial intelligence. PRISM will conduct large-scale clinical studies and develop molecular analysis technologies and data analysis methods.
Coordinated by Professor Fabrice André, Research Director of Gustave Roussy, Inserm Research Director and Professor at Paris-Saclay University, PRISM aims to revolutionize the understanding of the molecular and biological mechanisms of cancer development and progression through artificial intelligence. Based on increasingly rich data of various types (clinical, genomic, microbiological, imaging, etc.), learning algorithms make it possible to develop finer diagnostic and prognostic tools, and thus to propose therapies that are personalised according to the characteristics of the individual.
Funded by the French National Research Agency, PRISM received the IHU label in 2018, followed by the National Center for Precision Medicine label.
Participants: Raoul Salle de Chou,
Coronary arteries feed the heart muscles with nutrients and oxygen. As such, they are some of the most critical blood vessels in the entire body. Coronary disease is difficult to diagnose, especially when it affects the smaller branches of these vessels, because direct imaging of these vessels is infeasible with current medical imaging technology. Instead, blood perfusion through the myocardium can be imaged and is correlated with both arterial and myocardium disease. However, perfusion imaging is challenging, invasive and expensive because it relies on radioactive tracers.
A previous model was developed for myocardial perfusion simulation for coronary artery disease in [link] to replace the actual exam with a numerical twin and conduct it via simulations. The model aims at reproducing
For this, a linear Darcy model is used to simulate blood flow through the porous medium. However, in addition to a high computational cost, the simulation fails to accurately reproduce some diseases, particularly those affecting medium-size coronary branches.
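As an illustration of the kind of porous-medium computation involved, the following minimal Python sketch solves a steady linear Darcy pressure problem on a toy 2D grid with a five-point stencil; the grid size, permeability and source term are illustrative placeholders and this is not the actual OPIS perfusion pipeline.

# Minimal sketch (not the OPIS pipeline): steady Darcy flow -div(K grad p) = f
# on a regular 2D grid, discretized with a five-point stencil and solved as a
# sparse linear system. K, f and the grid size are illustrative placeholders.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                       # grid points per side (illustrative)
h = 1.0 / (n - 1)            # grid spacing on the unit square
K = 1.0                      # homogeneous permeability (placeholder)

# 1D Laplacian with homogeneous Dirichlet boundary conditions
main = 2.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
L1d = sp.diags([off, main, off], [-1, 0, 1]) / h**2

# 2D operator via Kronecker sums: A = K * (L x I + I x L)
I = sp.identity(n)
A = K * (sp.kron(L1d, I) + sp.kron(I, L1d))

# Source term: a single injection point ("terminal vessel") at the domain center
f = np.zeros(n * n)
f[(n // 2) * n + n // 2] = 1.0 / h**2

p = spla.spsolve(A.tocsc(), f)           # pressure field, shape (n*n,)
pressure = p.reshape(n, n)
print("max pressure:", pressure.max())

In the actual simulation, the permeability field and the source terms are patient-specific and coupled to the segmented and synthetic vessel trees.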
The main goal of this project is to combine Machine Learning (ML) methods with physical simulations, in order to improve the current simulation pipeline. ML algorithms are used to learn from PET imaging exams while being guided by simulation hypotheses, thereby diminishing the dependency on patient data. To achieve this, each part of the simulation is to be replaced by an ML model. Following successful replication of simulation outcomes, the model will undergo refinement using patient data.
A finite volume physics informed graph neural network was developed to solve the Darcy equations on irregular shapes, serving as a substitute for the myocardium component in perfusion simulation. Preliminary results indicate superior performance of this model in terms of accuracy and generalization compared to classical ML approaches. In 54, we introduced a novel optimization framework for the generation of synthetic small vessels utilizing the constrained constructive optimization (CCO) method. Our new approach simulates 2D vascular trees similar to those of the original CCO method in terms of morphometry, while producing better optimal solutions at a lower computational cost. This new approach is expected to be more readily reproducible using ML methods compared to the original CCO technique.
On this topic, an open-source CCO implementation was published on IPOL (Image Processing OnLine) 22. This has the potential to disseminate this useful method for vascular tree generation to a wider audience.
Additionally, work has been conducted towards the determination of the myocardium perfusion regions. Determining these regions and their associated vessels is a crucial step in the current simulation pipeline. However, the current calculation method is inaccurate and highly sensitive to the resolution of segmented vessels. A more robust and accurate model, employing graph neural networks (GNNs), has been developed for the determination of these regions.
Participants:
Peak-signal retrieval is a core challenge in separative analytical chemistry (AC). For instance, in chromatography, spectrometry, or spectroscopy, peak localization, amplitude, width, or area provide useful quantitative chemical information. We investigated the problem of joint trend removal and blind deconvolution of sparse peak-like signals. The trend ranges from mere offsets to slowly varying amplitude shifts (seasonality, calibration distortion, sensor decline), making its automated removal challenging. We proposed the PENDANTSS method 30, which handles smooth trend removal by exploiting its low-pass property and reduces the problem to a blind deconvolution problem. The resulting tool is both convergent and efficient, relying on a novel trust-region block alternating variable metric forward-backward approach. Simulation results confirm that PENDANTSS outperforms comparable methods on typical sparse analytical signals. A collaboration with Dr. L. Duval, Research Engineer at IFP Energies Nouvelles, France, is ongoing in this applicative context.
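To give a flavour of the underlying problem, the sketch below separates a synthetic peak-like signal into a smooth trend and a sparse component: the trend is crudely estimated by low-pass (moving-average) filtering, and the detrended signal is deconvolved by plain ISTA with a known blur kernel. This is an illustrative toy only; the blind kernel estimation, the norm-ratio penalties and the trust-region scheme of PENDANTSS are not reproduced.

# Illustrative sketch only (not PENDANTSS): remove a smooth trend by low-pass
# filtering, then run ISTA (soft-thresholded gradient steps) to deconvolve a
# sparse peak signal with a *known* Gaussian blur. Signals are synthetic toys.
import numpy as np

rng = np.random.default_rng(0)
n = 512
t = np.arange(n)

x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.uniform(1.0, 3.0, 8)   # sparse peaks
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 2.0) ** 2)
kernel /= kernel.sum()
trend = 0.5 + 0.3 * np.sin(2 * np.pi * t / n)                        # smooth baseline
y = np.convolve(x_true, kernel, mode="same") + trend + 0.01 * rng.standard_normal(n)

# Step 1: crude trend estimate with a wide moving-average (low-pass) filter
w = 101
trend_hat = np.convolve(y, np.ones(w) / w, mode="same")
r = y - trend_hat

# Step 2: ISTA for min_x 0.5*||H x - r||^2 + lam*||x||_1, H = convolution by kernel
H = lambda x: np.convolve(x, kernel, mode="same")
Ht = H                                   # symmetric kernel, so H^T ~ H here
lam, step = 0.02, 1.0                    # step <= 1/||H||^2 since sum(kernel) = 1
x = np.zeros(n)
for _ in range(300):
    grad = Ht(H(x) - r)
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)          # soft threshold

print("recovered support size:", int((x > 0.1).sum()))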
Participants: Julien Adjenbaum,
Through an ongoing collaboration with physicists from the XLIM laboratory (CNRS, Limoges, France), we propose advanced mathematical and computational solutions for multiphoton microscopy (MPM) 3D image restoration. This modality enjoys many benefits, such as decreased phototoxicity and increased penetration depth. However, blur and noise issues can be more severe than with standard confocal images. Our objective is to drastically improve the quality of the generated images and their resolution by improving the characterization of the PSF of the system and compensating for its effect. We consider the application of the improved MPM imaging tool to the microscopic analysis of muscle ultrastructure and composition, with the aim of helping diagnose muscle disorders, including rare and orphan muscle pathologies 6, and of visualizing bacteria and viral structures 13.
Participants:
The objective of this collaboration with researchers from GE Healthcare is to develop high-quality reconstruction methodologies for computed tomography (CT) in interventional surgery. Discretizing and implementing tomographic forward and backward operations is a crucial step in the design of model-based iterative reconstruction algorithms in interventional CT, which we have investigated in the CIFRE PhD thesis of Marion Savanier. The mathematical constraint of symmetry on the projector/backprojector pair prevents linear interpolation, a standard method in analytical reconstruction, from being used. Consequently, these operators are often approximated numerically, so that the adjoint property is no longer fulfilled. In 8, we investigate the stability properties of fixed point algorithms when such an adjoint mismatch arises. In 27, we focus on reconstructing regions of interest from a limited number of CT measurements. We proposed to handle few-view truncated data thanks to a robust non-convex data fidelity term combined with a sparsity-inducing regularization function. We then apply the deep unfolding paradigm to unroll a proximal algorithm, embedded in an iterative reweighted scheme, allowing key parameters to be learned in a supervised manner with a reduced inference time.
Participants:
Positron emission tomography (PET) is a quantitative functional imaging modality used to track the fate and/or dynamics of a radiotracer previously injected into a patient. This technique is particularly used in oncology for diagnosis and therapeutic monitoring, in the study of neurodegenerative diseases, and in pharmacology. In dynamic PET, the temporal evolution of the spatial distribution of the radiotracer during the examination is taken into account for the estimation of physiological parameters allowing for a fine characterization of the molecular mechanisms at play (receptor concentration, absorption, dissociation constants, binding potential, etc.). In the PhD thesis of
Geometric Graph Neural Networks for molecular and chemical systems
Participants:
Graph Neural Networks (GNNs) currently constitute state-of-the-art models for solving prediction tasks on graphs. Through the flexible formulation of the message passing mechanism, GNNs can learn informative latent representations of graph entities at different resolution levels (e.g., node-, edge-, graph-level). In many practical applications in molecular and chemical systems, the nodes of the graph have associated geometric attributes (e.g., coordinates, velocities) related to their position in the 3D space. In this context, geometric graphs represent the interaction of atoms in the 3D space, encapsulating a range of physical symmetries such as rotations and translations. Existing GNN models often overlook this aspect, rendering them ill-suited for prediction tasks on geometric graphs. Recently, Geometric GNN architectures tailored to respect physical symmetries have emerged as flexible models of atomic systems. Through an ongoing collaboration with Mila - Quebec AI Institute, Université de Montréal, McGill University, and Intel Labs, we study geometric GNN models, focusing both on design principles as well as on practical applications in materials modeling (e.g., property prediction and molecule generation) 39.
Participants:
The discovery of novel gene regulatory processes improves the understanding of cell phenotypic responses to external stimuli for many biological applications, such as medicine, the environment, or biotechnologies. To this purpose, transcriptomic data are generated and analyzed from DNA microarrays or, more recently, RNA-seq experiments. They consist of gene expression levels obtained for all genes of a studied organism placed in different living conditions. From these data, gene regulation mechanisms can be recovered by revealing topological links encoded in graphs. In regulatory graphs, nodes correspond to genes, and a link between two nodes is identified if a regulation relationship exists between the two corresponding genes. In our work, we propose to address this network inference problem with recently developed techniques pertaining to graph optimization. Given all the pairwise gene regulation information available, we propose to determine the presence of edges in the considered GRN by adopting an energy optimization formulation integrating additional constraints. Either biological (information about gene interactions) or structural (information about node connectivity) priors are considered to restrict the space of possible solutions. Different priors lead to different properties of the global cost function, for which various optimization strategies, either discrete or continuous, can be applied.
Participants:
Numerous real-world prediction problems involve spatiotemporal data. For example, consider sensors scattered across diverse geographical regions measuring environmental conditions (e.g., temperature, pollution) or functional magnetic resonance imaging (fMRI) data capturing brain activity. Both scenarios generate data inherently rich in spatiotemporal structure, benefiting from the relational inductive bias of graph-based modeling. In an ongoing collaboration with the University of Delaware, Télécom Paris, and La Rochelle Université, we have introduced a methodology that leverages graph-based modeling, enabling time series imputation with GNNs 36. Major challenges here concern inducing temporal and relational smoothness assumptions into the model as well as inferring the (often unknown) graph structure. Furthermore, an intriguing aspect involves enhancing spatiotemporal graph models with causal properties to capture causal influence effects among entities.
Participants:
Through the Associate International Inria Team COMPASS, led by
The discovery of drug-target interactions is also explored by
Participants:
Studying the complex inter-dependencies in climate processes is a critical challenge. We use climate models and analytical data to gain insight through observational causal discovery 31. We revisit Granger causality under a graphical perspective of state-space models. We investigate expectation-maximisation algorithms 37, 68 for estimating matrix parameters in the state equation of a linear-Gaussian state-space model under sparse priors, emphasizing both causal and correlation relationships among time series samples. We evaluate our methods on climate-related problems, such as linking ENSO and the North Atlantic Oscillation.
Participants:
Diagnosis and staging of lung diseases is a major challenge for both patient care and approval of new treatments. Among imaging techniques, computed tomography (CT) is the gold standard for in vivo morphological assessment of lung anatomy, currently offering the highest spatial resolution for lung diseases. Although CT is widely used, its optimal use in clinical practice and as an endpoint in clinical trials remains controversial. Our goal in the PhD thesis of
Participants:
Small bowel obstruction (SBO) is a common nontraumatic surgical emergency. All guidelines recommend computed tomography (CT) as the first-line imaging technique for patients with suspected mechanical SBO, with a four-fold goal: (i) to confirm or refute the diagnosis of SBO and, when SBO is present, (ii) to locate the site of the obstruction, that is, the transition zone, (iii) to identify the cause, and (iv) to look for complications such as strangulation or perforation. Identifying SBO and differentiating its causes (e.g., open-loop and closed-loop mechanisms) is time-consuming and subject to inter-observer and intra-observer variability.
The aim of this collaborative project between Inria Saclay OPIS, Hôpital St Joseph, and LIB, Sorbonne University, is to investigate AI approaches for a guided SBO diagnosis from 3D CT scans.
Participant:
Cardio-vascular diseases continue to be the leading cause of mortality in the world. Understanding these diseases is a current, challenging and essential research endeavour. The leading cause of heart malfunction is stenosis in the coronary vessels, which causes ischemia. Current CT and MRI technology can assess coronary diseases but is typically invasive, requiring risky catheterization and the injection of nephrotoxic contrast agents. In collaboration with the REO team headed by Irène Vignon-Clementel, and Heartflow, a US-based company, we have in the past contributed to Heartflow's major product, which replaces these physical exams with image-based exams only, limiting the use of contrast agents and, in cases that do not require a stent insertion, eliminating catheterisation. Heartflow is currently the market leader in non-invasive coronary exams and the owner of most of the relevant IP in this domain.
Unfortunately, current imaging technology is unable to assess coronary disease along the full length of the coronary vessels. CT is limited to a resolution of about 1 mm, whereas coronary vessels can be much smaller, down to about 10 micrometers in diameter. To assess blood vessel health down to the smallest sizes, blood perfusion imaging throughout the heart muscle must be used instead. Perfusion imaging with PET or a gamma camera, the current gold standard, is an invasive technology requiring the use of radioactive tracers. To avoid these, a lower-quality estimate of perfusion can be obtained using some ToF or injected gated MRI modalities.
We have investigated patient-specific vessel generation models together with porous model simulations in order to propose a direct model of perfusion imaging, based on the known patient data, computational flow dynamics simulations, as well as experimental data consistent with known vessel and heart muscle physiology. The objective of this work is both to provide a useful, complex forward model of perfusion image generation, and to solve the inverse problem of locating and assessing coronary diseases given a perfusion exam, even though the affected vessels may be too small to be imaged directly.
Continuing our work from the period 2015-2019, this year we proposed a functional myocardial perfusion model consisting of the CT-derived segmented coronary vessels, a simulated vessel tree with several thousand terminal vessels filling the myocardium in a patient-specific way, consistent with physiology data and with physics-based and empirically observed vessel growth rules, and a porous medium. We produced and validated a CFD code capable of simulating blood flow in all three coupled compartments, which allows us to simulate perfusion realistically.
The research carried out in OPIS aims at developing advanced techniques in the domain of data science for precision medicine. One of the main features of this research is to ensure that the proposed methods are not only efficient, but also grounded on sound mathematical foundations inherited from the areas of optimization and fixed point algorithms. In the biomedical domain, it is indeed mandatory to guarantee the reliability and explainability of the proposed approaches for their use by medical doctors or producers of medical imaging devices.
OPIS participates in the design of innovative products developed by big companies working in the domain of medical imaging (GE Healthcare and Essilor) and several startups. Various application fields are targeted (breast cancer detection, surgical radiology, interventional surgery, coronary disease monitoring, vision correction, ...).
The methodological contributions of OPIS are far-reaching, with an impact going beyond the field of medical imaging. OPIS transfers its expertise in artificial intelligence, image processing, and optimization through collaborations with major industrial partners such as SNCF, Schneider Electric, IFPEN, and Thales. The transfer activity typically goes through CIFRE PhD contracts or more dedicated partnerships.
In addition, OPIS has active collaborations with several hospitals, particularly Institut Gustave Roussy and public hospitals from APHP in Paris. The purpose of these collaborations is to develop artificial intelligence tools aiding medical doctors in their practice. A large part of this research activity is oriented toward fighting against cancer using different kinds of data (CT scans, MRI, genomic data, histopathology images,...). OPIS was also involved in several projects for helping to better diagnose and cure COVID-19 infection.
Web site: Prox Repository
Web site: PINK
Web site: Vivabrain AngioTK toolkit
Web site: imview
Web site: TCGA segmentation
Web site: ScanCovIA
Web site: Graphical inference in linear-Gaussian state-space models
Web site: Matrix factorization for drug discovery
Web site: Joint registration tumor segmentation
Web site: PMCNet
Web site: U-HQ
Web site: FAENet
Web site: SJLR
Web site: GLIE
Participants:
In the context of large-scale, differentiable optimization, an important class of methods relies on the principle of majorization-minimization (MM). MM algorithms are becoming increasingly popular in signal/image processing and machine learning. MM approaches are fast, stable, require limited manual settings, and are often preferred by practitioners in application domains such as medical imaging and telecommunications. In the work 9, we give conditions under which the sequence generated by the resulting block majorize-minimize subspace algorithm converges to a critical point of the objective function in the non-convex setting. In 6, we investigate an asynchronous MM algorithm for solving large-scale differentiable non-convex optimization problems arising in microscopy. In the book chapter 64, we highlight how MM approaches are fundamental tools for solving optimization problems arising in source separation for physics and chemistry applications.
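As a reminder of the MM principle itself (not the block subspace or asynchronous algorithms cited above), the following minimal sketch minimizes a smoothed sparse least-squares cost by repeatedly minimizing a quadratic majorant of the penalty (a half-quadratic / reweighted least-squares surrogate); all data are toy placeholders.

# Minimal MM sketch: minimize F(x) = 0.5*||A x - y||^2 + lam * sum(sqrt(x_i^2 + eps))
# by exactly minimizing, at each iteration, a quadratic majorant of F built at
# the current iterate (classical half-quadratic / IRLS surrogate).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(60)
lam, eps = 0.1, 1e-3

x = np.zeros(100)
for it in range(50):
    # Curvature weights of the quadratic majorant of the smooth l1-like penalty at x
    w = lam / np.sqrt(x**2 + eps)
    # Exact minimizer of the majorant: (A^T A + diag(w)) x = A^T y
    x = np.linalg.solve(A.T @ A + np.diag(w), A.T @ y)

F = 0.5 * np.linalg.norm(A @ x - y) ** 2 + lam * np.sum(np.sqrt(x**2 + eps))
print("final cost:", round(F, 4))

Each iteration decreases the original cost by construction, which is the mechanism exploited and generalized in the works mentioned above.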
Participants:
In inverse problems such as X-ray computed tomography, the applicability of iterative algorithms is dominated by the cost of applying the forward linear operator and its adjoint at each iteration. In practice, the adjoint operator is thus often replaced by an alternative operator with the aim to reduce the overall computation burden and potentially improve conditioning issues. In 8, 70, we analyze the effect of such an adjoint mismatch on the convergence of a large set of primal-dual proximal algorithms. We derive conditions under which convergence of the algorithms to a fixed point is guaranteed. We also derive bounds on the error between this point and the solution to the original minimization problem. We illustrate our theoretical findings on image reconstruction tasks in computed tomography.
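The notion of adjoint mismatch can be illustrated numerically: an exact adjoint satisfies <Ax, y> = <x, A^T y>, whereas a surrogate backprojector generally does not. The toy sketch below uses random matrices as stand-ins for an actual projector/backprojector pair.

# Toy illustration of adjoint mismatch (random matrices stand in for the CT
# projector/backprojector pair): the exact adjoint satisfies <A x, y> = <x, A^T y>,
# a mismatched backprojector B != A^T does not.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 300))                  # stand-in forward projector
B = A.T + 0.05 * rng.standard_normal((300, 200))     # mismatched "backprojector"

x = rng.standard_normal(300)
y = rng.standard_normal(200)

lhs = np.dot(A @ x, y)
print("exact adjoint gap:     ", abs(lhs - np.dot(x, A.T @ y)))   # ~0 (round-off)
print("mismatched adjoint gap:", abs(lhs - np.dot(x, B @ y)))     # O(mismatch)

The convergence analysis in the cited works quantifies how such a gap perturbs the fixed points of primal-dual proximal iterations.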
Participants:
While model-based iterative methods can be used for solving inverse problems arising in image processing, their practicability might be limited due to tedious parameterization and slow convergence. In addition, inadequate solutions can be obtained when the retained priors do not perfectly fit the solution space. Deep learning methods offer an alternative approach that is fast, leverages information from large data sets, and thus can reach high reconstruction quality. However, these methods usually rely on black boxes not accounting for the physics of the imaging system, and their lack of interpretability is often deplored. At the crossroads of both methods, unfolded deep learning techniques have been recently proposed. They incorporate the physics of the model and iterative optimization algorithms into a neural network design, leading to superior performance in various applications.
In 27, we address the problem of image reconstruction for region-of-interest (ROI) computed tomography (CT). We introduced a novel, unfolded deep learning approach called U-RDBFB designed for ROI CT reconstruction from limited data. Few-view truncated data are efficiently handled thanks to a robust non-convex data fidelity function combined with sparsity inducing regularization functions. Iterations of a block dual forward-backward algorithm, embedded in an iterative reweighted scheme, are then unrolled over a neural network architecture, allowing the learning of various parameters in a supervised manner. Our experiments show an improvement over various state-of-the-art methods, including model-based iterative schemes, deep learning architectures, and deep unfolding methods.
In 16, we propose a deep neural network based on unrolling a Half-Quadratic algorithm to address the problem of sparse signal reconstruction arising in analytical chemistry. This allows us to build interpretable layers mirroring iterations, making it possible to learn automatically data-driven hyperparameters such as regularization and stepsizes. Furthermore, we propose a dictionary of custom activation functions derived from potentials used in the original variational model. This interpretation of activations can be useful for analyzing the stability of neural networks. The efficiency of our method in comparison to iterative and learning-based methods is showcased through various experiments conducted on realistic mass spectrometry databases with various blur kernels and noise levels. Deep unrolling of primal-dual proximal algorithms has also been considered in 72, for the same application context.
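To make the unrolling principle concrete, here is a generic LISTA-style sketch in PyTorch: a fixed number of ISTA iterations are unrolled into network layers whose stepsizes and thresholds become trainable parameters. This is a hedged illustration only; it is neither the U-RDBFB nor the half-quadratic architecture described above, and sizes, depth and initialization are arbitrary.

# Generic deep-unrolling sketch in PyTorch: K iterations of ISTA unrolled into
# K layers with learnable stepsizes and soft-thresholding levels.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A, n_layers=10):
        super().__init__()
        self.register_buffer("A", A)                                  # known forward operator
        self.steps = nn.Parameter(0.1 * torch.ones(n_layers))         # learned stepsizes
        self.thresholds = nn.Parameter(0.01 * torch.ones(n_layers))   # learned thresholds

    def forward(self, y):
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step, thr in zip(self.steps, self.thresholds):
            grad = (x @ self.A.T - y) @ self.A          # gradient of 0.5*||A x - y||^2
            z = x - step * grad
            x = torch.sign(z) * torch.relu(torch.abs(z) - thr)        # soft threshold
        return x

A = torch.randn(30, 50)
model = UnrolledISTA(A)
y = torch.randn(8, 30)                                  # batch of measurements
x_hat = model(y)                                        # trainable end to end
print(x_hat.shape)                                      # torch.Size([8, 50])

In the actual works, the unrolled iterations, penalties and activation functions mirror the specific variational models and algorithms cited above, which is what makes the resulting layers interpretable.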
Participants:
We investigate image restoration approaches in the context of the development of novel laser strategies in multiphoton microscopy (MPM). The resolution of the MPM device is quantified by a procedure of point-spread-function (PSF) assessment led by an original, robust, and reliable computational approach. The estimated values for the PSF width are shown to be comparable to standard values found in optical microscopy. Advanced optimization methods taking advantage of modern multicore computing devices have been developed 6. This allows us to derive a new instrumental and computational pipeline for MPM of biomedical structures. We demonstrate in 13 the interest of our pipeline for imaging bacteria without any labelling process. In 65, we present a novel approach for addressing the MPM image restoration inverse problem in an end-to-end fashion. Our comprehensive restoration pipeline revisits the conventional restoration protocol from acquisition to the final restored outcome.
Participants:
Finding the global minimum of a nonconvex optimization problem is a notoriously hard task appearing in numerous applications, from signal processing to machine learning. In 44, we introduce a new simulated annealing approach that selects the cooling schedule on the fly. Starting from a variational formulation of the problem of joint temperature and proposal adaptation, we derive an alternating Bregman proximal algorithm to minimize the resulting cost, obtaining the sequence of Boltzmann distributions and proposals.
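For reference, a plain textbook simulated-annealing loop with a fixed geometric cooling schedule is sketched below on a 1D toy objective; the variational, on-the-fly temperature and proposal adaptation of the cited work 44 is intentionally not reproduced.

# Plain simulated annealing on a 1D multimodal cost, with a fixed geometric
# cooling schedule and a Gaussian random-walk proposal (toy example).
import numpy as np

def cost(x):
    return 0.1 * x**2 + np.sin(3.0 * x)          # nonconvex toy objective

rng = np.random.default_rng(3)
x, fx = 5.0, cost(5.0)
T, cooling, sigma = 1.0, 0.995, 0.5

for _ in range(5000):
    x_new = x + sigma * rng.standard_normal()    # random-walk proposal
    f_new = cost(x_new)
    # Metropolis acceptance rule at temperature T
    if f_new < fx or rng.random() < np.exp(-(f_new - fx) / T):
        x, fx = x_new, f_new
    T *= cooling                                  # geometric cooling schedule

print("approximate minimizer:", round(x, 3), "cost:", round(fx, 3))

The adaptive scheme of 44 replaces the hand-tuned cooling rate and proposal width with quantities selected on the fly from a variational criterion.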
Participants:
Adaptive importance sampling (AIS) methods are increasingly used for the approximation of distributions and related intractable integrals in the context of Bayesian inference. In 20, we propose a novel algorithm PMCnet that includes an efficient AIS mechanism, to efficiently explore the highly multimodal posterior distribution involved in the training of Bayesian Neural Networks. Numerical results illustrate the excellent performance and the improved exploration capabilities of PMCnet for the training of both shallow and deep neural networks. In 15, we propose an AIS method, called GRAMIS, that iteratively improves the set of proposals by exploiting geometric information of the target to adapt the location and scale parameters of those proposals. A repulsion term is introduced that favors a coordinated exploration of the state space. We provide a theoretical justification of the repulsion term and show the good performance of GRAMIS in problems where the target cannot be easily approximated by a standard uni-modal proposal.
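A minimal population-Monte-Carlo-flavoured AIS sketch is given below to illustrate the adaptation loop (sample from Gaussian proposals, weight against the target, adapt proposal locations by resampling). It is a toy illustration under simplifying assumptions, not PMCnet nor GRAMIS.

# Minimal adaptive importance sampling sketch: Gaussian proposals whose means
# are adapted by resampling weighted samples from a bimodal 1D target.
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):                               # unnormalized bimodal target
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

n_prop, n_per, sigma = 10, 50, 1.0
means = rng.uniform(-10, 10, n_prop)             # initial proposal locations

for it in range(20):
    samples = means[:, None] + sigma * rng.standard_normal((n_prop, n_per))
    x = samples.ravel()
    # Importance weights: target / mixture-of-proposals density (log domain,
    # normalization constants cancel after self-normalization)
    log_mix = np.logaddexp.reduce(
        -0.5 * ((x[None, :] - means[:, None]) / sigma) ** 2, axis=0
    ) - np.log(n_prop)
    logw = log_target(x) - log_mix
    w = np.exp(logw - logw.max()); w /= w.sum()
    # Adaptation: resample new proposal locations according to the weights
    means = rng.choice(x, size=n_prop, p=w)

print("proposal locations after adaptation:", np.round(np.sort(means), 2))

GRAMIS additionally exploits geometric information about the target and a repulsion term among proposals, which this sketch omits.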
Participant:
The discovery of drug-target interactions (DTIs) is a very promising area of research with great potential. The accurate identification of reliable interactions among drugs and proteins via computational methods, which typically leverage heterogeneous information retrieved from diverse data sources, can boost the development of effective pharmaceuticals.
In 21, we extend the previous approach by incorporating expert knowledge metadata and a deep factorization term in our formulation. Our algorithm is applied to in silico predictions of antivirals for monkeypox. In 19, we propose a Siamese-like architecture with two processing channels based on deep convolutional transform learning for drug-drug interaction prediction.
We recently focused on computational models for repurposing drugs with the potential to treat drug-resistant bacterial infections. In 66, we produced a dataset of drug-bacteria associations (DBA) affecting humans, conducted genomic similarity computations for all known bacteria impacting humans, and assessed structural similarities for all antibiotic drugs.
Participants:
Modeling and inference with multivariate sequences is central in a number of signal processing applications such as acoustics, social network analysis, biomedicine, and finance, to name a few. The linear-Gaussian state-space model is a common way to describe a time series through the evolution of a hidden state, with the advantage of presenting a simple inference procedure due to the celebrated Kalman filter. A fundamental question when analyzing multivariate sequences is the search for relationships between their entries (or the modeled hidden states), especially when the inherent structure is a non-fully connected graph. In such a context, graphical modeling combined with parsimony constraints makes it possible to limit the proliferation of parameters and enables a compact data representation that is easier to interpret by the experts.
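As a reminder of the inference layer on which this line of work builds, the following sketch runs the standard Kalman filter recursion for a small linear-Gaussian state-space model; all matrices are toy placeholders, and the transition matrix A is precisely the object interpreted as a graph in the works cited below.

# Standard Kalman filter for x_k = A x_{k-1} + q,  y_k = H x_k + r (toy sizes).
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.9, 0.1], [0.0, 0.8]])     # state transition (the "graph" matrix)
H = np.eye(2)                               # observation model
Q, R = 0.01 * np.eye(2), 0.1 * np.eye(2)    # process / observation noise covariances

# Simulate a short sequence
T_len, x = 100, np.zeros(2)
ys = []
for _ in range(T_len):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(2), R))

# Filtering recursion
m, P = np.zeros(2), np.eye(2)
for y in ys:
    m_pred, P_pred = A @ m, A @ P @ A.T + Q             # predict
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)                  # Kalman gain
    m = m_pred + K @ (y - H @ m_pred)                     # update
    P = (np.eye(2) - K @ H) @ P_pred

print("final filtered state:", np.round(m, 3))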
We recently introduced a novel perspective by relating the state transition matrix to the adjacency matrix of a directed graph, also interpreted as encoding the causal relationships among state dimensions in the Granger-causality sense. Under this perspective, we propose in 37 a majorization-minimization algorithm for estimating the sought graph under a non-convex sparsity prior. In 31, we illustrate the benefits of the proposed graph model and inference technique over standard Granger causality methods on challenging climate problems. In 79, we specialize the proposed methodology to the problem of stock forecasting, by proposing an online training strategy and a probabilistic assessment of the trading decision. A multi-layer model is considered in 28 to address the prediction of day-ahead crypto-currency prices.
Participants:
The core of many approaches for the resolution of variational inverse problems arising in signal and image processing consists of promoting the sought solution to have a sparse representation in a well-suited space. A crucial task in this context is the choice of a good sparsity prior that can ensure a good trade-off between the quality of the solution and the resulting computational cost.
In 67, we propose a novel nonsmooth and nonconvex variational formulation of the joint reconstruction/feature extraction problem. For this purpose, we introduce a versatile generalised Gaussian prior whose parameters, including its exponents, are space-variant. We then design an alternating proximal-based optimisation algorithm that efficiently exploits the structure of the proposed nonconvex objective function, and we analyze its convergence. As shown in numerical experiments conducted on joint segmentation/deblurring tasks, the proposed method provides high-quality results.
In 30, we focus on the inverse problem of joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can somewhat be separated by low-pass filtering. We combine the generalized quasi-norm ratio SOOT/SPOQ sparse penalties with the BEADS ternary assisted source separation algorithm. This results in a convergent and efficient tool, relying on a novel trust-region block alternating variable metric forward-backward approach.
Participants:
Ensemble learning leverages multiple models (i.e., weak learners) on a common machine learning task to enhance prediction performance. Basic ensembling approaches average the weak learners' outputs, while more sophisticated ones stack a machine learning model between the weak learners' outputs and the final prediction. The work 81 fuses the two aforementioned frameworks. We introduce an aggregated f-average (AFA) shallow neural network which models and combines different types of averages to perform an optimal aggregation of the weak learners' predictions. We emphasise its interpretable architecture and simple training strategy, and illustrate its good performance on the problem of few-shot class incremental learning.
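To convey the flavour of such an aggregation, the hedged sketch below combines several generalized means (arithmetic, geometric, harmonic) of weak learners' scores through learned mixing weights; it is an illustrative simplification, not the exact AFA architecture of the cited work.

# Sketch of an "aggregated average" idea: mix several generalized means of the
# weak learners' scores with trainable weights (illustrative only).
import torch
import torch.nn as nn

class FAverage(nn.Module):
    def __init__(self):
        super().__init__()
        self.mix = nn.Parameter(torch.ones(3) / 3)          # weights of the 3 averages

    def forward(self, p):                      # p: (batch, n_learners), scores in (0, 1)
        arith = p.mean(dim=1)
        geom = p.clamp_min(1e-6).log().mean(dim=1).exp()
        harm = 1.0 / (1.0 / p.clamp_min(1e-6)).mean(dim=1)
        stacked = torch.stack([arith, geom, harm], dim=1)    # (batch, 3)
        w = torch.softmax(self.mix, dim=0)
        return stacked @ w                                   # aggregated prediction

agg = FAverage()
p = torch.rand(5, 7)                           # 7 weak learners, batch of 5 samples
print(agg(p).shape)                            # torch.Size([5])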
Participants:
Let us recall that a neural network based lifting decomposition, where the prediction and update steps are performed using an FCNN, was proposed in 2021. While the different FCNN models were learned separately in this work, we have recently proposed novel joint learning approaches to find the optimal FCNN models in 11. The latter aim to learn the FCNN prediction and update models simultaneously. To this end, a multi-level optimization technique has been proposed. This technique consists in interpreting the lifting-based multiresolution decomposition as a full architecture whose FCNN models are globally learned at the same time through a unique loss function. In this respect, two new loss functions are investigated. While the first one resorts to a weighted sum of the loss functions used to optimize the prediction and update stages, the second one aims to obtain a good approximation of rate-distortion functions.
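The structure being optimized end to end can be sketched as a single lifting step with learnable predict/update operators, as below; the tiny MLPs, the pointwise prediction and the toy two-band loss are illustrative placeholders, not the FCNN models or loss functions of the cited work.

# One-level lifting step with learnable predict/update operators (PyTorch sketch).
import torch
import torch.nn as nn

class LiftingStep(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.predict = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.update = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):                        # x: (batch, length), length even
        even, odd = x[:, ::2], x[:, 1::2]        # split
        detail = odd - self.predict(even.unsqueeze(-1)).squeeze(-1)     # predict step
        approx = even + self.update(detail.unsqueeze(-1)).squeeze(-1)   # update step
        return approx, detail

step = LiftingStep()
x = torch.randn(4, 64)
approx, detail = step(x)                         # (4, 32) approximation and detail bands
# Toy weighted sum over the two bands: both sub-networks receive gradients and
# are therefore trained jointly, echoing the joint learning idea above.
loss = detail.abs().mean() + 0.1 * approx.pow(2).mean()
loss.backward()
print(approx.shape, detail.shape)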
Participants:
We introduce in 25 ABBA networks, a novel class of (almost) non-negative neural networks, which are shown to possess a series of appealing properties. In particular, we demonstrate that these networks are universal approximators while enjoying the advantages of non-negative weighted networks. We derive tight Lipschitz bounds both in the fully connected and convolutional cases. We propose a strategy for designing ABBA nets that are robust against adversarial attacks, by finely controlling the Lipschitz constant of the network during the training phase. We show that our method outperforms other state-of-the-art defenses against adversarial white-box attackers. Experiments are performed on image classification tasks on four benchmark datasets.
We introduce in 26 a novel approach for building a robust Automatic Gesture Recognition system based on Surface Electromyographic (sEMG) signals, acquired at the forearm level. Our main contribution is to propose new constrained learning strategies that ensure robustness against adversarial perturbations by controlling the Lipschitz constant of the classifier. We focus on positive neural networks for which accurate Lipschitz bounds can be derived, and we propose different spectral norm constraints offering robustness guarantees from a theoretical viewpoint. Experimental results on two distinct datasets highlight that a good trade-off in terms of accuracy and performance is achieved. We then demonstrate the robustness of our models, compared to standard trained classifiers in three scenarios, considering both white-box and black-box attacks.
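To illustrate the quantity being controlled in both works above, the sketch below computes a crude Lipschitz upper bound of a feed-forward ReLU network as the product of its layer spectral norms, estimated by power iteration. This is a generic illustration, not the ABBA construction nor the constrained training procedures of the cited papers.

# Crude Lipschitz upper bound of a ReLU network: product of layer spectral norms.
import torch
import torch.nn as nn

def spectral_norm(weight, n_iter=50):
    """Largest singular value of a weight matrix via power iteration."""
    v = torch.randn(weight.shape[1])
    for _ in range(n_iter):
        u = weight @ v; u = u / u.norm()
        v = weight.T @ u; v = v / v.norm()
    return torch.dot(u, weight @ v)

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 2))

bound = 1.0
with torch.no_grad():
    for layer in net:
        if isinstance(layer, nn.Linear):
            bound *= spectral_norm(layer.weight).item()   # ReLU is 1-Lipschitz

print("Lipschitz upper bound (product of spectral norms):", round(bound, 2))
# During constrained training, such a bound can be kept below a target value,
# e.g. by rescaling a weight matrix whenever its spectral norm exceeds a threshold.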
Participants:
A neural network (NN) approach is introduced in 29 to estimate the noise-free speed and torque from noisy measured currents and voltages in induction motors with variable speed drives. The proposed estimation method is composed of a neural speed-torque estimator and a neural signal denoiser. A new training strategy is introduced that combines a large amount of simulated data and a small amount of real-world data. The proposed denoiser does not require noise-free ground-truth data for training; instead, it uses classification labels that are easily generated from real-world data. This approach improves upon existing noise removal techniques by learning to denoise as well as to classify noisy signals into static and dynamic parts. The proposed NN-based denoiser generates clean estimates of currents and voltages that are then used as inputs to the NN estimator of speed and torque. Extensive experiments show that the proposed joint denoising-estimation strategy performs very well on real data benchmarks. The proposed denoising method is shown to outperform several widely used denoising methods, and a proper ablation study of the proposed method is conducted.
Participant:
Classification has been the focal point of research on adversarial attacks, but only a few works investigate methods suited to denser prediction tasks, such as semantic segmentation. The methods proposed in these works do not accurately solve the adversarial segmentation problem and, therefore, overestimate the size of the perturbations required to fool models. In 53, we propose a white-box attack for these models based on a proximal splitting to produce adversarial perturbations with much smaller
Participants:
Radiology is a major domain of research and applications for artificial intelligence in medicine. A critical aspect in cancer radiology is the development of new and effective criteria for the assessment of patients, based in particular on imaging data. Widely used criteria in the context of immunotherapy are RECIST and iRECIST, which evaluate certain types of atypical responses observed under immunotherapy, namely pseudoprogression. They are still being evaluated and aim at a consistent recording of responses to therapies. One drawback of these criteria is an inescapable delay of one month between a baseline and a second MRI or CT scan to assess lesion progression.
We also exhibited a correlation between total metastatic volume as measured on CT scans and the presence of ctDNA (circulating tumour DNA) in liquid biopsies in a prospective study of more than 1000 patients. This is important because liquid biopsies have the potential to be cheaper, faster, and easier to interpret than CT scans, particularly in countries where CT scanners are scarce 60.
Some of our early investigative efforts on the prediction of patient outcome regarding the treatment of cancer patients with immunotherapy were published in 61. In 12, we proposed to estimate the 3D fat and muscle masses as prognostic markers. We continued our work on the diagnosis and prognosis of spondyloarthritis, a debilitating auto-immune disease difficult to diagnose on MRI data of the pelvis 5. The use of AI provides an additional opinion and also the ability for non-specialist clinicians to diagnose the illness, alleviating the need for rare expertise in this type of imaging. This year we have experienced difficulties in getting access to the AP-HP datacenter but managed to continue our investigation efforts. We also recently validated deep-learning methods for musculo-skeletal imaging to help diagnose a relatively rare auto-immune disease, axial spondyloarthritis, on a cohort provided by UCB Pharma 76, with good results.
Participants:
Deep learning has enabled a lot of progress in computer vision tasks in the last 10 years. However, it is widely acknowledged that deep network results are not always stable or easy to interpret. Also, deep networks require significant computing resources in both computation and memory, often requiring graphical processing units (GPUs) for both training and inference. A recent popular topic of interest is to study whether neural networks can be drastically simplified by using only binary weights. By construction, this would also regularize networks. There can also be a benefit in interpretability, because such networks would essentially be learning compositions of mathematical morphology operators, which are often subject to expected mathematical behaviour.
However, binary networks are difficult to train. In this work, we defined a notion of binary morphological neuron and built neural networks that use these as their building blocks instead of convolutions. This makes scientific sense, since using morphological operators can be thought of as using activated linear filters, which is a basic construction mechanism in most CNNs. In this manner, training can be performed naturally using existing frameworks (e.g., PyTorch); then, when training is completed, the weights can be binarized with a simple method without loss of performance. Interestingly, recent work has shown that binarized networks can be trained using an effective subgradient descent method based on a proximal operator interpretation. This is a very promising avenue of research, and we are currently working on extending this work to more complex networks and to other types of morphological operators.
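A dilation-style morphological layer of the kind alluded to above can be sketched in PyTorch as follows: the sum-of-products of a convolution is replaced by a max-of-sums (max-plus algebra) over a learnable structuring element. This is an illustrative sketch; the exact neuron definition, binarization procedure and architectures of the work cited below are not reproduced.

# Grey-level dilation layer: max-plus correlation with a learnable structuring element.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Dilation2d(nn.Module):
    def __init__(self, kernel_size=3):
        super().__init__()
        self.k = kernel_size
        self.weight = nn.Parameter(torch.zeros(kernel_size * kernel_size))  # structuring element

    def forward(self, x):                         # x: (batch, 1, H, W)
        pad = self.k // 2
        patches = F.unfold(x, self.k, padding=pad)            # (batch, k*k, H*W)
        out = (patches + self.weight[None, :, None]).max(dim=1).values
        return out.view(x.shape[0], 1, x.shape[2], x.shape[3])

layer = Dilation2d()
img = torch.rand(2, 1, 28, 28)
dilated = layer(img)                              # grey-level dilation of the batch
print(dilated.shape)                              # torch.Size([2, 1, 28, 28])
# After training, thresholding the learned weights yields a flat (binary)
# structuring element, recovering a classical morphological operator.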
We have shown in 34 that these networks can indeed learn sequences of operators as well as perform classification tasks with good results.
Participants:
We have started a strong collaboration with NAIST on the use of generative methods for the estimation of osteoporosis from plain X-ray images. This is a major public health issue, and the use of AI methods can help alleviate the need for expensive and invasive exams such as DEXA (dual-energy X-ray absorptiometry) 43.
Participants: Loïc Le Bescond, Maria Vakalopoulou, Stergios Christodoulidis and Hugues Talbot with Marvin Lerousseau, Ingrid Garberis and Fabrice André, Gustave-Roussy.
Whole-slide image analysis is a challenging area of research in medical imaging. The goal is to analyze a whole slide of tissue, which can be several gigapixels in size, and to extract relevant information from it. This is a challenging task because of the size of the images, but also because of the complexity of the tissue, which can contain many different types of cells, and because of the presence of artefacts such as folds, tears, or stains.
This year we published a major analysis conducted at Gustave-Roussy concerning the response of some breast cancers with variable genetic expression to a novel immunotherapy regimen. This was published in 23 and has the potential to be highly influential in the field of breast cancer treatment.
We also published a novel study on the importance of Out-of-Distribution samples in the context of Multiple-Instance Learning, a methodology that can help learning methods to cope with the very large size of the data, by considering only samples in a bag, rather than individual samples. This work was published in 75 and is being prepared for journal publication.
Participants:
The robustness of image segmentation has been an important research topic in the past few years as segmentation models have reached production-level accuracy. However, like classification models, segmentation models can be vulnerable to adversarial perturbations, which hinders their use in critical-decision systems like healthcare or autonomous driving. Recently, randomized smoothing has been proposed to certify segmentation predictions by adding Gaussian noise to the input to obtain theoretical guarantees. However, this method exhibits a trade-off between the amount of added noise and the level of certification achieved. In this work, we address the problem of certifying segmentation predictions using a combination of randomized smoothing and diffusion models. We performed extensive experiments to prove the effectiveness of the method in computer vision 32 and medical imaging 47.
Participant:
Whole slide image (WSI) classification is a critical task in computational pathology, requiring the processing of gigapixel-sized images, which is challenging for current deep-learning methods. The current state-of-the-art methods are based on multi-instance learning (MIL) schemes, which usually rely on pretrained features to represent the instances. Due to the lack of task-specific annotated data, these features are either obtained from well-established backbones trained on natural images or, more recently, from self-supervised models pretrained on histopathology. However, both approaches yield task-agnostic features, resulting in a performance loss compared to appropriate task-related supervision, if available. In 57, we show that when task-specific annotations are limited, we can inject such supervision into downstream task training to reduce the gap between fully task-tuned and task-agnostic features. We propose Prompt-MIL, an MIL framework that integrates prompts into WSI classification. Prompt-MIL adopts a prompt tuning mechanism, where only a small fraction of parameters calibrates the pretrained features to encode task-specific information, rather than the conventional full fine-tuning approaches. Moreover, in 58, we adapt SAM for semantic segmentation by first introducing trainable class prompts, followed by further enhancements through the incorporation of a pathology encoder, specifically a pathology foundation model. Our framework, SAM-Path, enhances SAM's ability to conduct semantic segmentation in digital pathology without human input prompts.
Participants:
Renal transplantation emerges as the most effective solution for end-stage renal disease. Occurring from complex causes, a substantial risk of transplant chronic dysfunction persists and may lead to graft loss. Medical imaging plays a substantial role in renal transplant monitoring in clinical practice. However, graft supervision is multi-disciplinary, notably joining nephrology, urology, and radiology, and identifying robust biomarkers from such high-dimensional and complex data for prognosis is challenging. In this work, taking inspiration from the recent success of Large Language Models (LLMs), we propose MEDIMP – Medical Images with clinical Prompts – a model to learn meaningful multi-modal representations of renal transplant Dynamic Contrast-Enhanced Magnetic Resonance Imaging by incorporating structural clinico-biological data after translating them into text prompts. MEDIMP [milecki:hal-04040697] is based on contrastive learning from joint text-image paired embeddings to perform this challenging task. Moreover, we propose a framework that generates medical prompts using automatic textual data augmentations from LLMs. Our goal is to learn meaningful manifolds of renal transplant DCE MRI, of interest for the prognosis of the transplant or patient status (2, 3, and 4 years after the transplant), making the most efficient use of the limited available multi-modal data. Extensive experiments and comparisons with other renal transplant representation learning methods with limited data prove the effectiveness of MEDIMP in a relevant clinical setting, giving new directions toward medical prompts.
Participant:
In 4, we leverage path differentiability and a recent result on nonsmooth implicit differentiation calculus to give sufficient conditions ensuring that the solution to a monotone inclusion problem is path differentiable, with formulas for computing its generalized gradient. A direct consequence of our result is that these solutions are differentiable almost everywhere. Our approach is fully compatible with automatic differentiation and comes with assumptions that are easy to check, roughly speaking: semialgebraicity and strong monotonicity. We illustrate the scope of our results by considering three fundamental composite problem settings: strongly convex problems, dual solutions to convex minimization problems, and primal-dual solutions to min-max problems.
Participants:
Applications of machine learning techniques for materials modeling typically involve functions known to be equivariant or invariant to specific symmetries. While graph neural networks (GNNs) have proven successful in such tasks, they enforce symmetries via the model architecture, which often reduces their expressivity, scalability, and comprehensibility. In our recent work 39, we have introduced (1) a flexible framework relying on stochastic frame-averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations, and (2) FAENet, a simple, fast, and expressive GNN, optimized for SFA, that processes geometric information without any symmetry-preserving design constraints. We have shown the validity of our method theoretically and have empirically demonstrated its superior accuracy and computational scalability in materials modeling on the OC20 dataset (S2EF, IS2RE) as well as on common molecular modeling tasks (QM9, QM7-X).
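The key idea of stochastic frame averaging is to obtain (approximate) symmetry through data transformations rather than architectural constraints: at each forward pass, the input coordinates are mapped by one or a few sampled frame elements and the backbone's predictions are averaged. The sketch below illustrates this for an E(3)-invariant target; the backbone `model` and the frame construction (here, a given set of candidate rotations) are placeholders, not the FAENet implementation of 39.

```python
import torch

def stochastic_frame_average(model, positions, features, frame_rotations, n_samples=1):
    """Average an invariant prediction over a few randomly sampled frame rotations.

    positions: (n_atoms, 3) coordinates; features: node features passed through unchanged;
    frame_rotations: list of (3, 3) rotation matrices defining the (data-dependent) frame.
    """
    preds = []
    for _ in range(n_samples):
        idx = torch.randint(len(frame_rotations), (1,)).item()
        R = frame_rotations[idx]
        preds.append(model(positions @ R.t(), features))  # rotate coordinates, keep features
    return torch.stack(preds).mean(dim=0)
```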
Participants:
The recovery of time-varying graph signals is a fundamental problem with numerous applications in sensor networks and time-series forecasting. Effectively capturing the spatio-temporal information in these signals is essential for the downstream tasks. Previous studies have used the smoothness of the temporal differences of such graph signals as a prior assumption. Nevertheless, this smoothness assumption can degrade performance in the corresponding application when the prior does not hold. In our recent work, we relax this hypothesis by including a learning module and propose a Time Graph Neural Network (TimeGNN) for the recovery of time-varying graph signals 36. Our algorithm uses an encoder-decoder architecture with a specialized loss composed of a mean squared error function and a Sobolev smoothness operator. TimeGNN has shown competitive performance against previous methods on real datasets.
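A loss of this kind combines a data-fidelity term with a Sobolev smoothness penalty applied to the temporal differences of the reconstructed signal. The sketch below is one plausible instantiation; the masking scheme and the weights lam and eps are assumptions, not the exact settings of 36.

```python
import torch

def recovery_loss(x_hat, x_obs, mask, laplacian, lam=1e-2, eps=0.1):
    """Masked MSE plus a Sobolev smoothness penalty on temporal differences.

    x_hat, x_obs: (time, nodes) reconstructed and observed graph signals;
    mask: (time, nodes) binary sampling mask; laplacian: (nodes, nodes) graph Laplacian.
    """
    mse = ((mask * (x_hat - x_obs)) ** 2).mean()
    diff = x_hat[1:] - x_hat[:-1]                                  # temporal differences
    sobolev = laplacian + eps * torch.eye(laplacian.shape[0], device=laplacian.device)
    smooth = torch.einsum('tn,nm,tm->', diff, sobolev, diff) / diff.shape[0]
    return mse + lam * smooth
```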
Participant:
GNNs have succeeded in various computer science applications, yet deep GNNs underperform their shallow counterparts despite deep learning's success in other domains. Over-smoothing and over-squashing are key challenges when stacking graph convolutional layers, hindering deep representation learning and information propagation from distant nodes. Our recent work reveals that over-smoothing and over-squashing are intrinsically related to the spectral gap of the graph Laplacian, resulting in an inevitable trade-off between these two issues, as they cannot be alleviated simultaneously 42. To achieve a suitable compromise, we have proposed adding and removing edges as a viable approach. We introduce the Stochastic Jost and Liu Curvature Rewiring (SJLR) algorithm, which is computationally efficient and preserves fundamental properties compared to previous curvature-based methods. Unlike existing approaches, SJLR performs edge addition and removal during GNN training while maintaining the graph unchanged during testing. Comprehensive comparisons have demonstrated SJLR's competitive performance in addressing over-smoothing and over-squashing.
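SJLR modifies the graph only while the GNN is being trained. As a rough illustration of this kind of curvature-guided stochastic rewiring, the sketch below samples edges to add and to remove according to precomputed nonnegative scores (e.g., derived from Jost–Liu curvature bounds); the scoring and sampling rules here are placeholders and do not reproduce the exact SJLR procedure of 42.

```python
import random

def stochastic_rewire(train_edges, add_scores, remove_scores, n_add, n_remove):
    """Training-time rewiring: sample candidate edges to add and existing edges to remove.

    add_scores / remove_scores: dicts mapping edges to nonnegative sampling scores.
    The returned edge set is used only during training; the test graph stays unchanged.
    """
    candidates = list(add_scores.keys())
    added = random.choices(candidates, weights=[add_scores[e] for e in candidates], k=n_add)
    existing = list(train_edges)
    removed = set(random.choices(existing, weights=[remove_scores[e] for e in existing], k=n_remove))
    return (set(train_edges) - removed) | set(added)
```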
Participant:
Capturing higher-order relationships between nodes is crucial to increase the expressive power of GNNs. However, existing methods for capturing these relationships can be computationally infeasible for large-scale graphs. In our work, we have introduced a new higher-order sparse convolution based on the Sobolev norm of graph signals 41. Our Sparse Sobolev GNN (S-SobGNN) computes a cascade of filters in each layer with increasing Hadamard powers to obtain a more diverse set of functions, and a linear combination layer then weights the embeddings of each filter. We have evaluated S-SobGNN on several semi-supervised learning applications, showing competitive performance compared to several state-of-the-art methods.
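The appeal of Hadamard (element-wise) powers is that, unlike matrix powers, they preserve the sparsity pattern of the underlying graph operator while still producing a richer family of filters. The layer below is a simplified sketch of this construction; the choice of operator L + eps*I, the activation, and the mixing scheme are illustrative assumptions, not the exact S-SobGNN layer of 41.

```python
import torch
import torch.nn as nn

class SparseSobolevLayerSketch(nn.Module):
    """Cascade of filters built from Hadamard powers of a Sobolev-type graph operator."""
    def __init__(self, in_dim, out_dim, laplacian, eps=1.0, max_power=3):
        super().__init__()
        base = laplacian + eps * torch.eye(laplacian.shape[0])
        # Element-wise powers keep the sparsity pattern of the base operator,
        # whereas matrix powers would progressively densify it.
        self.register_buffer('ops', torch.stack([base.pow(k) for k in range(1, max_power + 1)]))
        self.filters = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(max_power)])
        self.mix = nn.Parameter(torch.ones(max_power) / max_power)  # learned combination weights

    def forward(self, x):                         # x: (n_nodes, in_dim)
        outs = [self.filters[k](self.ops[k] @ x) for k in range(len(self.filters))]
        return sum(w * torch.relu(o) for w, o in zip(self.mix, outs))
```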
Participant:
Influence maximization is a well-studied combinatorial optimization problem. Finding the seed set that maximizes the influence spread over a network is known to be NP-hard. Although a greedy algorithm can provide near-optimal solutions, the cost of the influence-estimation subproblem makes it computationally inefficient. In our recent work, we have proposed GLIE, a graph neural network that learns to estimate the influence spread of the independent cascade model 51. GLIE relies on a theoretical upper bound that is tightened through supervised training. Experiments indicate that it provides accurate influence estimation for real graphs up to 10 times larger than the training set. Subsequently, we have incorporated it into two influence maximization techniques. The proposed algorithms are inductive, meaning they are trained on graphs with fewer than 300 nodes and up to 5 seeds, and tested on graphs with millions of nodes and up to 200 seeds. The final method exhibits the most promising combination of time efficiency and influence quality, outperforming several baselines.
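The way a learned estimator plugs into seed selection can be pictured with the standard greedy scheme, where a GLIE-like network replaces costly Monte-Carlo simulation when evaluating marginal gains. The function below is a generic sketch under that assumption; `estimate_influence` (a callable mapping a graph and a seed set to an estimated spread) and the `graph.nodes` interface are hypothetical, not the exact algorithms of 51.

```python
def greedy_influence_maximization(graph, estimate_influence, budget):
    """Greedy seed selection driven by a learned influence estimator."""
    seeds = set()
    for _ in range(budget):
        current = estimate_influence(graph, seeds)
        best_node, best_gain = None, float('-inf')
        for node in graph.nodes:
            if node in seeds:
                continue
            gain = estimate_influence(graph, seeds | {node}) - current  # marginal gain
            if gain > best_gain:
                best_node, best_gain = node, gain
        seeds.add(best_node)
    return seeds
```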
MAJORIS project on cordis.europa.eu
Mathematical optimization is the key to solving many problems in science, based on the observation that physical systems obey a general principle of least action. While some problems can be solved analytically, many more can only be solved via numerical algorithms. Research in this domain has proved essential over many years. In addition, science in general is changing. Increasingly, in biology, medicine, astronomy, chemistry, and physics, large amounts of data are collected by constantly improving signal and image acquisition devices, and these data must be analyzed by sophisticated optimization tools. In this proposal, we consider handling optimization problems with large datasets. This means minimizing a cost function with a complex structure and many variables. The computational load for solving these problems is too great for even state-of-the-art algorithms. Thus, only relatively rudimentary data processing techniques are employed, reducing the quality of the results and limiting the outcomes that can be achieved via these novel instruments. New algorithms must be designed with computational scalability, robustness, and versatility in mind.
In this context, Majorization-Minimization (MM) approaches have a crucial role to play. They form a class of efficient and effective optimization algorithms that benefit from solid theoretical foundations. The MAJORIS project aims at proposing a breakthrough in MM algorithms, so that they remain efficient when dealing with big data. I propose to tackle several challenging questions concerning algorithm design, including acceleration strategies and convergence analysis for complex cost functions and inexact schemes. I will also address practical implementations on massively parallel and distributed architectures. Three specific applications are targeted: super-resolution in multiphoton microscopy in biology; on-the-fly reconstruction for 3D breast tomosynthesis in medical imaging; and mass spectrometry data processing in chemistry.
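The MM principle consists in iteratively minimizing a surrogate that majorizes the cost at the current iterate. As a minimal, textbook illustration (not one of the algorithms developed in MAJORIS), the sketch below uses a quadratic majorant built from a Lipschitz constant of the gradient, whose minimization yields a simple explicit update.

```python
import numpy as np

def mm_quadratic_majorant(grad, lipschitz, x0, n_iter=100):
    """Majorization-Minimization with the quadratic majorant
    q_k(x) = f(x_k) + <grad f(x_k), x - x_k> + (L/2) * ||x - x_k||^2,
    whose minimizer gives the explicit update below."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - grad(x) / lipschitz   # argmin of the quadratic surrogate q_k
    return x

# Usage sketch (hypothetical small example): minimize f(x) = 0.5 * ||A x - b||^2 with
# grad = lambda x: A.T @ (A @ x - b) and lipschitz = np.linalg.norm(A, 2) ** 2.
```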
Participants:
The PRISM program at Gustave Roussy, a major research program on precision medicine, has been funded by the ANR since 2018 through a €5 million grant. This was a major achievement for the team, as it has allowed us to continue our research on the use of AI for precision medicine.
It has now received the Institut Hospitalo-Universitaire (IHU) label. The project's vision is transformative in its approach to cancer treatment: it aims to better understand the biology of each patient's cancer and to identify, from diagnosis, the patients with the most aggressive tumours in order to offer them the most appropriate treatment. This IHU label is a step toward making Gustave Roussy the largest campus in Europe dedicated to cancer.
The PRISM program has thus become one of the five IHUs, each endowed with €30–40 million, announced by the French Government as part of the third call for projects of the France 2030 plan. The objective of the IHUs is to strengthen French medical research capacity by developing world-class clinical and translational research involving universities, health establishments, research organizations, and companies.
PRISM is the result of several years of research conducted by the teams of Gustave Roussy in partnership with CentraleSupélec, Université Paris-Saclay, Inserm and Unicancer.
Data challenges are a major means of publicizing AI results and making them visible.
The members of the team reviewed numerous papers for several international conferences, such as the annual Conference on Computer Vision and Pattern Recognition (CVPR), Medical Image Computing and Computer Assisted Intervention (MICCAI), Conference on Neural Information Processing Systems (NeurIPS), International Conference on Learning Representations (ICLR), IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference on Image Processing (ICIP), IEEE Statistical Signal Processing Workshop (SSP), European Signal Processing Conference (EUSIPCO), AAAI Conference on Artificial Intelligence (AAAI), The Web Conference (WWW), Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), International Conference on Web and Social Media (ICWSM), International Conference on Machine Learning (ICML), International Conference on Complex Networks and Their Applications (Complex Networks), International Workshop on Graph-Based Natural Language Processing (TextGraphs), International Conference on Artificial Intelligence and Statistics (AISTATS), British Machine Vision Conference (BMVC), Montreal AI Symposium, ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), and the Learning on Graphs Conference (LoG).
The members of the team participated in numerous PhD thesis committees, PhD "comités de suivi individuel", HdR committees, and recruiting committees, and served as grant reviewers.
Several permanent members of OPIS were involved as lecturers (lec.) or lab instructors (lab.) in the following courses.
Several student members of OPIS (
The faculty members of the team regularly serve as jury members for the Final Engineering Internship and the Research Innovation Project of CentraleSupélec students, and for the Research Internship of students of the M.Sc. MVA at ENS Paris-Saclay.