Morpheme is a joint project between Inria, CNRS and Université Côte d'Azur (UniCA), involving the Signals and Systems Laboratory (I3S, UMR 6070) and the Institute of Biology Valrose (iBV, CNRS/Inserm).
The scientific objectives of Morpheme are to characterize and model the development and the morphological properties of biological structures from the cell to the supra-cellular scale. Being at the interface between computational science and biology, we aim to understand the morphological changes that occur during development by combining in vivo imaging, image processing and computational modeling.
The morphology and topology of mesoscopic structures indeed have a key influence on the functional behavior of organs. Our goal is to characterize different populations or development conditions based on the shape of cellular and supra-cellular structures, including micro-vascular networks and dendrite/axon networks. Using microscopy or tomography images, we plan to extract quantitative parameters to characterize morphometry over time and across samples. We will then statistically analyze shapes and complex structures to identify relevant markers and define classification tools. Finally, we will propose models explaining the temporal evolution of the observed samples. In doing so, we hope to better understand the development of normal tissues, but also to characterize, at the supra-cellular level, different pathologies such as Fragile X syndrome, Alzheimer's disease or diabetes.
The recent advent of an increasing number of new microscopy techniques, giving access to high-throughput screening and micro- or nano-metric resolutions, provides a means for quantitative imaging of biological structures and phenomena. To conduct quantitative biological studies based on these new data, it is necessary to develop non-standard, dedicated tools. This requires a multi-disciplinary approach: we need biologists to define experimental protocols and interpret the results, physicists to model the sensors, computer scientists to develop algorithms and mathematicians to model the resulting information. These different areas of expertise are combined within the Morpheme team, creating a fertile environment for exchanging knowledge and providing an optimal framework for the different tasks (imaging, image analysis, classification, modeling). We thus aim at providing the adapted and robust tools required to describe, explain and model fundamental phenomena underlying the morphogenesis of cellular and supra-cellular biological structures. Combining experimental manipulations, in vivo imaging, image processing and computational modeling, we plan to provide methods for the quantitative analysis of the morphological changes that occur during development. This is of key importance as the morphology and topology of mesoscopic structures govern organ and cell function, and alterations in the genetic programs underlying cellular morphogenesis have been linked to a range of pathologies.
Biological questions we will focus on include:
Our goal is to characterize different populations or development conditions based on the shape of cellular and supra-cellular structures, e.g. micro-vascular networks, dendrite/axon networks, or tissues, from 2D, 2D+t, 3D or 3D+t images (obtained with confocal microscopy, video-microscopy, photon microscopy or micro-tomography). We plan to extract shapes or quantitative parameters to characterize the morphometric properties of different samples. On the one hand, we will propose numerical and biological models explaining the temporal evolution of the samples; on the other hand, we will statistically analyze shapes and complex structures to identify relevant markers for classification purposes. This should contribute to a better understanding of the development of normal tissues, but also to a characterization, at the supra-cellular scale, of different pathologies such as Alzheimer's disease, cancer, diabetes, or Fragile X syndrome.
In this multidisciplinary context, several challenges have to be faced. The expertise of biologists concerning sample generation, as well as the optimization of experimental protocols and imaging conditions, is of course crucial. However, imaging protocols optimized for qualitative analysis may be sub-optimal for quantitative biology. Second, sample imaging is only a first step, as we need to extract quantitative information. Achieving quantitative imaging remains an open issue in biology and requires close interactions between biologists, computer scientists and applied mathematicians. On the one hand, experimental and imaging protocols should integrate constraints from the downstream computer-assisted analysis, leading to a trade-off between protocols optimized for qualitative analysis and those optimized for quantitative analysis. On the other hand, computer analysis should integrate constraints specific to the biological problem, from acquisition to quantitative information extraction. There is therefore a need for specificity, to embed precise biological information for a given task; at the same time, a level of generality is desirable for addressing data from different teams acquired with different protocols and/or sensors.
The mathematical modeling of the physics of the acquisition system will yield higher-performance reconstruction/restoration algorithms in terms of accuracy. Therefore, physicists and computer scientists have to work together. Quantitative information extraction also has to deal with the complexity of the structures of interest (e.g., very dense networks, detection of small structures in a volume, multiscale behavior, ...).
Among the applications addressed by the Morpheme team, we can cite:
Image super-resolution techniques exploiting the stochastic fluctuations of image intensities have become a powerful tool in fluorescence microscopy. Compared to other approaches, these techniques can be applied under standard acquisition settings and require neither special microscopes nor special fluorophores. Most of these approaches can be mathematically modeled using second-order statistics, possibly combined with a priori regularization on the desired solution. In this work, we consider a different paradigm and formulate a physics-inspired, data-driven approach based on generative learning. By simulating fluorescence and noise fluctuations by means of a suitable double Poisson-type process, the unknown distribution of the fluctuating sequence of low-resolution and noisy images is approximated via a GAN-type (Generative Adversarial Network) approach where both physical and network parameters are optimized, see 2. Compared to this previous work, we use a double Poisson process to simulate the fluctuations of fluorophores on one side and the noise due to ambient fluorophores on the other side. We also provide theoretical insights on the choice of the corresponding cost functionals and gradient computations, and assess practical performance on simulated Argolight data (see Figure 1).
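As an illustration of the forward model, the sketch below simulates a stack of low-resolution frames with a double Poisson-type process: Poisson-fluctuating photon emission per fluorophore, plus a Poisson background accounting for ambient fluorophores. All names and parameter values (PSF width, photon counts) are illustrative, not those of the actual implementation.

```python
# Minimal sketch (assumed setup): simulate noisy, low-resolution frames from fixed
# fluorophore positions using a double Poisson-type process.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def simulate_stack(positions, n_frames=200, size=64, psf_sigma=1.5,
                   mean_photons=50.0, mean_background=5.0):
    """positions: (N, 2) array of emitter coordinates (illustrative values)."""
    frames = np.empty((n_frames, size, size))
    for t in range(n_frames):
        hi = np.zeros((size, size))
        # Poisson-fluctuating photon emission per fluorophore
        photons = rng.poisson(mean_photons, size=len(positions))
        for (y, x), p in zip(positions, photons):
            hi[int(round(y)) % size, int(round(x)) % size] += p
        blurred = gaussian_filter(hi, psf_sigma)                  # optical blur (PSF)
        background = rng.poisson(mean_background, (size, size))   # ambient fluorophore noise
        frames[t] = blurred + background
    return frames

emitters = rng.uniform(5, 59, size=(30, 2))
stack = simulate_stack(emitters)
print(stack.shape, stack.mean())
```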
We studied the stochastic gradient descent (SGD) algorithm for solving linear inverse problems (e.g., CT image reconstruction) in the Banach space framework of variable exponent Lebesgue spaces.
The work is published in 22.
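For background, here is a minimal sketch of SGD for a linear inverse problem in the standard Euclidean setting; the contribution of the paper lies in carrying out such iterations in variable exponent Lebesgue spaces (via suitable duality maps), which this toy example does not reproduce.

```python
# Euclidean-space sketch: SGD for A x = y, sampling one measurement (row) per iteration.
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
y = A @ x_true + 0.01 * rng.normal(size=m)

x = np.zeros(n)
for k in range(20000):
    i = rng.integers(m)                                          # pick one random measurement
    step = 1.0 / (np.linalg.norm(A[i]) ** 2 * (1 + k / 1000))    # decaying step size
    x -= step * (A[i] @ x - y[i]) * A[i]                         # stochastic gradient of 0.5*(a_i^T x - y_i)^2

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```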
Within the framework of inverse problems, modern approaches in fluorescence microscopy reconstruct a super-resolved image from a temporal stack of frames by carefully designing suitable hand-crafted sparsity-promoting regularizers. Numerically, such approaches are solved by proximal gradient-based iterative schemes. Aiming at obtaining a reconstruction more adapted to sample geometries (e.g. thin filaments), we adopt a plug-and-play denoising approach with convergence guarantees and replace the proximity operator associated with the explicit image regularizer with an image denoiser (i.e. a pre-trained denoising network) which, upon appropriate training, mimics the action of an implicit prior. To account for the independence of the fluctuations between molecules, the model relies on second-order statistics. The denoiser is then trained on covariance images coming from data representing sequences of fluctuating fluorescent molecules with filament structure. The method is evaluated on both simulated and real fluorescence microscopy images, showing its ability to correctly reconstruct filament structures with high values of peak signal-to-noise ratio (see Figure 3).
The work is published in 23.
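The plug-and-play idea can be summarized by the schematic iteration below, where the proximal step of an explicit regularizer is replaced by a denoiser. In this hedged sketch the denoiser is a plain Gaussian smoother and the forward operator a Gaussian blur; in the actual work the denoiser is a network trained on covariance images of fluctuating filament data.

```python
# Schematic plug-and-play forward-backward iteration (simplified sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_forward_backward(y, A, At, denoise, tau=0.5, n_iter=100):
    """Data fidelity 0.5*||A x - y||^2, implicit prior enforced by the denoiser."""
    x = At(y)
    for _ in range(n_iter):
        grad = At(A(x) - y)           # gradient of the data-fidelity term
        x = denoise(x - tau * grad)   # denoiser plays the role of a proximal operator
    return x

# Toy usage: A = convolution with a Gaussian PSF, denoiser = mild Gaussian smoothing.
psf_blur = lambda x: gaussian_filter(x, 2.0)
x_true = np.zeros((64, 64)); x_true[20:44, 30:34] = 1.0   # a thin filament-like bar
y = psf_blur(x_true) + 0.01 * np.random.default_rng(2).normal(size=x_true.shape)
x_hat = pnp_forward_backward(y, psf_blur, psf_blur, lambda z: gaussian_filter(z, 0.5))
print(float(np.abs(x_hat - x_true).mean()))
```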
Recent years have seen the development of super-resolution variational optimization in spaces of measures. These so-called "gridless" approaches offer both theoretical guarantees (uniqueness, reconstruction guarantees) and very convincing numerical results in biomedical imaging. However, gridless variational optimization is formulated for the reconstruction of point sources, which is not always suitable for biomedical imaging applications: more realistic biological structures, such as curves representing blood vessels or filaments, also need to be reconstructed.
In this work, we developed a new approach for the gridless reconstruction of curves, understood as the reconstruction of a vector measure supported on curves.
A first numerical implementation is illustrated in Figure 4. A new data term for CROC has been proposed, linking the scalar observed data ...
This work is published in 20, 21.
Code is available (see 6.1.5).
Growing evidence indicates that only a sparse subset from a pool of sensory neurons is active for the encoding of visual stimuli at any instant in time. Traditionally, to replicate such biological sparsity, generative models have been using the ...
The results are published in 19.
The code is available at github.com/rentzi.
Plankton organisms are a key component of the Earth's biosphere. Phytoplankton captures the CO2 of the atmosphere, and zooplankton, as a phytoplankton predator, aggregates it and stores it on the ocean floor when it dies. This so-called carbon pump is studied by ecologists in order to predict its future efficiency in an era of climate change. A modern approach consists in studying the relation between the living environment of the organisms and their functional traits, among which size stands out: a high correlation has been observed between zooplankton size and carbon sequestration. Scientists have developed in situ imaging instruments and built large databases. Taxonomic identification from images is required to estimate the individual volumes of the organisms based on their morphology. The development of automated classification methods has been essential to aid operators in their identification task. The breakthrough of Artificial Neural Networks (ANNs) made it possible to come up with efficient classifiers; however, the decisions of such classifiers are hard to interpret or explain. We put forward the idea that following the transform-then-classify approach of ANNs with a simple, explicit transform can result in a classifier whose predictions are both interpretable (thus trustable) and accurate. The proposed transform is defined as a linear combination of per-class target vectors, and the classification is performed, as with ANNs, by a nearest-target decision (see Figure 6). Furthermore, as a main theoretical result, we establish that the proposed transform defines a kernel associated with the Weighted Nearest Neighbors (wNN) classifier. Incidentally, this allows the wNN classifier to be interpreted as a member of the transform-then-classify family of classifiers.
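A minimal sketch of the transform-then-classify idea is given below, under our own simplifying assumptions (a Gaussian kernel and one-hot class targets): each sample is mapped to a weighted combination of per-class target vectors and classified by the nearest target, which reduces to a weighted nearest-neighbor decision.

```python
# Illustrative transform-then-classify scheme with per-class target vectors.
import numpy as np

def fit(X_train, y_train, n_classes):
    targets = np.eye(n_classes)                     # per-class target vectors (one-hot)
    return X_train, y_train, targets

def transform(x, X_train, y_train, targets, bandwidth=1.0):
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))          # kernel weights (wNN-style)
    w = w / w.sum()
    return w @ targets[y_train]                     # linear combination of class targets

def predict(x, model, bandwidth=1.0):
    X_train, y_train, targets = model
    t = transform(x, X_train, y_train, targets, bandwidth)
    # nearest-target decision: the one-hot target closest to t
    return int(np.argmin(((targets - t) ** 2).sum(axis=1)))

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
model = fit(X, y, n_classes=2)
print(predict(np.array([3.5, 3.5]), model))         # expected: class 1
```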
The detection of cell division and cell death events in live-cell assays has the potential to produce robust metrics of drug pharmacodynamics and to return a more comprehensive understanding of tumor cell responses to cancer therapeutic combinations. As cancer drugs may have complex and mixed effects on the biology of the cell, knowing precisely when cellular events occur in a live-cell experiment makes it possible to study the relative contribution of different drug effects (cytotoxic/cell death or cytostatic/cell division) on a cell population. Yet, classical methods require dyes (fluorescent molecules) to measure cell viability as an end-point assay, where proliferation rates can only be estimated when both viable and dead cells are labeled simultaneously, and cell division events are often discarded due to analytical limitations. Live-cell imaging is a promising cell-based assay to determine drug efficacy; however, its main limitation remains the accuracy and depth of the analyses needed to obtain automatic measures of the cellular response phenotype, making the understanding of drug action on cell populations difficult. We propose a new algorithmic architecture integrating machine learning, image and signal processing methods to perform dynamic image analyses of single-cell events in time-lapse microscopy experiments for drug pharmacological profiling. Our event detection method is based on a pattern detection approach applied to the local entropy of polarized-light frames, making it free of any labeling step and exhibiting two distinct patterns for cell division and death events. Our analysis framework is an open-source and adaptable workflow that automatically predicts cellular events (and their times) from each single-cell trajectory, along with other classic cellular features of cell image analyses, making it a promising solution in pharmacodynamics (see Figure 7). This project is currently supported by two actions studying the possibility of launching a startup company (Young Entrepreneur Program, Labex SignaLife and Emergence et Accompagnement, Canceropôle PACA).
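The following sketch illustrates, on synthetic data, the kind of label-free signal used for event detection: a local-entropy map is computed on each frame, averaged inside a tracked cell's region to yield a 1D temporal signal, and candidate events are detected as peaks. Window sizes and thresholds are placeholders, not the values of the actual workflow.

```python
# Illustrative local-entropy event signal for one tracked cell (synthetic frames).
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte
from scipy.signal import find_peaks

def entropy_signal(frames, box):
    """frames: (T, H, W) floats in [0, 1]; box: (y0, y1, x0, x1) tracked cell ROI."""
    y0, y1, x0, x1 = box
    values = []
    for frame in frames:
        ent = entropy(img_as_ubyte(frame), disk(5))   # local entropy map
        values.append(ent[y0:y1, x0:x1].mean())       # mean entropy inside the ROI
    return np.array(values)

def detect_events(signal, prominence=0.3):
    peaks, _ = find_peaks(signal, prominence=prominence)
    return peaks                                      # frame indices of candidate events

rng = np.random.default_rng(4)
frames = rng.random((30, 64, 64)) * 0.1
frames[15] += 0.5 * rng.random((64, 64))              # a "busier" frame around t = 15
frames = np.clip(frames, 0, 1)
print(detect_events(entropy_signal(frames, (16, 48, 16, 48))))
```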
This work was carried out in collaboration with Fabienne de Graeve, iBV, Nice, France.
Recent technological advances in bioimaging have enabled biologists to investigate dynamic processes in living cells at high spatial and temporal resolutions. Quantitative analysis of these dynamic processes relies on the development of appropriate tracking methods associated with a spatio-temporal analysis. We developed particle tracking and event detection methods to analyze the movements and interactions of Imp RNP (ribonucleoprotein) granules in the cell bodies of mushroom body neurons in Drosophila brain explants. The individual granule trajectories are considered globally as a graph embedded in a space-time domain. Elementary granule fusion and splitting events can occur in specific sequences which build particular graph patterns. The proposed analysis consists in defining the relevant patterns and then searching for them in the actual graphs (see Figure 8).
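The graph viewpoint can be illustrated as follows on hypothetical trajectories: detections become space-time nodes, and fusion or splitting events appear as simple degree patterns that can be searched for directly.

```python
# Small sketch (hypothetical data): trajectories as a directed space-time graph; a fusion
# event is a node with two incoming edges, a splitting event a node with two outgoing edges.
import networkx as nx

G = nx.DiGraph()
# granules A and B merge into C at t=3, C later splits into D and E at t=6
edges = [(("A", 2), ("C", 3)), (("B", 2), ("C", 3)),
         (("C", 3), ("C", 4)), (("C", 4), ("C", 5)), (("C", 5), ("C", 6)),
         (("C", 6), ("D", 7)), (("C", 6), ("E", 7))]
G.add_edges_from(edges)

fusions = [n for n in G if G.in_degree(n) >= 2]     # pattern: >= 2 incoming edges
splits = [n for n in G if G.out_degree(n) >= 2]     # pattern: >= 2 outgoing edges
print("fusion nodes:", fusions)
print("splitting nodes:", splits)
```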
This study aimed to identify biomarkers of relapse following surgical intervention by analyzing 17 primary squamous cell carcinoma tumors (9 relapsing, 8 non-relapsing) using multiplex (11 markers) mass spectrometry imaging to investigate the interactions between different cell types and structures. We first detected cells in the different channels using local thresholding. A k-means clustering then exhibited different populations, in particular several macrophage subpopulations. An analysis of the cell distances to the tumor and of their density in specific structures (vessels, nerves) showed specific localizations of these populations (see Figure 9).
These findings have revealed insights into the complex links between immune cells, tumor structures and relapse. The identified B cell influx, as well as the differential behavior of macrophage subpopulations, may have implications for understanding tumor progression and designing targeted therapies to prevent relapse following surgery.
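The two first processing steps (per-channel local thresholding, then k-means on per-cell marker intensities) can be sketched as below; channel contents, block size and the number of clusters are illustrative placeholders.

```python
# Rough sketch: adaptive thresholding per channel, then k-means on per-cell intensities.
import numpy as np
from skimage.filters import threshold_local
from skimage.measure import label, regionprops
from sklearn.cluster import KMeans

def detect_cells(channel, block_size=35):
    mask = channel > threshold_local(channel, block_size)   # local (adaptive) threshold
    labels = label(mask)
    return [tuple(map(int, p.centroid)) for p in regionprops(labels)]

def cluster_cells(centroids, channels, k=6):
    # feature vector of a cell = marker intensities sampled at its centroid
    features = np.array([[ch[y, x] for ch in channels] for (y, x) in centroids])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

rng = np.random.default_rng(5)
channels = [rng.random((128, 128)) for _ in range(11)]       # 11 multiplexed markers
centroids = detect_cells(channels[0])
if centroids:
    print(cluster_cells(centroids, channels)[:10])
```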
We have developed a feature-based supervised algorithm for nuclei classification in histopathological images. The features are composed of color and shape properties of the detected nuclei, as well as the color inside a crown surrounding each nucleus. We have considered five classes of cells: healthy, cancerous, fibroblast, epithelial and lymphocyte. Using an SVM classifier, we obtained a weighted F1 score of 0.89. We then consider a labelled graph where the nodes are defined by the nuclei and the edges by adjacent cells in the associated Voronoi tessellation. A regularization step is applied using a local majority vote; however, lymphocyte and epithelial cells are excluded from this regularization step due to their low prevalence. After regularization, the Voronoi tessellation provides a segmentation of the tissue into several regions composed of tumour, healthy areas and fibrous tissue (see Figure 10). These first results will be improved by considering a regularization at a coarser scale, that is, by working on the obtained regions.
This work was carried out in collaboration with Francesco Ponzio from Politechnico di Torino (Italy), Damien Ambrosetti and Giorgio Toni from Nice CHU.
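The classification and regularization steps described above can be condensed into the following sketch on synthetic data: an SVM on per-nucleus features, followed by a majority vote over the Voronoi adjacency (computed here through the dual Delaunay triangulation). Features and labels are random placeholders.

```python
# Condensed sketch: SVM on per-nucleus features + majority-vote regularization over the
# Voronoi/Delaunay adjacency of the nuclei centroids (synthetic data).
import numpy as np
from scipy.spatial import Delaunay
from sklearn.svm import SVC
from collections import Counter

rng = np.random.default_rng(6)
n = 300
points = rng.uniform(0, 100, (n, 2))                  # nuclei centroids
features = rng.normal(size=(n, 8))                    # color/shape features (placeholder)
labels = rng.integers(0, 3, n)                        # 0: healthy, 1: cancerous, 2: fibroblast

clf = SVC(kernel="rbf").fit(features, labels)
pred = clf.predict(features)

# adjacency from the Delaunay triangulation (dual of the Voronoi tessellation)
tri = Delaunay(points)
neighbors = [set() for _ in range(n)]
for simplex in tri.simplices:
    for i in simplex:
        neighbors[i].update(j for j in simplex if j != i)

# local majority vote among each nucleus and its Voronoi neighbors
regularized = np.array([
    Counter(pred[list(nb) + [i]]).most_common(1)[0][0]
    for i, nb in enumerate(neighbors)
])
print("labels changed by regularization:", int((regularized != pred).sum()))
```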
The Banff classification is used by pathologists for the evaluation and monitoring of kidney transplants. This classification, in association with other biological and clinical elements, makes it possible to diagnose rejection and estimate the prognosis, through the analysis of different lesions in the four compartments (glomeruli, tubules, interstitium and vessels). However, the numerous classification criteria and their quantification, in addition to the very time-consuming nature of their evaluation, can lead to problems of inter- and intra-observer reproducibility, hence the relevance of automating this evaluation. We propose an analysis methodology for this automatic evaluation. The first step is to identify and separate the different compartments; each compartment is then analyzed individually. In this first work, the lumen areas are identified by thresholding followed by mathematical morphology operators. These areas correspond either to tubules or to vessels. To classify them, we use nuclear characteristics in the neighborhood of each lumen: we detect the nuclei using a shape dictionary and associate each nucleus with the closest lumen. The lumens are then classified as tubule or vessel by an SVM algorithm based on geometric parameters (area, number and shape of the nuclei) (see Figure 11).
This work was carried out in collaboration with Francesco Ponzio from Politechnico di Torino (Italy), Damien Ambrosetti and Giorgio Toni from Nice CHU.
Type I diabetes may lead to a kidney pathology consisting of a degradation of the glomeruli. In this project we applied two CNN models to detect glomeruli in histopathological images (Faster R-CNN) and to grade them into healthy, mild, severe or necrosis stages (see Figure 12). We tested the model on two separate datasets: one from Paris and another assembled using data from Nice. The model performed well on both, with most errors confined to adjacent classes, typically associated with ambiguous samples; although it exhibited comparatively less proficiency on the samples from Paris (as it had not encountered any data from this center during training), it still demonstrated reasonable performance. We also performed an extensive hyperparameter optimization, experimenting with various architectures, adjusting parameters, and implementing advanced augmentations inspired by the circular shape of glomeruli, in an effort to balance our training dataset and mitigate overfitting.
This work was carried out in collaboration with Francesco Ponzio from Politechnico di Torino (Italy), Damien Ambrosetti from Nice CHU and Nicolas Pote from Bichat hospital.
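The shape-aware augmentation mentioned above can be sketched as follows: since glomeruli are roughly circular, arbitrary in-plane rotations and flips are label-preserving and can be used to rebalance the training set. The specific transforms and parameters below are illustrative.

```python
# Illustrative rotation-based augmentation for roughly circular glomerulus crops.
import torchvision.transforms as T

rotation_augment = T.Compose([
    T.RandomRotation(degrees=180, expand=False),   # any in-plane rotation is label-preserving
    T.RandomHorizontalFlip(p=0.5),
    T.RandomVerticalFlip(p=0.5),
    T.ColorJitter(brightness=0.1, contrast=0.1),   # mild stain/intensity variability
])

# usage on a PIL crop of a glomerulus (hypothetical variable `crop`):
# augmented = rotation_augment(crop)
```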
Reinforcement learning (RL) is one of the machine learning paradigms, alongside supervised and unsupervised learning, in which an agent learns from direct interaction with, and manipulation of, its environment through a set of actions. The agent acts based on observations known as states, and it learns using a reward signal received from the environment. We aim to derive an RL framework to address histopathological image analysis tasks, exemplified on kidney and lung cancer datasets.
The main challenges in RL, unlike other paradigms, include the concept of delayed rewards and the necessity for exploration (where the observed data depend on the agent's trajectory). Our objective is to train an agent to perform diagnosis on whole slide images, similarly to how pathologists do it: by zooming in and out, moving around and selecting patches/patterns that characterize a specific tumor. Throughout the first three months of this project, we conducted an in-depth literature review in the following categories:
The study of Reflectance Confocal Microscopy (RCM) images provides information on the epidermal structure, a key to skin health and barrier function, which changes in each epidermal layer, with age and with certain skin conditions. RCM can also reveal dynamic changes of the epidermis in response to stimuli, as it enables repetitive sampling of the skin without damaging the tissue. Studying RCM images requires manual identification of each cell to derive its geometrical and topological characteristics, which is time-consuming and subject to expert interpretation. More insights could be derived from these data if not for the tediousness of their manual segmentation, highlighting the need for an automated cell identification method. We previously developed a new unsupervised approach based on multi-task learning to identify cells in RCM images. This approach consists of a dual-task Cycle Generative Adversarial Network model, where the first task learns the RCM image noise and/or texture model from the initial image, while the second task learns the epidermis structure from a Gabor filtering, thus allowing us to denoise RCM images while keeping the position and integrity of the membranes. To our knowledge, this is the first time a Cycle GAN has been embedded in a multi-task learning model. We have started exploring the use of this unsupervised method, without retraining, on images acquired with different imaging modalities and representing different tissues (histology images, cell culture images, mass spectrometry images, and fluorescence microscopy images). Indeed, multi-task approaches tend to be less data-dependent, and therefore more easily transposable to other types of data not included in the training. We developed three iterations of DermoGAN depending on the type of cell organization (confluent cells, non-confluent cells, and histology images), considering alternatives to the Gabor filter and to the simulated expected results, for example using a Canny filter and a marked point process (see Figure 14). We theorize that the performance of DermoGAN depends on the cell organization visible in the training images, not on the imaging modality or tissue.
An international patent has been submitted for this work.
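The Gabor-filtering target used by the second task can be sketched as a bank of oriented Gabor filters whose maximum response highlights membrane structure; the frequency and orientation sampling below are illustrative choices, not those of the trained model.

```python
# Illustrative Gabor filter bank emphasizing oriented membrane-like structure.
import numpy as np
from skimage.filters import gabor

def gabor_structure_map(image, frequency=0.15, n_orientations=8):
    responses = []
    for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
        real, _ = gabor(image, frequency=frequency, theta=theta)
        responses.append(real)
    return np.max(responses, axis=0)     # strongest oriented response per pixel

rng = np.random.default_rng(7)
rcm_like = rng.random((128, 128))        # placeholder for an RCM image
structure = gabor_structure_map(rcm_like)
print(structure.shape)
```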
The aim of this work is to build a phenotypic map of organoids. We model an organoid as a 3D shape containing several layers of cells, each composed of a 2D point pattern on the sphere. Our goal is to classify organoids with respect to these three components.
We propose an architecture for classifying organoids (clusters of stem cells, cf. Figure 15) developed in the presence of endocrine disruptors. First, we proposed a simplified organoid model, from which we generated synthetic datasets composed of point patterns on the sphere, defining a graph using the Voronoi representation. We carried out a comparative study between a classification method based on deep learning for graphs and a method based on spatial statistics, and showed that the deep learning approach was at least as effective and more robust to noise than the other. We then developed an algorithm for detecting the nuclei of organoids cultured in the laboratory. In order to enable the use of deep learning architectures from a limited number of organoids, we used data augmentation techniques. We are currently developing two types of architecture (one based on convolutional networks, the other on layers dedicated to graphs) to obtain a baseline. With the aim of ultimately proposing a robust and interpretable model, we seek to enrich it with a component based on the 3D analysis of the organoid shape, in particular via the decomposition of these shapes into spherical harmonics.
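The synthetic setup (a point pattern on the sphere with a Voronoi-based graph) can be sketched as follows; the number of cells and the adjacency construction via the convex hull (dual of the spherical Voronoi tessellation) are our illustrative choices.

```python
# Toy sketch: random point pattern on the unit sphere and its Voronoi-adjacency graph.
import numpy as np
from scipy.spatial import SphericalVoronoi, ConvexHull

rng = np.random.default_rng(8)
n_cells = 80
pts = rng.normal(size=(n_cells, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # project onto the unit sphere

sv = SphericalVoronoi(pts, radius=1.0)
sv.sort_vertices_of_regions()                        # spherical Voronoi regions (one per cell)

# adjacency: on the sphere, Voronoi adjacency is given by the convex hull (Delaunay) edges
hull = ConvexHull(pts)
edges = set()
for simplex in hull.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

print(f"{n_cells} nuclei, {len(edges)} Voronoi-adjacency edges")
```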
This work is on the detection and characterization of Fibronectin (FN) structures found in the tumour extracellular matrix (ECM). Three structures were identified in our multiplex fluorescent images: aggregates, representing accumulations of insoluble, non-fibrillar FN; FN fibres; and tumour cells expressing cytokeratin (Figure 16).
A greedy algorithm leveraging our geometric assumptions about the aggregates (namely, that they resemble ellipses) has been developed and tested. This algorithm is grounded in the marked point process model proposed by X. Descombes 4. Additionally, a modified Gabor kernel 31 has been introduced to enhance fibre detection while minimizing the blurring effect. Classical image processing techniques were applied to achieve an accurate segmentation of tumour cells (Figure 17). Ongoing efforts are directed towards characterizing the diverse components of the ECM using this segmentation.
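A simplified, illustrative version of such a greedy ellipse detection is sketched below: random elliptical candidates are scored by the contrast between their interior and a surrounding ring, then kept greedily under a non-overlap constraint. This is not the exact marked point process model of the paper; scores, candidate counts and sizes are placeholders.

```python
# Simplified greedy, marked-point-process-style detection of elliptical aggregates.
import numpy as np

rng = np.random.default_rng(9)

def ellipse_mask(shape, cy, cx, a, b, theta):
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    y, x = yy - cy, xx - cx
    c, s = np.cos(theta), np.sin(theta)
    u, v = c * x + s * y, -s * x + c * y
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def greedy_detect(image, n_candidates=500, n_keep=10):
    h, w = image.shape
    candidates = []
    for _ in range(n_candidates):
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        a, b = rng.uniform(4, 12), rng.uniform(4, 12)
        theta = rng.uniform(0, np.pi)
        inside = ellipse_mask(image.shape, cy, cx, a, b, theta)
        ring = ellipse_mask(image.shape, cy, cx, a + 3, b + 3, theta) & ~inside
        if inside.sum() == 0 or ring.sum() == 0:
            continue
        score = image[inside].mean() - image[ring].mean()   # interior/exterior contrast
        candidates.append((score, inside))
    candidates.sort(key=lambda c: c[0], reverse=True)
    kept, occupied = [], np.zeros(image.shape, bool)
    for score, mask in candidates:                          # greedy, non-overlap constraint
        if not (mask & occupied).any():
            kept.append((score, mask))
            occupied |= mask
        if len(kept) == n_keep:
            break
    return kept

image = rng.random((128, 128)) * 0.2
image[ellipse_mask(image.shape, 40, 60, 10, 6, 0.5)] += 0.8   # one synthetic aggregate
print(len(greedy_detect(image)), "ellipses kept")
```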
The research activities of this project focus on the analysis of microscopy images of the extracellular matrix (ECM) in the microenvironment of head and neck cancer tumors. This project builds on previous work carried out within the team, through collaborations with clinical partners (CAL, Unicancer, Head and Neck Group) and with Fabienne Anjuère, immunologist at IPMC. This research, carried out as part of the TOPNIVO and CHECK-UP clinical trials, has led to the development of a pipeline for acquiring multiplexed immunofluorescence and HES-stained histological images. In this context, some 6,000 multispectral images and 350 HES whole slide images have been acquired.
The current work focuses on analyzing the topological and geometrical changes in the fiber networks of the tumor ECM using the developed image analysis pipelines and algorithms. We are also working on characterizing the organization of ECM proteins and finding structural biomarkers to develop predictive models of patient response to immunotherapy. This can be summarized in two main parts (see Figure 18):
1 - The development of deep learning algorithms for image processing (signal extraction, segmentation, and classification of biological structures of interest), which requires both expert annotation and software development so that these images, which can be several gigabytes in size, can be processed by deep learning.
2 - The implementation of quantitative statistics to search for spatial signatures or biomarkers in the images, and more specifically in the extracellular matrix, in order to predict patient response to immunotherapy, as well as to provide insights into the underlying immunosuppressive mechanisms.
It is well established that ascidians exhibit a stereotyped development in the first stages of their embryogenesis 30. Moreover, we demonstrated that the developmental speed (characterized as the derivative of the cell count) is identical from one embryo to the other (up to a scaling factor, the normal developmental speed being linearly correlated with the medium temperature), suggesting that the cell division rate is identical in a population of normally developing embryos (see Figure 19, left and middle).
Thanks to this consistent cell division rate amongst embryos, we can study whether the order of cell divisions is identical across embryos. For this purpose, we ordered the cells with respect to their average division time and estimated the probability that a cell with a late average division time divides before a cell with an earlier one (Figure 19, right). This emphasizes the successive rounds of divisions during embryo development, with still phases at the 64-, 76-, 112-, 184- and 218-cell stages. It also reveals that the closer the average division times are, the higher the probability that the late one occurs before the early one.
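The ordering analysis can be sketched on synthetic data as follows: cells are sorted by their average division time, and for each ordered pair we estimate the fraction of embryos in which the later cell (on average) actually divides first.

```python
# Hedged sketch (synthetic data) of the division-order inversion probability.
import numpy as np

rng = np.random.default_rng(10)
n_cells, n_embryos = 20, 8
# division_times[c, e] = division time of cell c in embryo e (placeholder values)
division_times = (np.sort(rng.normal(0, 1, n_cells))[:, None]
                  + rng.normal(0, 0.3, (n_cells, n_embryos)))

avg = division_times.mean(axis=1)
order = np.argsort(avg)                      # cells sorted by average division time

inversion_prob = np.full((n_cells, n_cells), np.nan)
for i_early in range(n_cells):
    for i_late in range(i_early + 1, n_cells):
        early, late = order[i_early], order[i_late]
        # fraction of embryos in which the late-on-average cell divides first
        inversion_prob[i_early, i_late] = np.mean(
            division_times[late] < division_times[early]
        )

print("mean inversion probability:", np.nanmean(inversion_prob))
```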
Ascidians exhibit a stereotyped development in the first stages of their embryogenesis 30. Unpublished results demonstrate a strong reproducibility of cell naming with respect to geometrical (cell position) and topological (cell neighborhood) considerations, but have also revealed that some cells exhibit different division orientation patterns.
In this work, we aim to study the variability of cell division, namely whether a cell division exhibits different orientation patterns across embryos.
We study each cell division individually. First, all embryos are registered against a common reference (one of the embryos at hand), so that they share a common geometric frame. Having defined the division orientation as the direction between the centers of mass of the two daughter cells, we collect these directions and compute a kernel-based direction density function (on the 3D sphere), see Figure 20. Extracting the number of modes of this density function, together with the number of samples contributing to each mode, is a powerful means to point out cells whose division is likely to exhibit different orientation patterns, see Figure 21.
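As an illustration, the sketch below estimates a direction density on the sphere with a von Mises-Fisher-type kernel (one standard choice; the kernel actually used may differ) and counts its modes on a discretized sphere.

```python
# Illustrative kernel density of division directions on the sphere and crude mode count.
import numpy as np

rng = np.random.default_rng(11)

def direction_density(directions, grid, kappa=20.0):
    """directions, grid: arrays of unit vectors; returns an unnormalized kernel density."""
    return np.exp(kappa * (grid @ directions.T)).sum(axis=1)

# synthetic division directions drawn around two orientation patterns
centers = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
samples = np.vstack([c + 0.2 * rng.normal(size=(15, 3)) for c in centers])
samples /= np.linalg.norm(samples, axis=1, keepdims=True)

# evaluation grid on the sphere and a crude mode count (local maxima over 30-degree caps)
grid = rng.normal(size=(2000, 3))
grid /= np.linalg.norm(grid, axis=1, keepdims=True)
density = direction_density(samples, grid)
neighbors = grid @ grid.T > np.cos(np.deg2rad(30))
modes = [i for i in range(len(grid)) if density[i] >= density[neighbors[i]].max()]
print("approximate number of modes:", len(modes))   # expected: 2
```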
This project is carried out in collaboration with the University of Campinas, Brazil (Joao Romano and Adilson Chinatto) and with the SATIE laboratory of University Paris-Saclay (Pascal Larzabal).
The aim of this project is the application of sparse methods to antenna array processing, which has proven to be a promising alternative to standard methods but needs to be studied in greater depth to improve its implementation and quantify its benefits. A difficult step in this approach is the tuning of the regularization parameter, which weights the data term against the sparsity-promoting regularization term. We have worked on a procedure, adapted to the direction-of-arrival (DOA) estimation problem in antenna communications, which gives an estimate of the regularization parameter suitable for the DOA problem.
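A minimal on-grid sketch of sparse DOA estimation for a uniform linear array is given below, solved with ISTA; the regularization parameter lam is precisely the quantity whose tuning is discussed above, and the value used here is arbitrary.

```python
# Minimal sparse (on-grid) DOA sketch for a half-wavelength uniform linear array, via ISTA.
import numpy as np

rng = np.random.default_rng(12)
n_sensors, grid_deg = 16, np.arange(-90, 90, 1.0)
true_doas = np.array([-20.0, 35.0])

def steering(angles_deg):
    m = np.arange(n_sensors)[:, None]
    return np.exp(-1j * np.pi * m * np.sin(np.deg2rad(angles_deg))[None, :])

A = steering(grid_deg)                               # dictionary over the angular grid
y = steering(true_doas) @ np.array([1.0, 0.7]) + 0.05 * (
    rng.normal(size=n_sensors) + 1j * rng.normal(size=n_sensors))

def ista(A, y, lam, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], complex)
    for _ in range(n_iter):
        z = x - (A.conj().T @ (A @ x - y)) / L
        mag = np.abs(z)
        x = z / np.maximum(mag, 1e-12) * np.maximum(mag - lam / L, 0)  # complex soft-threshold
    return x

x_hat = ista(A, y, lam=1.0)                          # lam: the parameter to be tuned
print("estimated DOAs (deg):", grid_deg[np.abs(x_hat) > 0.1 * np.abs(x_hat).max()])
```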
The project NoMADS focuses on data processing and analysis techniques which can feature potentially very complex, nonlocal, relationships within the data. In this context, methodologies such as spectral clustering, graph partitioning, and convolutional neural networks have gained increasing attention in computer science and engineering within the last years, mainly from a combinatorial point of view. However, the use of nonlocal methods is often still restricted to academic pet projects. There is a large gap between the academic theories for nonlocal methods and their practical application to real-world problems. The reason these methods work so well in practice is far from fully understood. Our aim is to bring together a strong international group of researchers from mathematics (applied and computational analysis, statistics, and optimization), computer vision, biomedical imaging, and remote sensing, to fill the current gaps between theory and applications of nonlocal methods. We will study discrete and continuous limits of nonlocal models by means of mathematical analysis and optimization techniques, resulting in investigations on scale-independent properties of such methods, such as imposed smoothness of these models and their stability to noisy input data, as well as the development of resolution-independent, efficient and reliable computational techniques which scale well with the size of the input data. As an overarching applied theme we focus in particular on image data arising in biology and medicine, which offers a rich playground for structured data processing and has direct impact on society, as well as discrete point clouds, which represent an ambitious target for unstructured data processing. Our long-term vision is to discover fundamental mathematical principles for the characterization of nonlocal operators, the development of new robust and efficient algorithms, and the implementation of those in high quality software products for real-world application.
The Morpheme team belongs to the Labex (Laboratory of Excellence) Signalife. This Labex gathers four biology institutes in Nice and Sophia Antipolis and two Inria teams.
This project is in collaboration with Alessandro Lanza (University of Bologna, IT) and Thomas Moreau (Inria Saclay, FR).
In this project, we propose to improve upon the intrinsic specificity and rigidity of standard bilevel learning approaches by means of a novel, task-adapted and highly flexible modeling. To mitigate the computational effort required to solve bilevel problems numerically, tailored inexact and stochastic optimization schemes will be studied. The following questions will be addressed:
The project features a consolidated international team of researchers with strong expertise in applied imaging inverse problems, non-smooth and non-convex optimization, and statistical image processing.
This collaborative project is led by Pierre Weiss (IMT, Toulouse) [PI].
Several recent revolutions in imaging rely on numerical computations. One can think of single molecule localization microscopy (Nobel Prize 2014) or cryo-electron microscopy (Nobel Prize 2017). What they have in common is the need to perform prior mathematical modeling and calibration of the system. Although they have made it possible to observe phenomena that were previously out of reach, their expansion is currently limited by an important problem: it is difficult to precisely control the imaging conditions (e.g. temperatures, wavelengths, refractive indices). This results in modeling errors that can have disastrous repercussions on the quality of the images produced. Thus, these technologies are currently reserved for a handful of research centers possessing state-of-the-art equipment and considerable interdisciplinary experience. The objective of this project is to bring new theoretical and numerical solutions to overcome these difficulties, and then to apply them to different optical microscopes. This should allow to democratize their use, to reduce their cost and the preparation time of the experiments.
The central idea is to characterize a measurement device not by a single operator (e.g. a convolution), but by a low-dimensional family of operators modeling all possible states of the system. To our knowledge, this idea has been little explored so far and opens many difficult questions: how to evaluate this family experimentally and numerically? How to identify the state of the system from indirect, noisy observations? How to exploit this information to reconstruct images in short computing times? We have begun to explore these questions in recent works and wish to continue this effort using tools from optimization, harmonic analysis, probability and statistics, algebraic geometry, machine learning and massively parallel computing. We hope to make significant advances in the field of blind inverse problems and will validate them on photonic microscopy problems in collaboration with opticians responsible for two microscopy platforms in Nice and Toulouse. This will allow us to obtain direct feedback on real problems in biology. We will particularly study the problems of super-resolution by single-molecule localization, multi-focal localization and blind structured illumination. Moreover, several companies in the Toulouse area (INNOPSYS, IMACTIV-3D, AGENIUM) will provide us with data from their microscopes (line scanning microscope, light sheet fluorescence microscope), which will ensure direct transfers to industry.
In this project we propose to use cutting-edge organoid technology to test the toxicity of endocrine disruptors (EDCs) on human organs. The aim is to develop computational tools and models that allow the use of organoid technology for EDC toxicity testing. The project is thus divided into two main objectives: to build up and analyze a phenotypic landscape of EDC effects on organoids, and to develop explicative or predictive models for their growth. The first goal is to define and construct a phenotypic map of organoids, modeled as graphs (the nodes representing the cells and the edges the adjacency between them), for classifying EDC families. The second is to classify organoid growth trajectories on this map. We will consider two organoid models, gastruloids and prostate organoids. To derive the phenotypic map we will combine a graph representation and a deep learning approach. The deep learning approach will be considered for its discriminating properties, whereas a correspondence between the bottleneck layer of the chosen neural network and the stratified graph space will bring some explainability to the derived classification.
This 4-year project started in November 2021 and is led by X. Descombes. It involves three groups: C3M (S. Clavel, Nice), Metatox, Inserm (X. Coumoul, Paris) and Morpheme.
Successful embryogenesis requires the differentiation of the correct cell types, in defined numbers and in appropriate positions. In most cases, decisions taken by individual cells are instructed by signals emitted by their neighbors. A surprisingly small set of signalling pathways is used for this purpose. The FGF/Ras/ERK pathway is one of these, and mutations in some of its individual components cause a class of human developmental syndromes, the RASopathies. Our current knowledge of this pathway is, however, mostly static. We lack an integrated understanding of its spatio-temporal dynamics, and we can only imperfectly explain its highly non-linear response to a graded increase in input stimulus.
This systems biology project combines advanced quantitative live imaging, pharmacological/optogenetic perturbations and computational modeling to address three major unanswered questions, each corresponding to a specific aim:
Through this approach, in a simplified model system, we hope to gain an integrated view of the spatio-temporal dynamics of this pathway and of its robustness to parameter variations. Participants are CRBM (Montpellier), LIRMM (Montpellier), MOSAIC (INRIA Lyon) and Morpheme.
This targeted project, "Filling the gaps between scales to understand biomass properties", stems from the PEPR B-Best.
The architecture of biomass is highly complex and can be defined as a continuum of length-scales from molecules to particles, including polymers, nano-structures, assemblies, cells, and/or tissues. These scales are strongly interconnected and reflect not only chemical and structural properties of biomass but most importantly their reactivity to transformation processes such as chemical, physical, mechanical or biological reactions.
The goal of this project is to identify and quantify markers at different scales in order to propose a generic model (at least for each biomass type considered) that describes and predicts their properties and possibly their reactivity (at the chemical, biological and physical levels), with a focus on lignocellulosic and algal biomass. The Morpheme team will address the image analysis issues.
This action gathers the expertise of seven Inria research teams (Aviz, Beagle, Hybrid, Morpheme, Parietal, Serpico and Mosaic) and other groups (MaIAGE, INRAE, Jouy-en-Josas and UMR 144, Institut Curie Paris) and aims at developing original and cutting-edge visualization and navigation methods to assist scientists, enabling semi-automatic analysis, manipulation, and investigation of temporal series of multi-valued volumetric images, with a strong focus on live cell imaging and microscopy application domains. More precisely, the three following challenges will be addressed:
Recent advances in microscope technology provide outstanding images that allow biologists to address fundamental questions. This project aims at developing new AI methods and algorithms for (i) novel acquisition setups for super resolution imaging, and (ii) extraction of valuable quantitative information from these large heterogeneous datasets. More precisely we search for biomarkers in multispectral fluorescence images of tumor tissues to predict the response of immunotherapy in head and neck cancers.
This project is a collaboration with F. Anjuère from IPMC.
In the context of this proposal, our aim is to quantitatively characterize the neurons and macrophage subsets infiltrating CSCC (Cutaneous Squamous Cell Carcinoma) and their reciprocal interactions. To that end, we will combine an innovative high-dimensional imaging approach, the Imaging Mass Cytometry (IMC) technology enabling the simultaneous detection of 39 protein targets on one tissue section, with the development of a computational pipeline for the quantitative analysis of macrophage, neuron and tumor cell subsets and their reciprocal interactions. This pipeline is a key step for the automatic cell and structure segmentation of all IMC images and for the subsequent quantification of cell-cell and cell-structure interactions. By comparing quantitatively and statistically the features of relapsing and non-relapsing CSCC tumors, using a high-dimensional imaging technology combined with an optimized computational pipeline of analysis, we will be able to identify previously unappreciated spatial features associated with tumor relapse on a single tumor tissue section.
Article Nice-Matin, December 6th, 2023, Laure Blanc-Féraud.
Interview BFM Côte d'Azur, October 25th, 2023, Laure Blanc-Féraud.