At the frontier between integrative and computational neuroscience, we
propose to model the brain as a system of active memories in synergy and in
interaction with the internal and external world, and to simulate it as
a whole, in realistic situations.
In integrative and cognitive neuroscience (cf. § 3.1), on the
basis of current knowledge and experimental data, we develop models of
the main cerebral structures, paying specific attention to the kind of
mnemonic function they implement and to their interfaces with other
cerebral and external structures. Then, in a systemic approach, we
build the main behavioral loops involving these cerebral structures,
connecting a wide spectrum of actions to various kinds of
sensations. We observe, at the behavioral level, the properties emerging
from the interactions between these loops.
We claim that this approach is particularly fruitful for investigating
cerebral structures like the basal ganglia and the prefrontal cortex,
which remain difficult to comprehend today because of the rich and multimodal
information flows they integrate. We expect to cope with the high
complexity of such systems by drawing inspiration from the behavioral and
developmental sciences, explaining how behavioral loops gradually
incorporate various kinds of information and the associated mnemonic
representations into the system. As a consequence, the
underlying cognitive architecture, emerging from the interplay between
these sensation-action loops, results from a mnemonic synergy.
In computational neuroscience (cf. § 3.2), we concentrate
on the efficiency of local mechanisms and on the effectiveness of
distributed computations at the system level. We also analyze
their dynamic properties at different time scales. These fundamental
properties are essential to allow the deployment of very large systems
and their simulation in a high performance computing framework.
Running
simulations at a large scale is particularly interesting to evaluate,
over a long period, a consistent and relatively complete network of cerebral
structures in realistic interaction with the external and internal
world. We address this problem in the domain of autonomous robotics
(cf. § 3.4) and ensure real autonomy by designing an
artificial physiology and convenient learning protocols.
We are convinced that this original approach also makes it possible to revisit
and enrich algorithms and methodologies in machine learning (cf. § 3.3) and in autonomous
robotics
(cf. § 3.4), in addition to elaborating
hypotheses to be tested in neuroscience and medicine, while offering
these latter domains a new ground of experimentation similar to their
daily experimental studies.
The human brain is often considered the most complex system dedicated to information processing. This multi-scale complexity, described from the metabolic to the network level, is particularly studied in integrative neuroscience, the goal of which is to explain how cognitive functions (ranging from sensorimotor coordination to executive functions) emerge from distributed and adaptive computations of processing units, arranged along neural structures and information flows. Indeed, beyond the astounding complexity reported in physiological studies, integrative neuroscience aims at extracting, in simplifying models, regularities at various levels of description. From a mesoscopic point of view, most neuronal structures (particularly some of primary importance like the cortex, cerebellum, striatum and hippocampus) can be described through a regular organization of information flows and homogeneous learning rules, whatever the nature of the processed information. From a macroscopic point of view, the arrangement in space of neuronal structures within the cerebral architecture also obeys a functional logic, the sketch of which is captured in models describing the main information flows in the brain, the corresponding loops built in interaction with the external and internal (bodily and hormonal) world, and the developmental steps leading from the acquisition of elementary sensorimotor skills up to the most complex executive functions.
In summary, integrative neuroscience builds, on an overwhelming quantity of data, a simplifying and interpretative grid suggesting homogeneous local computations and a structured, logical plan for the development of cognitive functions. These arise from interactions and information exchange between neuronal structures and the external and internal world, and also within the network of structures itself.
This domain is today very active and stimulating because it proposes,
of course at the price of simplifications, global views of cerebral
functioning and more local hypotheses on the role of subsets of
neuronal structures in cognition. In the global approaches, the
integration of data from experimental psychology and clinical studies
leads to an overview of the brain as a set of interacting memories,
each devoted to a specific kind of information processing
56. It has also resulted in longstanding and very ambitious
studies for the design of cognitive architectures aiming at embracing
the whole of cognition. With the notable exception of works initiated by
47, most of these frameworks (e.g. Soar, ACT-R),
though sometimes justified on biological grounds, do not go as far as a
connectionist neuronal implementation. Furthermore, because of
the complexity of the resulting frameworks, they are restricted to
simple symbolic interfaces with the internal and external world and to
(relatively) small-sized internal structures. Our main research
objective is undoubtedly to build such a general-purpose cognitive
architecture (to model the brain as a whole in a systemic way),
using a connectionist implementation and able to cope with a realistic
environment.
From a general point of view, computational neuroscience can be defined as the development of methods from computer science and applied mathematics to explore, more technically and theoretically, the relations between structures and functions in the brain 59, 44. In recent years this domain has gained increasing interest in neuroscience and has become an essential tool for scientific developments in most fields of neuroscience, from the molecule to the system. In this view, all the objectives of our team can be described as possible progress in computational neuroscience. Accordingly, it can be underlined that the systemic view we promote can offer original contributions: whereas most classical models in computational neuroscience focus on a better understanding of the structure/function relationship for isolated specific structures, we aim at exploring synergies between structures. Consequently, we target interfaces and interplay between heterogeneous modes of computing, which is rarely addressed in classical computational neuroscience.
We also insist on another aspect of computational neuroscience which is, in our opinion, at the core of the involvement of computer scientists and mathematicians in the domain, and to which we think we can particularly contribute. Indeed, our primary abilities in the numerical sciences imply that our developments are characterized above all by the effectiveness of the corresponding computations: we provide biologically inspired architectures with effective computational properties, such as robustness to noise, self-organization and on-line learning. More generally, we underline the requirement that our models must also mimic biology through its most general laws of homeostasis and self-adaptability in an unknown and changing environment. This means that we propose to experiment numerically with such models and thus provide effective methods to falsify them.
Here, computational neuroscience means mimicking the original computations made by the neuronal substratum and mastering their corresponding properties: computations are distributed and adaptive; they are performed without a homunculus or any central clock. Numerical schemes developed for distributed dynamical systems and algorithms elaborated for distributed computations are of central interest here 41, 46 and were the basis for several contributions in our group 57, 55, 60. Ensuring such rigor in the computations associated with our systemic and large-scale approach is of central importance.
Equally important is the choice of the formalism of computation, extensively discussed in the connectionist domain. Spiking neurons are today widely recognized as being of central interest to study synchronization mechanisms and neuronal coupling at the microscopic level 42; the associated formalism 45 may be considered for local studies or for relating our results to this important domain of connectionism. Nevertheless, we remain mainly at the mesoscopic level of modeling, the level of the neuronal population, and are consequently interested in the formalism developed for dynamic neural fields 40, which has demonstrated a richness of behavior 43 adapted to the kind of phenomena we wish to manipulate at this level of description. Our group has a long experience in the study and adaptation of the properties of neural fields 55, 54 and their use for observing the emergence of typical cortical properties 52. In the envisioned development of more complex architectures and interplay between structures, the exploration of mathematical properties such as stability and boundedness and the observation of emerging phenomena are important objectives. This objective is also associated with that of capitalizing on our experience and promoting good practices in our software production.
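As an illustration of this mesoscopic formalism, the following minimal sketch integrates a one-dimensional Amari-type dynamic neural field, tau * du/dt = -u + (w * f(u)) + I + h, with a difference-of-Gaussians lateral kernel; all parameter values are arbitrary illustrative choices, not those of our models.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari-type), illustrative sketch only:
# local excitation, broader inhibition, sigmoid firing rate, Euler integration.
n, dx, dt, tau, h = 256, 0.05, 0.01, 0.1, -0.2
x = (np.arange(n) - n // 2) * dx                      # spatial grid

def gaussian(sigma):
    return np.exp(-x**2 / (2 * sigma**2))

w = 1.5 * gaussian(0.3) - 0.75 * gaussian(1.0)        # lateral interaction kernel
f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * u))         # firing-rate nonlinearity

u = np.full(n, h)                                     # membrane potential field
I = gaussian(0.5)                                     # localized input bump at x = 0

for _ in range(1000):                                 # explicit Euler integration
    lateral = np.convolve(f(u), w, mode="same") * dx
    u += dt / tau * (-u + lateral + I + h)

print("peak of activity at x =", x[np.argmax(u)])     # activity focuses on the bump
```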
In summary, we think that this systemic approach also brings to computational neuroscience new case studies where heterogeneous and adaptive models, with various time scales and parameters, have to be considered jointly to obtain a mastered substratum of computation. This is particularly critical for large-scale deployments.
The adaptive properties of the nervous system are certainly among its most fascinating characteristics, with a high impact on our cognitive functions. Accordingly, machine learning is a domain 53 that aims at giving such characteristics to artificial systems, using a mathematical framework (probabilities, statistics, data analysis, etc.). Some of its most famous algorithms are directly inspired by neuroscience, at different levels. Connectionist learning algorithms implement, in various neuronal architectures, weight update rules, generally derived from the Hebbian rule, performing unsupervised (e.g. Kohonen self-organizing maps), supervised (e.g. layered perceptrons) or associative (e.g. Hopfield recurrent networks) learning. Other algorithms, not necessarily connectionist, perform other kinds of learning, like reinforcement learning. Machine learning is a very mature domain today and all these algorithms have been extensively studied, at both the theoretical and practical levels, with much success. They have also been related to many functions (in the living and artificial domains) like discrimination, categorisation, sensorimotor coordination, planning, etc., and several neuronal structures have been proposed as the substratum for these kinds of learning 51, 48. Nevertheless, we believe that, as for the previous models, machine learning algorithms remain isolated tools, whereas our systemic approach can bring original views on these problems.
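To make the notion of a Hebbian weight update rule concrete, here is a minimal sketch using Oja's normalization (a standard textbook variant, not a model from our team); under these assumptions the weight vector of a single linear unit converges toward the leading principal component of its inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=8)            # synaptic weights of one linear unit

def oja_step(w, x, eta=0.01):
    """One Hebbian update with Oja's normalization (keeps |w| bounded)."""
    y = w @ x                                # postsynaptic activity
    return w + eta * y * (x - y * w)         # Hebb term y*x minus decay y^2*w

for _ in range(5000):
    x = rng.normal(size=8)
    x[0] *= 3.0                              # make the first dimension dominant
    w = oja_step(w, x)

print(np.round(w, 2))                        # w aligns with the leading principal component
```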
At the cognitive level, most of the problems we face do not rely on only one kind of learning and require instead skills that have to be learned in preliminary steps. That is the reason why cognitive architectures are often referred to as systems of memories, communicating and sharing information for problem solving. Instead of the classical view in machine learning of a flat architecture, a more complex network of modules must be considered here, as is the case in the domain of deep learning. In addition, our systemic approach raises the question of incrementally building such a system, with a clear inspiration from the developmental sciences. In this perspective, modules can generate internal signals corresponding to internal goals, predictions and error signals, able to supervise the learning of other modules (possibly endowed with a different learning rule), which are supposed to become autonomous after an instructing period. A typical example is that of episodic learning (in the hippocampus), storing declarative memory about a collection of past episodes and supervising the training of a procedural memory in the cortex.
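The following deliberately schematic toy (hypothetical names and values, not a model of the hippocampus) illustrates this instructing relation: a fast episodic store keeps raw (situation, outcome) pairs, and off-line replay of these episodes trains a slower procedural mapping that can then respond without consulting the store.

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.normal(size=(2, 4))                      # hidden regularity across episodes
episodes = [(s, A @ s) for s in rng.normal(size=(200, 4))]   # one-shot episodic store

W = np.zeros((2, 4))                             # slower procedural memory
for _ in range(50):                              # off-line replay of stored episodes
    for s, r in episodes:
        W += 0.01 * np.outer(r - W @ s, s)       # delta rule driven by replay

print(np.allclose(W, A, atol=0.05))              # True: the regularity was absorbed
```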
At the behavioral level, as mentioned above, our systemic approach underlines the fundamental links between the adaptive system and the internal and external world. The internal world includes proprioception and interoception, giving information about the body and its needs for integrity and other fundamental programs. The external world includes physical laws that have to be learned and possibly intelligent agents for more complex interactions. Both involve sensors and actuators that are the interfaces with these worlds and close the loops. Within this rich picture, machine learning generally selects one situation that defines useful sensors and actuators, and a corpus with properly segmented data and time, and builds a specific architecture and its corresponding criteria to be satisfied. In our approach, however, the first questions to be raised are to discover what the goal is, where attention must be focused and which previous skills must be exploited, with the help of a dynamic architecture and possibly other partners. In this domain, the behavioral and developmental sciences, observing how and along which stages an agent learns, are of great help to bring some structure to this high-dimensional problem.
At the implementation level, this analysis opens many fundamental challenges, hardly considered in machine learning: stability must be preserved despite on-line continuous learning; criteria to be satisfied often refer to behavioral and global measurements, but they must be translated to control the local circuit level; in an incremental or developmental approach, how will the development of new functions preserve the integrity and stability of the others? In addition, this continuous re-arrangement is supposed to involve several kinds of learning, at different time scales (from milliseconds to years in humans), and to interfere with other phenomena like variability and meta-plasticity.
In summary,
our main objective in machine learning is to propose on-line learning systems, where several modes of learning have to collaborate and where the training protocols are realistic.
We promote here truly autonomous learning, where
the agent must select by itself internal resources (and build them if
not available) to evolve as well as possible in an unknown world, without the
help of any deus ex machina to define parameters, build corpora and
define training sessions, as is generally the case in machine
learning. To that end, autonomous robotics (cf. § 3.4) is a
perfect testbed.
Autonomous robots are not only convenient platforms to implement our
algorithms; the choice of such platforms is also motivated by theories in
cognitive science and neuroscience indicating that cognition emerges
from interactions of the body in direct loops with the world (embodiment of cognition 49). In addition to real
robotic platforms, software implementations of autonomous robotic systems,
including components dedicated to their body and their
environment, may also be exploited, considering that they are
also a tool for studying the conditions for truly autonomous learning.
Real autonomy can be obtained only if the robot is able to define its goals by itself, without the specification of any high-level, abstract cost function or rewarding state. To ensure such a capability, we propose to endow the robot with an artificial physiology, enabling it to perceive some kind of pain and pleasure. It may consequently discriminate internal and external goals (or situations to be avoided). This will mimic circuits related to fundamental needs (e.g. hunger and thirst) and to the preservation of bodily integrity. An important objective is to show that more abstract planning capabilities can arise from these basic goals.
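Purely as an illustration of the idea (the class, variables and values below are assumptions for this sketch, not the team's actual design), an artificial physiology can be pictured as slowly drifting internal variables: leaving a comfort zone produces a pain-like signal, restoring a resource a pleasure-like one, so that the evaluative signal is generated internally, with no external cost function.

```python
class Physiology:
    """Toy internal milieu: variables drift, needs generate valence signals."""

    def __init__(self):
        self.energy, self.hydration = 1.0, 1.0            # internal variables in [0, 1]

    def step(self, ate=0.0, drank=0.0):
        self.energy = min(1.0, self.energy - 0.01 + ate)  # slow metabolic drift
        self.hydration = min(1.0, self.hydration - 0.02 + drank)
        pain = sum(max(0.0, 0.3 - v) for v in (self.energy, self.hydration))
        pleasure = ate + drank                            # restoring a need feels good
        return pleasure - pain                            # internally generated valence

phys = Physiology()
for t in range(60):
    valence = phys.step(drank=0.5 if t == 40 else 0.0)    # the agent drinks once
    # a learning agent would use `valence` as its only evaluative signal
print(round(phys.hydration, 2))
```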
Real autonomy with on-line continuous learning, as described in § 3.3, will be made possible by the elaboration of learning protocols, as is the case in animal conditioning for experimental studies, where performance on a task can be obtained only after shaping through increasingly complex tasks. Similarly, the developmental sciences can teach us about the ordered elaboration of skills and their association in more complex schemes. An important challenge here is to translate these hints to the level of the cerebral architecture.
As a whole, autonomous robotics makes it possible to assess the consistency of our models in realistic conditions of use, and offers our colleagues in the behavioral sciences an object of study and comparison, regarding behavioral dynamics emerging from interactions with the environment, also observable at the neuronal level.
In summary, our main contribution in autonomous robotics is to make autonomy possible by various means: endowing robots with an artificial physiology, giving instructions in a natural and incremental way, and prioritizing the synergy between reactive and robust schemes over complex planning structures.
Modeling the brain to emulate cognitive functions offers direct and indirect application domains. Our models are designed to be confronted with the reality of the life sciences and to make predictions in neuroscience and in the medical domain. Our models also have an impact in the digital sciences; their performance can be questioned in informatics, their algorithms can be compared with models in machine learning and artificial intelligence, and their behavior can be analysed in human-robot interaction. But since what they produce is related to human thinking and behavior, applications will also be possible in various domains of the social sciences and humanities.
One of the most original specificities of our team is that it is part of a laboratory in neuroscience (with a large spectrum of activity, from the molecule to behavior), focused on neurodegenerative diseases and consequently working in tight collaboration with the medical domain. Beyond data and signal analysis, where our expertise in machine learning may prove useful, our interactions are mainly centered on the exploitation of our models. They will classically be regarded as a way to validate biological assumptions and to generate new hypotheses to be investigated in living organisms. Our macroscopic models and their implementation in autonomous robots will allow an analysis at the behavioral level and will propose a systemic framework, the interpretation of which will meet aetiological analysis in the medical domain and the interpretation of intelligent behavior in cognitive neuroscience and related domains, for example educational science.
The study of neurodegenerative diseases is targeted because they match the phenomena we model. In particular, Parkinson's disease results from the death of dopaminergic cells in the basal ganglia, one of the main systems that we are modeling. Alzheimer's disease also results from the loss of neurons, in several cortical and extracortical regions. The variety of these regions, together with large mnemonic and cognitive deficits, requires a systemic view of the cerebral architecture and associated functions, very consistent with our approach.
Of course, the digital sciences are also impacted by our research, at several levels. At a global level, we will propose new control architectures aimed at providing a higher degree of autonomy to robots, as well as machine learning algorithms working in more realistic environments. More specifically, our focus on some cognitive functions in closed loop with a real environment will address currently open problems. This is obviously the case for planning and decision making; it is particularly the case for the domain of affective computing, since motivational characteristics arising from the design of an artificial physiology make it possible to consider not only cold rational cognition but also hot emotional cognition. The association of both kinds of cognition is undoubtedly an innovative way to create more realistic intelligent systems, but also to elaborate more natural interfaces between these systems and human users.
At last, we think that our activities in well-founded distributed computations and high performance computing are not just intended to help us design large scale systems. We also think that we are working here at the core of informatics and, accordingly, that we could transfer some fundamental results in this domain.
Because we model specific aspects of cognition such as learning, language and decision, our models could be directly analysed from the perspective of educational sciences, linguistics, economy, philosophy and ethics.
Furthermore, our implication in science outreach actions, including computer science teaching in secondary and primary school, with the will to analyse and evaluate the outcomes of these actions, is at the origin of a link between our research in computational learning and human learning, providing not only tools but also new modeling paradigms.
As part of the Institute of Neurodegenerative Diseases, which has developed a strong commitment to the environment, we take our share in the reduction of our carbon footprint by deciding to reduce our commuting footprint and the number of yearly trips to conferences.
We are engaged in the EcoMob regional project, in collaboration with the University of Bordeaux and the University of La Rochelle, to study and model the behavior of individuals during their daily trips to and from workplaces. In this context, and based on our previous work on decision making, our team is interested in elucidating how habits are formed and, more importantly, how they can be changed.
N.P. Rougier co-founded the national network of reproducible research (with S. Cohen-Boulakia, F. Lemoine and A. Legrand) and co-organized the kick-off conference "Recherche Reproductible: état des lieux" at Institut Pasteur.
Reservoir Computing is based on random Recurrent Neural Networks (RNNs). Echo State Networks (ESNs) are a particular kind of such networks, with or without leaky neurons. The computing principle can be seen as a temporal SVM (Support Vector Machine): random projections are used to expand the dimensionality of the inputs. The input stream is projected to a random recurrent layer, and a linear output layer (called "read-out") is modified by learning. This training is often done offline, but can also be done in an online fashion.
Compared to other RNNs, the input layer and the recurrent layer (called "reservoir") do not need to be trained. In other RNNs, the recurrent layer is in most cases trained by gradient descent algorithms like Backpropagation-Through-Time, which are not biologically plausible, and is adapted iteratively to hold a representation of the input sequence. In contrast, the random weights of the ESN's reservoir are not trained, but are often adapted to possess the "Echo State Property" (ESP), or at least dynamics suitable for generalization. The reservoir activities include non-linear transformations of the inputs that are then exploited by a linear layer. The states of the reservoir can be mapped to the output layer by a computationally cheap linear regression. The weights of the input and recurrent layers can be scaled depending on the task at hand: these are considered as hyperparameters (i.e. parameters which are not learned), along with the leaking rate (or time constant) of the neurons and the random matrix densities.
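To make this description concrete, here is a minimal Echo State Network in plain NumPy on a toy next-step prediction task (a generic textbook sketch with arbitrary hyperparameter values, independent of ReservoirPy):

```python
import numpy as np

rng = np.random.default_rng(42)

# Minimal ESN: only the linear read-out is trained; hyperparameters are illustrative.
n_in, n_res, spectral_radius, leak = 1, 300, 0.9, 0.3
W_in = rng.uniform(-1, 1, (n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # scale reservoir dynamics

def run_reservoir(u_seq):
    """Collect reservoir states for an input sequence (leaky tanh units)."""
    x, states = np.zeros(n_res), []
    for u in u_seq:
        x = (1 - leak) * x + leak * np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t[:-1]), np.sin(t[1:])
X = run_reservoir(u)
W_out = np.linalg.lstsq(X, y, rcond=None)[0]     # cheap linear regression read-out
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```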
ReservoirPy enables the fast and efficient training of artificial recurrent neural networks.
This library provides implementations and tools for the Reservoir Computing paradigm: a way of training Recurrent Neural Networks without training all the weights, by using random projections. ReservoirPy provides an implementation relying only on general scientific libraries like Numpy and Scipy, in order to be more versatile than specific frameworks (e.g. TensorFlow, PyTorch) and to provide more flexibility to build custom architectures. It includes useful and advanced features to train reservoirs. ReservoirPy especially focuses on the Echo State Networks flavour, based on average firing rate neurons with a tanh (hyperbolic tangent) activation function.
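The snippet below shows typical usage, adapted from the library's documented quick-start for the 0.3.x series; the hyperparameter values are illustrative only.

```python
import numpy as np
from reservoirpy.nodes import Reservoir, Ridge

# Toy one-step-ahead prediction of a sine wave.
X = np.sin(np.linspace(0, 6 * np.pi, 300)).reshape(-1, 1)

reservoir = Reservoir(100, lr=0.3, sr=1.1)   # 100 units; leak rate and spectral radius
readout = Ridge(ridge=1e-6)                  # regularized linear read-out
esn = reservoir >> readout                   # compose nodes into a model

esn = esn.fit(X[:-1], X[1:], warmup=10)      # train the read-out only
prediction = esn.run(X[:-1])                 # reproduce the series one step ahead
```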
Reservoir Computing is based on random Recurrent Neural Networks (RNNs). The computing principle can be seen as a temporal SVM (Support Vector Machine): random projections are used to expand the dimensionality of the inputs towards a non-linear high-dimensional space. The input stream is projected to a random recurrent layer, and an (often linear) output layer (called "read-out") is modified by learning. This training is often done offline, but can also be done in an online fashion.
Compared to other RNNs, the input layer and the recurrent layer (called "reservoir") do not need to be trained. In other RNNs, the structure of the recurrent layer is often modified by gradient descent algorithms like Backpropagation-Through-Time (BPTT). This more classical kind of learning is not biologically plausible and often needs to see the training data several times (i.e. for several epochs), whereas with Reservoir Computing the training data are usually used only once. In contrast, the random weights of the ESN's reservoir are not trained, but are often adapted to possess the "Echo State Property" (ESP), or at least dynamics suitable for generalization. In addition, sparse matrices are often used for these random matrices. Overall, this greatly speeds up the learning process and enables online learning, which is an advantage in many applications.
The reservoir activities include non-linear transformations of the inputs that are then exploited by a linear layer. The states of the reservoir can be mapped to the output layer by a computationally cheap linear regression. The weights of the input and recurrent layer can be scaled depending on the task at hand: these are considered as hyperparameters (i.e. parameters which are not learned) along with the leaking rate (or time constant) of neurons.
This library includes
- a preliminary implementation of a metrizable symbolic data structure allowing symbolic derivations to be performed using numerical embeddings, in an explicit (thus easily explainable) way, targeting symbolic reinforcement learning and open-ended creative complex problem-solving;
- a set of C/C++ routines for basic calculations, including the portions of code executed on connected objects, which allow the measurement of learning traces and the control of experiments;
- C/C++ or Javascript tools to interface the different software modules used, and a Python wrapper to develop on top of these functionalities.
This year we have addressed several important questions related to our scientific positioning. Central to this positioning, we have studied and modeled bio-inspired learning mechanisms and collaborative mnesic functions
(cf. § 8.2). We have also studied higher cognitive functions related to cognitive control (cf. § 8.3) and have considered how important characteristics can be associated with this framework, like symbolic abstract knowledge (cf. § 8.4) and oscillations (cf. § 8.5). Finally, we have also pursued our work on language processing in birds and robots (cf. § 8.6).
Within the development of the ReservoirPy library, we have released various versions, from 0.3.6 to 0.3.10. We presented Reservoir Computing principles and the ReservoirPy library at the AI4industry 2023 workshop (Jan23, Bordeaux), at the ECML-PKDD "Tutorial on Sustainable Deep Learning for Time-series" (Sept23, Turin, Italy), at the University of California, Los Angeles, invited by A. Warlaumont (Nov23, USA), at the "5ème Rencontres Chercheur·euse·s et Ingénieur·e·s" at Institut Henri Poincaré, invited by the Phimeca company (Nov23, Paris), and also at internal (Mnemosyne Green Days, Oct23, Lacanau) and project meetings (DFKI-Inria NEARBY project, Dec23, virtual). ReservoirPy was used by two projects at the Hack1robo hackathon (Jun23, Bordeaux). We held a coding "sprint" at the PyCon conference (Feb23, Bordeaux) 17. Regarding the interface to the library in the R language, jointly developed with the SISTM team and called reservoirR, a collaboration with an original application to COVID prediction is in progress.
In a collaboration with C. Moulin-Frier et al. (Inria Flowers team), we explored a new way to combine Reservoir Computing with Reinforcement Learning (RL). The general aim was to model the adaptive abilities of animals to their environments; the study explores the interplay between evolutionary and developmental processes using meta reinforcement learning. In the study 26, we evolved reservoirs, focusing on their hyperparameters rather than weight values. These reservoirs are then used within a Reinforcement Learning framework to learn behavioral policies. The study tests the model in various simulated environments (MuJoCo), examining its ability to handle tasks with partial observability, learn locomotion, and generalize behaviors to new tasks. The results indicate that evolving reservoirs can enhance reinforcement learning in diverse challenging tasks; in particular, using reservoirs accelerates the training convergence of the PPO (Proximal Policy Optimisation) RL algorithm.
In cognitive control, the working memory in the prefrontal cortex and the episodic memory in the hippocampus play a major role in the definition of flexible contextual rules that can replace the dominant behavior. This year, we have concluded two important doctoral works related to this topic: Snigha Dagar has proposed a bio-informed model of the prefrontal cortex, able to learn and manipulate abstract and concrete rules, and defended her work in April 21; Hugo Chateau-Laurent, who will defend his thesis in February next year, considers the role of the hippocampus in episodic memory and in cognitive control.
Within the AIDE AEx (cf. § 10.3.4), we continued developing the idea of a symbolic description of a complex human learning task, in order to contribute to a better understanding of how we learn, in the very precise framework of a task named #CreaCube, related to initiation to computational thinking and presented as an open-ended problem, which involves solving a problem and appealing to creativity 31; we also participated in the experimental design and analysis 27, 34. We also proposed to map an ontology onto a SPA-based architecture, with a preliminary partial implementation in spiking neural networks, in order to provide an effective link between the symbolic presentation of information and a biologically plausible numerical implementation. We are also still working on making explicit how a reinforcement learning paradigm can be applied to a symbolic representation of a concrete problem-solving task, modeled here by an ontology, for instance with applications to robotics. This work is embedded in a strong collaboration with education science collaborators 18 working on computational thinking initiation and computer science tools in education, with a multi-disciplinary vision of cognitive function modeling.
This year, we carried on studying the neural oscillations involved in cognition.
As part of the PhD project of Nikolaos Vardalakis, we continued developing and studying our detailed model of the hippocampal formation and its interactions with the medial septum (modeled as Kuramoto oscillators). In particular, we showed how electrical stimulation can help restore theta-nested gamma oscillations, which are characteristic of memory encoding, and how the timing of this stimulation (relative to the phase of the theta rhythm) is critical to the restoration of gamma activity in the case of impaired theta phase reset between the hippocampus and the medial septum.
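For readers unfamiliar with this formalism, the sketch below simulates a minimal Kuramoto population of coupled phase oscillators in the theta band, the kind of abstraction used here for the medial septum; population size, coupling strength and frequency spread are arbitrary illustrative values, not those of the actual model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Mean-field Kuramoto model: d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
N, K, dt = 100, 5.0, 1e-3
omega = 2 * np.pi * rng.normal(7.0, 0.2, N)     # natural frequencies around 7 Hz (theta)
theta = rng.uniform(0, 2 * np.pi, N)            # random initial phases

for _ in range(5000):                           # 5 s of simulated time (Euler steps)
    z = np.mean(np.exp(1j * theta))             # order parameter z = r * exp(i * psi)
    theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))

print("synchrony r =", round(abs(np.mean(np.exp(1j * theta))), 2))  # close to 1
```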
We started a project of modeling the prefrontal cortex (PFC) with an MSc student by reproducing an existing computational model of this area, which we aim to couple with our hippocampus model in order to have a global model of the brain areas involved in memory and the oscillations supporting their interactions. We are currently preparing a publication in the journal "ReScience C" about the replication of the computational model of the PFC, while the larger PFC-hippocampus project has been submitted to the ANR JCJC call (AAPG 2024).
At the microscopic scale, as part of the PhD project of Maeva Andriantsoamberomanga, we are currently investigating the effects of extracellular electrical stimulation on hippocampal pyramidal cells, and in particular we are studying the influence of electrode placement on the recruitment of action potentials using very detailed multicompartmental neuron models capturing the geometry of hippocampal neurons.
Finally, we studied a different cognitive process, namely attention, and showed in 7 that the frequency of neural oscillations in a computational model of the fronto-parietal network is critical to obtaining high sensitivity of target detection in a visual attention task.
We pursue our research on understanding how children acquire language through noisy supervision, and model how their brain could process language with the little information available. We take the perspective of a learning agent by focusing on robotic corpora, in order to integrate the "Grounding Problem" (a question also important for other topics of the team, cf. § 8.4).
In particular, the Reservoir Computing paradigm makes a diversity of computations available "right at the start of learning". It is as if the child or agent had these computations (nearly) from the beginning, to bootstrap developmental learning. In this perspective, it seems natural that this enables a child or an agent to learn more quickly than with a classical learning algorithm (as we have shown previously when comparing reservoirs and LSTMs). This year, we demonstrated this from a new perspective with the study of Leger et al. 26, combining this paradigm with meta-reinforcement learning (cf. § 8.2).
Moreover, we have pushed forward the studies related to "brain encoding" of brain imaging data (fMRI, MEG, etc.) of participants listening to or reading stories (said to be more ecological conditions than previous imagery experiments). From the activations of a language model processing the same stimuli as the participant, we predict the imaging data (fMRI/MEG) of the participant, synchronising points in time. There is a vast literature on linguistic brain encoding for functional MRI (fMRI) related to syntactic and semantic representations. Magnetoencephalography (MEG), with a higher temporal resolution than fMRI, enables us to look more precisely at the timing of linguistic feature processing. Inspired by previous fMRI studies, we studied 13, 33 MEG brain encoding using basic syntactic and semantic features, with various context lengths and directions (past vs. future), for a dataset of 8 subjects listening to stories. We found that BERT representations predict MEG significantly, unlike other syntactic features or word embeddings (e.g. GloVe), allowing us to encode MEG in a distributed way across auditory and language regions in time. In particular, past context is crucial for obtaining significant results. In a follow-up experiment 32, we investigated whether speech models outperform language models in predicting speech-evoked brain activity, whether language models and speech models combine new word representations with previous context, and what information is shared between them over time. Moreover, we wrote a review paper 29. In this survey, we first discuss popular representations of language, vision and speech stimuli, and present a summary of neuroscience datasets. Further, we review popular deep learning based encoding and decoding architectures and note their benefits and limitations. Finally, we conclude with a brief summary and a discussion about future trends.
X. Hinaut had a contract with the Lyre lab of Suez company (Pessac) to supervise a master student on a study using reservoir computing.
This project is a convergence point between past research approaches toward new computational paradigms (adaptive reconfigurable architectures, cellular computing, computational neuroscience, and neuromorphic hardware).
This project represents a significant step toward the definition of a true fine-grained distributed, adaptive and decentralized neural computation framework. Using self-organized neural populations onto a cellular machine where local routing resources are not separated from computational resources, it will ensure natural scalability and adaptability as well as a better performance/power consumption tradeoff compared to other conventional embedded solutions.
Language involves several hierarchical levels of abstraction. Most models focus on a particular level of abstraction, making them unable to model bottom-up and top-down processes. Moreover, we do not know how the brain grounds symbols in perceptions, nor how these symbols emerge throughout development. Experimental evidence suggests that perception and action shape one another (e.g. motor areas are activated during speech perception), but the precise mechanisms involved in this action-perception shaping at various levels of abstraction are still largely unknown. The PI proposes to create a new generation of neural-based computational models of language processing and production: i.e. to (1) use biologically plausible learning mechanisms; (2) create novel sensorimotor mechanisms to account for action-perception shaping; (3) build hierarchical models from the sensorimotor to the sentence level; (4) embody such models in robots in order to ground semantics.
The project will last four years (2022-2025). We regularly discuss with our colleague Gaël Jobard from the University of Bordeaux.
In the wake of the emergence of large-scale language models such as ChatGPT, the BrainGPT project is at the forefront of research in Artificial Intelligence and Computational Neuroscience. While these models are remarkably efficient, they do not reflect how our brain processes and learns language. BrainGPT takes up the challenge by focusing on the development of models more faithful to human cognitive functioning, inspired by data from brain activity during listening or reading. The ambition is to create more efficient models, less reliant on intensive computations and massive volumes of data. BrainGPT will open new perspectives on our understanding of language and cognition.
The project will last four years (2023-2027).
The modelling and assessment of computational thinking (CT) skills is a challenge that has a major impact on how learning activities are integrated into the curricula of OECD countries, particularly in terms of equal opportunities. The Artificial Intelligence Devoted to Education (AIDE) Inria exploratory action (AEx) aims to help address this challenge in an innovative way, by modelling computational thinking through a neuro-inspired cognitive model allowing analysis of the learner engaged in learning activities.
This is an exploratory project, finishing this year. We are taking the scientific risk of looking at things differently, and have had the chance to generate a few interesting outcomes. For example, instead of using so-called artificial intelligence mechanisms to build "assistants", i.e., algorithms to learn better, we focus on how formalisms from the field of "artificial intelligence" (numerical and symbolic) contribute to a better understanding of how we learn. But it is also research with applications. Our hope is to contribute to the reduction of educational inequalities and to improve school perseverance, focusing on transversal competencies, also called 21st-century competencies, which include computational thinking.
The University of Bordeaux has labeled one of our activities as an interdisciplinary and exploratory research project. In collaboration with university partners in the field of law, the aim of this project is to understand the changes in society imposed by the development of digital surveillance technologies in a democratic context and to organize seminars and general public conferences to disseminate this information.
The Hyperhum@in research program brings together a core group of researchers in the humanities and social sciences and in the life sciences, committed to questioning exploratory engineering projects "at the frontiers of the human". The second part, entitled "brain-machine: analogy, model, identity", proposes to take a combined look at Artificial Intelligence and cognitive neuroscience, which have today become inseparable in their mutual quest for intelligibility of the functioning of the human brain.
The University of Bordeaux has labeled this project as an interdisciplinary and exploratory research project.
The RT-HippoNeuroStim project aims at translating the hippocampal model previously developed by A. Aussel, together with Fabien Wagner (IMN), onto the new neuromorphic computing architecture developed by the team of Timothée Levi at the IMS. This architecture is based on Field Programmable Gate Arrays (FPGA) and is much more efficient than current simulation software. We will leverage this platform to simulate the activity of the hippocampus in real time, which will greatly accelerate research on hippocampal neurostimulation.
Project gathering researchers from: University of La Rochelle (Cerege lab in social sciences and L3I lab in computer science); University of Bordeaux (IRGO lab in organisation management); Town and suburbs of La Rochelle.
The goal of this project was to study and model user urban mobility behaviours in an eco-responsibility context. Our team was in charge of studying models of decision in such complex contexts, in interaction with teams in social sciences aiming at influencing user behaviours.
The project ended this year with a workshop organized in La Rochelle on October 26th, with scientific talks in the morning and open sessions in the afternoon, to disseminate our results to a wider audience, including public and private stakeholders.
Project gathering researchers from: MSH Lorraine (USR3261), InterPsy (EA 4432), APEMAC, EPSaM (EA4360), Archives Henri-Poincaré (UMR7117), Loria (UMR7503) and Mnemosyne.
PsyPhiNe is a pluridisciplinary and exploratory project between philosophers, psychologists, neuroscientists and computer scientists. The goal of the project is to explore cognition and behavior from different perspectives. The project aims at exploring the idea of attributions of intelligence or intentionality, assuming that our intersubjectivity and our natural tendency to anthropomorphize play a central role: we project onto others parts of our own cognition. To test these hypotheses, we ran a series of experiments with human subjects confronted with a motorized lamp that may or may not interact with them while they perform a specific task.
We are members of three Regional Research networks, devoted to Artificial Intelligence, Robotics and Computational Education.
We are members of two Networks of Research of the University of Bordeaux: PHDS (Public Health Data Science) and RobSys (Robustness of Autonomous Systems).
F. Alexandre was in charge of the scientific organization of the one-week workshop AI for Industry (AI4I'23), gathering 400 attendees on January 16-20, with teaching in the morning and hands-on experiments on industrial applications in the afternoon.
F. Hyseni, N.P. Rougier, C. Mercier and N. Trouvain co-organized the first Open Science workshop at Bordeaux Neurocampus.
N.P. Rougier co-organized the International workshop “Software, Pillar of Open Science”.
N.P. Rougier co-organized "Recherche Reproductible: état des lieux" at Institut Pasteur and co-founded the "Réseau français de la recherche reproductible".
X. Hinaut co-organized the ECML-PKDD "Tutorial on Sustainable Deep Learning for Time-series" (Sept23, Turin, Italy).
X. Hinaut and N. Trouvain organized a coding "sprint" at PyCon conference (Feb23, Bordeaux) 17.
X. Hinaut co-organised the first edition of the Hack1robo hackathon.
F. Alexandre: ACAIN23, DATAQUITAINE23, ICANN23;
X. Hinaut: Drôles d'objets 2023;
N.P. Rougier: CogSci23, ICANN23, Drôles d'objets 2023;
A. Aussel has been elected as a member of the program committee for the Organization for Computational Neurosciences (OCNS) for three years, and as such has helped select keynote and oral talks and reviewed abstract submissions for the CNS conference in July 2023 in Leipzig.
F. Alexandre: reviewer for ICANN23, TAIMA23;
X. Hinaut: reviewer for SFN23, CogSci23, Drôles d'Objets;
N.P. Rougier: reviewer for ICANN23, CogSci23, Bench 2023;
X. Hinaut: reviewer for Neural Networks;
F. Alexandre is Academic Editor for PLOS ONE; Review Editor for Frontiers in Neurorobotics; member of the editorial board of Cognitive Neurodynamics.
N.P. Rougier is Editor-in-Chief for ReScience C and ReScience X, associate editor for PeerJ Computer Science, and Review Editor for Frontiers in Robotics and Frontiers in Decision Making.
A. Aussel has been a reviewer for the journal PNAS.
F. Alexandre: talk about Cognitive Control to the interdisciplinary seminar in Cognitive Informatics of the University of Québec in Montréal; Webinar to the European Digital Innovation Hub DIHNAMIC about ChatGPT. Talks given to companies (Dassault Systems, LVMH) and Socio-Economic actors (Ministry of Defence) about Artificial Intelligence.
N.P. Rougier has been invited to the University of Lancaster, the colloquium of the Mathematics Institute of the University of Potsdam, the SOFT Days (Bordeaux), the European College of Sport Science conference (Paris), Hacking Cognition (Paris), the Huawei Technical Summit (Helsinki), and Inria (Nancy).
X. Hinaut was invited to give a talk at the interdisciplinary seminar in Cognitive Informatics of the University of Québec in Montréal by S. Harnad (Jan23, Canada, remotely); to give a tutorial on Reservoir Computing at the AI4industry workshop (Jan23, Bordeaux, FR); and to give a talk at IIIT Hyderabad (Jan23, India). He gave a tutorial on ReservoirPy at the ECML-PKDD "Tutorial on Sustainable Deep Learning for Time-series" (Sept23, Turin, Italy). He was invited to give a talk at SCSNL, Stanford (Nov23, Palo Alto, USA), and was invited to the University of California, Los Angeles by A. Warlaumont (Nov23, LA, USA), both to give a scientific talk and to give a tutorial on Reservoir Computing and ReservoirPy. P. Bernard (helped by X. Hinaut for preparation) was invited to present Reservoir Computing and ReservoirPy at the "5ème Rencontres Chercheur·euse·s et Ingénieur·e·s" at Institut Henri Poincaré, invited by the Phimeca company (Nov23, Paris, FR).
F. Alexandre was auditioned on December 12th at the French National Assembly by the OPECST (Office Parlementaire d'Evaluation des Choix Scientifiques et Technologiques), about Artificial Intelligence; he was also an expert for the FRQNT (Fonds de Recherche du Québec Nature et Technologies) and for the ANID (Agencia Nacional de Investigacion y Desarrollo) in Chile.
N.P. Rougier is member of the national network of Open Science experts, member of the software college for the national committee for Open Science, Open Science expert for SwissUniversities and AI expert for Sorbonne university;
F. Alexandre is a member of the steering committee of the Inria Bordeaux Sud-Ouest Project Committee, a member of the Inria International Chairs committee, and the corresponding member for Inria Bordeaux Sud-Ouest of the Inria Operational Committee for the assessment of Legal and Ethical risks; he was also a member of the Program Committee of the yearly Inria Scientific days (JSI'23).
N.P. Rougier is the corresponding member for Inria Bordeaux Sud-Ouest on scientific edition, head of the Computational Neuroscience team at Institute of Neurodegenerative Diseases.
X. Hinaut is a member of the "Committee for Technological Development" (CDT) and of the "Committee for Research Jobs" (CER) of Inria Bordeaux Sud-Ouest, and the contact point for the PlaFRIM high-performance computing cluster. He is also co-chair of the IEEE Task Force (TF) on "Reservoir Computing", main chair of the "Language and Cognition" TF of the IEEE Cognitive and Developmental Systems Technical Committee, and a member of the IEEE TF on "Action and Perception". He is co-head of the "Apprentissage et Neurosciences pour la Robotique" (GT8) CNRS Robotics Working Group. He manages a WP in the PHDS Impulsion Bordeaux network.
Many courses are given in French universities and engineering schools, at different levels (LMD), by most team members, in computer science, applied mathematics, neuroscience and cognitive science.
F. Alexandre: two articles for The Conversation, about word embeddings and about ChatGPT; one article in the newspaper Le Figaro about Artificial Intelligence; a contribution to the online blog Binaire of the newspaper Le Monde about the interdisciplinary book.
F. Alexandre: two talks given at the regional Maison de la Science to introduce Artificial Intelligence and ChatGPT to 25 high-school teachers; training of high-school mathematics teachers of the Landes department about ChatGPT, on October 11th in Mont de Marsan.
X. Hinaut gave a talk on songbirds at the interdisciplinary seminar of Inria, Bordeaux, and (helped by F. Alexandre for the preparation) a talk on AI at the same seminar.
H. Chateau-Laurent and X. Hinaut gave interactive performances twice during the year. First, during the "Semaine du Cerveau" (Mar23), they gave a conference to more than a hundred high-school students, arguing that science and art can intertwine: they mixed scientific presentations and musical performances, and some students could interact with an optimisation tool (HyperOpt) 50 to optimise musical patterns during the final musical performance. Then, they staged a more advanced interaction with an adult public at the "Drôles d'Objets" conference (Nancy, May23). They first invited the public to interact, through a MIDI keyboard, with a visual interface to explore the latent space of a Generative Adversarial Network producing birdsong sounds. Once the participants had selected four canary syllables, they could select the rhythm at which these syllables should be played. This was done through an intuitive interface easing the use of Euclidean rhythms 58.
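For readers unfamiliar with Euclidean rhythms: E(k, n) spreads k onsets as evenly as possible over n steps. A minimal generator is sketched below (a Bresenham-style formulation, equivalent up to rotation to the Bjorklund algorithm underlying 58; this is an illustration, not the interface used in the performance).

```python
def euclidean_rhythm(onsets, steps):
    """Spread `onsets` hits as evenly as possible over `steps` positions
    (Bresenham-style construction, equivalent to Bjorklund up to rotation)."""
    return [(i * onsets) % steps < onsets for i in range(steps)]

# E(3, 8) yields the familiar tresillo pattern:
print("".join("x" if hit else "." for hit in euclidean_rhythm(3, 8)))  # x..x..x.
```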
X. Hinaut, with colleagues from Inria and the University of Bordeaux, organised the first edition of the Hack1robo hackathon, focusing on areas such as AI, robotics, and cognitive sciences. Held from June 2-4, 2023, at the Cap Sciences FabLab, the event aimed to mediate and disseminate research knowledge, making technology accessible. Open to anyone with relevant knowledge or skills, the 2023 edition welcomed 27 participants. It sought to spark interest and potential vocations in students from the Bordeaux academic ecosystem, linking them with Inria's and the University's research labs. Two promising projects from the 2023 edition were chosen for the Hackatech hackathon in November 2023, aiming to develop commercializable technology.