Ctrl-A is motivated by the observation that computing systems, large (data centers) or small (embedded), are increasingly required to adapt to the dynamic fluctuations of their environments and workloads, to evolutions of their computing infrastructures (mobile, shared, or subject to faults), and to changes in application modes and functionalities. Their administration, traditionally performed by human system administrators, needs to be automated in order to be efficient, safe and responsive. Autonomic Computing 27 is the approach that emerged in the early 2000s in distributed systems to answer this challenge, in the form of feedback loops for self-administration control. These loops address objectives such as self-configuration (e.g., in service-oriented systems), self-optimization (resource consumption management, e.g., energy), self-healing (fault tolerance, resilience), and self-protection (security and privacy).
Therefore, there is a pressing and increasing demand for methods and tools to design controllers for self-adaptive computing systems that ensure the quality and safety of the behavior of the controlled system. The critical importance of the quality of control on performance and safety in automated systems, in computing as elsewhere, calls for a departure from traditional approaches that rely on ad hoc, often empirical, unsafe and application-specific solutions.
The main objective of the Ctrl-A project-team is to develop a novel framework for model-based design of controllers in Autonomic Computing, exploiting techniques from Control Theory 26, particularly Discrete Event Systems 31, but also other forms of control.
We want to contribute generic Software Engineering methods and tools for developers to design appropriate controllers for their particular reconfigurable architectures, software or hardware, and integrate them at middleware level.
We want to improve the concrete usability of techniques from Control Theory by specialists of computing systems 7, and to provide tool support for our methods in the form of specification languages and compilers, as well as software architectures.
We address policies for self-configuration, self-optimization (resource management, low power), self-healing (fault tolerance) and self-protection (security).
Our research activity is mainly targeted at models and architectures, with a notable part also devoted to applications and case studies, in cooperation with specialists of the application domains, either academic researchers (e.g., in HPC) or industrial partners (e.g., CEA, Orange labs, in IoT). We adopt a strategy of parallel investigation of, on the one hand, generic models and tools for the design support of control in Autonomic Computing, and, on the other hand, experimental identification of needs and validation of proposals. Therefore we have activities related to several application domains, for each of which we build cooperations with specialists, for example middleware platforms for Cloud systems 3, HPC architectures (e.g., multi-core 11), Dynamic Partial Reconfiguration in FPGA-based hardware 6, and the IoT and smart environments 8.
The main objective of Ctrl-A translates into a number of scientific axes:
Achieving the goals of Ctrl-A requires multidisciplinarity and expertise from several domains. The expertise in Autonomic Computing and programming languages is covered internally by members of the Ctrl-A team. On the side of theoretical aspects of control, we have active external collaborations with researchers specialized in Control Theory, in the domain of Discrete Event Systems as well as in classical, continuous control.
Additionally, an important requirement for our research to have impact is access to concrete, real-world computing systems requiring reconfiguration control. We target autonomic computing at different scales, in embedded systems or in cloud infrastructures, which are traditionally different domains. This is addressed by external collaborations with experts in either hardware or software platforms, who generally lack our competences in model-based control of reconfigurations.
We attack the problem of designing well-regulated and efficient self-adaptive computing systems by developing novel strategies for their runtime management. The application domains that we typically target therefore involve computing systems with relatively coarse-grain computation tasks (e.g., image processing or HPC tasks, components or services, control functions in Industrial Control Systems), to be run on distributed heterogeneous architectures. Runtime, unpredictable variations can come from the environment (e.g., data values, user inputs, physical sensors), the application (e.g., functional modes depending on algorithm progress, computation phases, or business processes), or the infrastructure (e.g., resource overload, faults, temperature variations, communication network variations, cyber-attacks).
The general control problem then consists of deciding at runtime the choice of which implementation or version of tasks to dynamically deploy or redeploy on which computing resources, in order to enforce high-level strategies involving objectives in terms of constraints, optimization, logical invariance or reachability. The design of such controllers involves the design of appropriate sensors and actuators in the computing infrastructures. It is based on the use of modeling and decision formalisms of different kinds according to the application characteristics.
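As a minimal illustration of this kind of runtime decision (with an invented catalog of task versions and constraints, not tied to any of our actual platforms), the following Python sketch selects, at each control step, the cheapest admissible configuration:

```python
# Hypothetical sketch of the runtime reconfiguration decision described above:
# choose, at each control step, which version of each task to deploy so that
# constraints hold and a cost objective (here, power) is minimized.
from itertools import product

# Hypothetical catalog: each task version has a power need and a QoS level.
VERSIONS = {
    "image_proc": [{"name": "accurate", "power": 30, "qos": 0.95},
                   {"name": "degraded", "power": 12, "qos": 0.70}],
    "logging":    [{"name": "full", "power": 8, "qos": 1.0},
                   {"name": "sampled", "power": 3, "qos": 0.8}],
}

def decide(power_budget, min_qos):
    """Return the cheapest admissible configuration, or None if none exists."""
    best, best_power = None, float("inf")
    for combo in product(*VERSIONS.values()):
        power = sum(v["power"] for v in combo)
        qos = min(v["qos"] for v in combo)
        if power <= power_budget and qos >= min_qos and power < best_power:
            best, best_power = combo, power
    return best

# One step of the loop: the monitored budget and QoS requirement may change at
# runtime (environment, infrastructure, application mode), triggering a new choice.
print(decide(power_budget=40, min_qos=0.7))
```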
The objectives of Ctrl-A are achieved and evaluated in both of our main application domains, thereby exhibiting their similarities from the point of view of reconfiguration control.
One main application domain for the research of Ctrl-A concerns Cloud-Edge and High-Performance Computing. In these contexts, tasks can be achieved following a choice of implementations or versions, as in, e.g., service-oriented approaches. Each implementation has its own characteristics and requirements, e.g., w.r.t. resources consumed and QoS offered. The systems' execution infrastructures are heterogeneous, with different computing processors, a variety of peripheral devices (e.g., I/O, video ports, accelerators), and different means of communication. This hardware or middleware level also presents adaptation potential, e.g., varying quantities of resources or sleep and stand-by modes.
The kinds of control problems encountered in these self-adaptive systems concern the navigation in the configurations space defined by choice points at the levels of applications, tasks, and architecture. The pace of control is more sporadic, and slower than the instruction-level computation performance inside the large-grain tasks.
In this application area, we currently focus especially on the runtime management of resources for energy objectives and digital sobriety, e.g., at the level of a data center by dynamically harvesting unused resources, or at node level by dynamically adjusting frequency under QoS constraints. Ongoing or recent cooperations in this application domain feature Qarnot Computing (Inria challenge PULSE), Orange labs, Nokia, and Argonne National Laboratory (USA) (JLESC).
Another general application domain in which to confront our approaches and models is Industrial Control Systems (ICS), which can be seen as a form of Cyber-Physical Systems (CPS) and IoT, more specifically Industry 4.0 related infrastructures, like SCADA. In this application domain we particularly focus on cyber-security problems, considered at the operational level, in terms of Intrusion Detection Systems (IDS), as well as reaction to attacks, in the form of self-adaptive resilience and self-protection. In the context of the evolution of technologies in ICS, namely their softwarization and virtualization, we also apply the approaches of our Cloud-Edge application domain, e.g., to virtualized control of Smart Grids. The adaptation problems concern both the functional aspects of the applications and the deployment and reconfiguration issues of the supporting middleware.
Ongoing or recent cooperations in this application domain feature Naval Group, CEA, and RTE (the French electricity transmission company).
In 2023, we continued to moderate the team's travel and to favor submissions and publications in journals.
Our activities in energy-efficient management of computing infrastructures involve running experiments on large computing infrastructures, e.g., using Grid'5000, where we spent approximately 290k core·hours of computing.
We have research activities on energy efficiency in computing systems, at the level of nodes (RAPL) as well as at the higher level of grids (RJMS, CiGri), which contribute to better-mastered energy consumption in computing.
In the longer term, we orient our research towards topics explicitly targeting environmental as well as social impacts, in the form of user involvement through usage choices. In line with our topic of autonomic management, self-adaptive systems and their control, we consider for example control objectives involving trade-offs between performance or QoS and economy of resources and impact, so that users can choose a level of sobriety, and possibly limited or degraded quality, thereby allowing for potential resource and energy savings. Our starting cooperation with Qarnot Computing has the potential to involve not only technical considerations but also societal and regulatory constraints, or user and customer choices.
The perspectives involve the notion of computing within limits, especially when the limits vary dynamically, and when they are either imposed (e.g., resilience when subjected to cyber-attacks or faults) or chosen (e.g., accepting lower quality outside of phases requiring higher levels due to urgency).
Eric Rutten is co-editor, with Sophie Cerf from the Spirals team at Inria Lille and Alessandro Papadopoulos from Mälardalen University (Sweden), for the ACM Transactions on Autonomous and Adaptive Systems (TAAS) special issue on Control for Computing Systems.
Quentin Guilloteau and colleagues designed and proposed a tutorial on Control for Computing, targeted at an audience of computer scientists with no background in Control Theory (which is the general case), and made it available online: tutorial. This tutorial was offered to the public on five occasions in 2023: sessions.
Heptagon is an experimental language for the implementation of embedded real-time reactive systems. It is developed inside the Synchronics large-scale initiative, in collaboration with Inria Rhône-Alpes. It is essentially a subset of Lucid Synchrone, without type inference, type polymorphism and higher-order functions. It is thus a Lustre-like language extended with hierarchical automata, in a form very close to SCADE 6. The intention in making this new language and compiler is to develop new aggressive optimization techniques for sequential C code and compilation methods for generating parallel code for different platforms. This explains many of the simplifications made in order to ease the development of compilation techniques.
The current version of the compiler includes the following features:
- Inclusion of discrete controller synthesis within the compilation: the language is equipped with a behavioral contract mechanism, where assumptions can be described, as well as an "enforce" property part. The semantics of the latter is that the property should be enforced by controlling the behaviour of the node equipped with the contract. This property is enforced by an automatically built controller, which acts on free controllable variables given by the programmer. This extension has been named BZR in previous works.
- Expression and compilation of array values with modular memory optimization. The language allows the expression of, and operations on, arrays (access, modification, iterators). With the use of location annotations, the programmer can avoid unnecessary array copies.
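As a much-simplified illustration of the "enforce" semantics, the following Python sketch (an explicit-state toy example of our own, whereas the actual BZR compilation delegates synthesis to symbolic tools such as ReaX) computes the greatest controlled-invariant set for a safety property and uses it to restrict a controllable variable at runtime:

```python
# Simplified, explicit-state illustration of discrete controller synthesis for a
# safety ("enforce") property; the actual BZR toolchain works symbolically.
# States, one uncontrollable input u, one controllable input c (all hypothetical).
STATES = {"idle", "active", "overload"}
BAD = {"overload"}                      # property to enforce: never reach "overload"
U, C = [False, True], [False, True]     # uncontrollable / controllable inputs

def step(state, u, c):
    """Hypothetical transition function of the node under control."""
    if state == "idle":
        return "active" if (u and c) else "idle"
    if state == "active":
        if u and c:
            return "overload"
        return "idle" if not c else "active"
    return "overload"

def synthesize():
    """Greatest set W of states from which, whatever u, some c keeps us in W."""
    W = STATES - BAD
    changed = True
    while changed:
        changed = False
        for s in set(W):
            if any(all(step(s, u, c) not in W for c in C) for u in U):
                W.remove(s)
                changed = True
    return W

W = synthesize()

def controller(state, u):
    """At runtime, pick a value of the controllable variable keeping the system in W."""
    for c in C:
        if step(state, u, c) in W:
            return c
    raise RuntimeError("uncontrollable state reached")

print(W, controller("active", True))
```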
We work on the general notion of Software Engineering for designing controllers for Self-Adaptive Systems, and particularly on the potential contribution of Control Theory to provide for Assurances in Self-Adaptive Software Systems (book chapter 7). We propose to consider feedback control as a behavioral, model-based instantiation of the MAPE-K loop in Autonomic Computing (book chapter 10). We consider that complex systems can require multiple loops, motivated by the fact that different sub-problems can require combinations of different decision and control techniques.
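As a minimal sketch of this view (our own illustration, with invented sensor, actuator and control law), the MAPE-K phases can be organized as a feedback loop where the Plan step is a control law over shared knowledge:

```python
# Minimal sketch (our own illustration) of a MAPE-K loop where the Plan step is a
# feedback control law rather than ad hoc rules, as discussed above.
class MapeKLoop:
    def __init__(self, sensor, actuator, controller):
        self.sensor = sensor          # Monitor: reads the managed system
        self.actuator = actuator      # Execute: acts on the managed system
        self.controller = controller  # Plan: behavioral model-based decision
        self.knowledge = {}           # K: shared state between phases

    def step(self, setpoint):
        measurement = self.sensor()                        # Monitor
        error = setpoint - measurement                     # Analyze
        action = self.controller(error, self.knowledge)    # Plan (control law)
        self.actuator(action)                              # Execute

# Several such loops, each with its own controller and knowledge, can be composed
# and coordinated, e.g., one loop per concern (resources, security, QoS).
if __name__ == "__main__":
    state = {"load": 0.0}
    loop = MapeKLoop(
        sensor=lambda: state["load"],
        actuator=lambda a: state.update(load=state["load"] + a),
        controller=lambda err, k: 0.5 * err,   # proportional decision, for illustration
    )
    for _ in range(5):
        loop.step(setpoint=1.0)
    print(state)
```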
One particularly interesting topic is the combination of Control and Machine Learning. In our team we propose to address it by considering the composition of learning techniques (particularly Reinforcement Learning and Neural Networks) with controllers based on Control Theory (particularly deterministic ones), in order to maintain guarantees on the behaviors of the managed system. As a first result, we performed a survey of the state of the art in interactions between RL and deterministic control, some of them classic, others less explored 13.
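One classic composition pattern covered by this line of work is to let the learning agent propose actions while a deterministic controller filters or overrides them to preserve a guarantee. The sketch below is our own simplified illustration of that pattern, with an invented one-step model and bound, not the published results:

```python
import random

# Simplified illustration of composing a learning-based policy with a deterministic
# safety controller: the learned action is applied only if it keeps the system
# within bounds, otherwise a conservative control law takes over.
TEMP_MAX = 80.0          # hypothetical safety bound on the managed variable

def learned_policy(temp):
    """Stand-in for an RL/neural policy: proposes a (possibly unsafe) power level."""
    return random.uniform(0.0, 100.0)

def safe_fallback(temp):
    """Deterministic controller guaranteeing the bound (conservative proportional law)."""
    return max(0.0, 2.0 * (TEMP_MAX - temp))

def predicted_temp(temp, power):
    """Hypothetical one-step model of the system dynamics."""
    return 0.9 * temp + 0.05 * power

def composed_controller(temp):
    proposal = learned_policy(temp)
    if predicted_temp(temp, proposal) <= TEMP_MAX:
        return proposal                 # learned action accepted
    return safe_fallback(temp)          # overridden to maintain the guarantee

temp = 60.0
for _ in range(20):
    temp = predicted_temp(temp, composed_controller(temp))
assert temp <= TEMP_MAX
print(f"final temperature: {temp:.1f}")
```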
This work is done in cooperation with the Spirals team at Inria Lille: Sophie Cerf.
Another case of high potential is the combination of Control and Scheduling. In the context of resource harvesting in HPC (see Section 8.2.3), we have started considering the coordination of a controller regulating the injection of best-effort jobs with the OAR scheduler in the RJMS (Resource and Jobs Management System) of CiGri. This topic was presented at JLESC 2023, the 15th Workshop of the Joint Laboratory for Extreme Scale Computing 18, and has been the object of one chapter of the PhD of Quentin Guilloteau 20 (see Section 8.2.2).
This work is done in cooperation with the Spirals team at Inria Lille: Sophie Cerf, the Datamove team at Inria Grenoble: Olivier Richard, and with Gipsa-lab in Grenoble: Bogdan Robu.
We study the question of multiple-loop coordination also from the point of view of Software Architectures, generalizing from the similarities and recurring patterns appearing in use cases. In the past we have worked in the framework of software component-based approaches (JSS 1, TSE 3), involving proposals for modularity and hierarchy of autonomic discrete controllers. In another series of works, targeting the self-adaptation of reconfigurable hardware, namely DPR FPGA (TECS 2), we considered the management of a combination of mission-level and computing platform-level objectives (CBSE14 6). In other, more applicative work (ICCAC17 33) related to a rule-based middleware (COORD17 8), we proposed a design framework for reliable multiple Autonomic Loops, motivated by the management of different functionalities, at different levels of the system, and/or with different decision models. Part of the ideas emerging from that work was followed up on in the different context of Cyber-Physical Systems and the CPS4EU project, where we explore software architectures for self-adaptive middleware support for IoT and CPS. We proposed a separation of concerns between self-adaptation at the level of applications or functionality on the one side, and of infrastructure and resources on the other side (ECSA20, HICSS22 9, 34).
Recent developments were performed with application to a use case in smart grids, provided by a cooperation with RTE (see Section 8.2.5) 14 (a journal paper is under submission on this topic).
Further developments of these ideas are ongoing, in relation to the notion of computing within limits, where dynamical changes of the limits are reacted upon by reconfigurations both at the level of redeployment on the current architecture and at the level of reconfiguration of the application (e.g., in a degraded mode). This research direction is going to be explored further in the context of different projects.
Our work in reactive programming for autonomic computing systems is focused on the specification and compilation of declarative control objectives, in the form of contracts, enforced upon classical mode automata as defined in synchronous languages. The compilation involves a phase of Discrete Controller Synthesis, integrating the tool ReaX, in order to obtain imperative executable code. The programming language Heptagon/BZR (see Section 7.1.1) integrates our research results 5.
Recent work concerns a methodology for the evaluation of controllers. Discrete Controller Synthesis produces results that are correct by construction w.r.t. the formal specification, but in practice it remains to evaluate the obtained controller quantitatively, to check, e.g., whether it is not overconstrained and effectively produces the expected impact on the overall system behavior. We consider our work on self-protection (see Section 8.3.2) as a use case, evaluating the improvement of resilience of a system in the presence of attacks.
We used Heptagon/BZR as a simulation tool, to compare a program embedding a synthesized controller with a similar program either without a controller, or with a simple controller programmed manually, without the use of discrete controller synthesis. The environment (alarms from an intrusion detection system) has also been modeled in Heptagon/BZR as a Markov chain, which can be simulated with an ad hoc Heptagon library. We then measure several values for each program version: the average number of steps before the system gets to a “safe” state (a state where one remote processing unit does not work anymore because of the attacks), and the evolution in time of the average number of “programs” in “safe” mode. This evaluation by simulation confirms that the program with the synthesized controller is more efficient w.r.t. these measurements. In some specific cases, we are also able to compare the values obtained by simulation with theoretical optimal values computed from the Markov chain of the environment. A journal paper is under submission on this topic.
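The sketch below illustrates the principle of this evaluation, in Python rather than Heptagon/BZR and with an invented two-state alarm chain and health counter: the Markov-chain environment is simulated and the average number of steps before a unit goes down is compared across controllers:

```python
import random

# Illustration only (invented numbers, Python instead of Heptagon/BZR) of evaluation by
# simulation: an environment modeled as a Markov chain over alarm levels, and the
# measurement of the average number of steps before a unit stops working.
TRANSITIONS = {            # hypothetical Markov chain of the intrusion-detection alarms
    "quiet":  {"quiet": 0.9, "attack": 0.1},
    "attack": {"quiet": 0.6, "attack": 0.4},
}

def next_alarm(state):
    r, acc = random.random(), 0.0
    for nxt, p in TRANSITIONS[state].items():
        acc += p
        if r < acc:
            return nxt
    return state

def steps_until_down(controller, max_steps=10_000):
    """Run one simulation and count steps until the (abstract) unit goes down."""
    alarm, health = "quiet", 3          # hypothetical health counter of one unit
    for step in range(1, max_steps + 1):
        alarm = next_alarm(alarm)
        if alarm == "attack" and not controller(alarm):
            health -= 1                  # an unmitigated attack degrades the unit
        if health == 0:
            return step
    return max_steps

naive = lambda alarm: False                         # no reaction at all
reactive = lambda alarm: random.random() < 0.8      # stand-in for the synthesized controller

for name, ctrl in [("no controller", naive), ("controller", reactive)]:
    runs = [steps_until_down(ctrl) for _ in range(1000)]
    print(name, sum(runs) / len(runs))
```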
HPC (High-Performance Computing) systems have become increasingly variable in their behavior, in particular in aspects such as performance and power consumption, thereby encountering problems also known in the Cloud; the fact that they are becoming less predictable demands more runtime, autonomic management 11. We explore related issues along the following topics.
We explore a form of trade-off between performance and resource and energy consumption, with the aim of sustaining performance while reducing energy consumption through a Control Theory approach. The infrastructure is considered at a level close to the hardware, in that we use the RAPL (Running Average Power Limit) mechanism available in Intel processors. We exploit heterogeneity as an opportunity: as applications dynamically undergo variations in workload, due to phases or data/compute movement between devices, one can dynamically adjust power across compute elements to save energy without impacting performance. With an aim toward an autonomous and dynamic power management strategy for current and future HPC architectures, we explore the use of control theory for the design of a dynamic power regulation method, periodically monitoring application progress and choosing at runtime a suitable power cap for processors. Thanks to a preliminary offline identification process, we derive a model of the dynamics of the system and a proportional-integral (PI) controller. We evaluate our approach on top of an existing resource management framework, the Argo Node Resource Manager, deployed on several clusters of Grid'5000, using a standard memory-bound HPC benchmark.
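The following sketch shows the shape of such a PI regulation loop, with invented gains, bounds and a toy progress model; the actual implementation runs inside the Argo Node Resource Manager, actuates RAPL, and uses gains derived from the offline identification:

```python
# Shape of the PI power-capping loop described above, with invented gains and a toy
# first-order model of application progress vs. power cap (the real deployment uses
# RAPL through the Argo Node Resource Manager and gains from offline identification).
KP, KI = 20.0, 15.0          # hypothetical PI gains (watts per unit of progress error)
P_MIN, P_MAX = 40.0, 150.0   # hypothetical RAPL power-cap bounds (watts)

def measure_progress(power_cap):
    """Toy progress model: saturating benefit of extra power (stand-in for heartbeats)."""
    return min(1.0, power_cap / 120.0)

def pi_power_loop(progress_setpoint, steps=50):
    integral, power_cap = 0.0, P_MAX
    for _ in range(steps):
        progress = measure_progress(power_cap)          # monitor application progress
        error = progress_setpoint - progress
        integral += error
        power_cap = KP * error + KI * integral + P_MIN  # PI law with bias at P_MIN
        power_cap = max(P_MIN, min(P_MAX, power_cap))   # actuate within RAPL bounds
    return power_cap, measure_progress(power_cap)

print(pi_power_loop(progress_setpoint=0.9))
```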
Building upon a methodology and first results (EuroPar21 4), we improved the robustness and reusability of controllers by leveraging adaptive control. New results consist of an approach that incorporates cascaded control strategies, such as PI control and MPC (Model Predictive Control), integrated into the Argo Node Resource Manager framework 25.
A journal paper is under finalization on these topics.
Amongst perspectives, we are considering using this work as background for our starting research in WP5 of the Inria-Qarnot Computing challenge "PULSE" (see Section 9.1).
This work is done in cooperation with the Spirals team at Inria Lille: Sophie Cerf, with whom we co-advised the MSc internship of Kouds Halitim 25.
This work is also done in cooperation with Swann Perarnau (Argonne National Laboratory, Chicago, IL) in the framework of the JLESC: Joint Laboratory on Extreme Scale Computing (see Section 10.1.1).
This resource harvesting problem is found in the context of CiGri, a simple, lightweight, scalable and fault-tolerant grid system which harvests and exploits the unused resources of a set of computing clusters, by injecting best-effort jobs on top of the priority applications. We consider autonomic administration for scientific workflow management through a control-theoretical approach, maximizing usage while avoiding overload.
We propose a model described by parameters related to the key aspects of the infrastructure, thus achieving a deterministic dynamical representation that covers the diverse and time-varying behaviors of the real computing system. We studied simple forms of PI control, as well as adaptive control and an extension with model-free control. We first considered essentially the performance of harvesting itself, then integrated the problem of Distributed File Server load, which can heavily disturb priority applications. This approach was also the topic of the Master's thesis in Control Theory of Rosa Pagano 23. We performed a comparative study of the reusability of controllers when deployed on varying target platforms or subjected to varying load patterns. A journal paper is under submission on this topic.
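As a simplified illustration of this regulation problem (invented dynamics and gains, not the actual CiGri/OAR code), the sketch below adjusts the number of best-effort jobs injected per iteration from the measured cluster load, and backs off when the distributed file server becomes loaded:

```python
# Simplified illustration (invented dynamics and gains) of the harvesting regulation:
# a feedback controller decides how many best-effort jobs to inject into the cluster,
# backing off when the distributed file server gets loaded.
KP = 0.5                    # hypothetical proportional gain
LOAD_REF = 0.9              # target cluster utilization
FS_LIMIT = 0.8              # file-server load above which we stop injecting

def controller(cluster_load, fileserver_load, current_jobs):
    if fileserver_load > FS_LIMIT:
        return 0                                  # protect priority applications
    error = LOAD_REF - cluster_load
    return max(0, round(current_jobs + KP * error * 100))

# Toy closed-loop simulation standing in for CiGri, OAR and the file server.
jobs, cluster_load, fs_load = 0, 0.3, 0.1
for step in range(30):
    jobs = controller(cluster_load, fs_load, jobs)
    cluster_load = min(1.0, 0.3 + 0.005 * jobs)   # invented response of the cluster
    fs_load = min(1.0, 0.005 * jobs)              # invented response of the file server
print(jobs, cluster_load, fs_load)
```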
Another result of this activity is the design and implementation of a tutorial on Control for Computing, targeted at an audience of computer scientists with no background in Control Theory (which is the general case), and made available online: tutorial. This tutorial has been offered to the public on several occasions: sessions.
We put an emphasis on the reproducibility of experiments, for which new methodological results have been obtained (COMPAS23 17), as well as on frugality with the reduction of the cost of these experiments, with results in Simulating a Multi-Layered Grid Middleware 22 and Folding a Cluster containing a Distributed File-System 21.
This work is the object of the PhD of Quentin Guilloteau 20.
This work is done in cooperation with the Datamove team of Inria/LIG (O. Richard) and Gipsa-lab (B. Robu), and it is the topic of the PhD thesis in Computer Science of Quentin Guilloteau. This work is also done in cooperation with the Spirals team at Inria Lille: Sophie Cerf.
This research topic aims at studying the relationships between scheduling and autonomic computing techniques to manage resources for parallel computing platforms. The performance of such platforms has greatly improved (149 petaflops as of November 2019 32) at the cost of a greater complexity: the platforms now contain several million computing units. While these computation units are diverse, one also has to consider other constraints such as the amount of free memory, the available bandwidth, or the energy envelope. The variety of resources to manage builds up complexity on its own. For example, the performance of the platforms depends on the sequencing of the operations, the structure (or lack thereof) of the processed data, or the combination of applications running simultaneously.
Scheduling techniques offer great tools to study and guarantee the performance of the platforms, but they often rely on complex modeling of the platforms, and they face scaling difficulties in matching the complexity of new platforms. Autonomic computing manages the platform during runtime (online) in order to respond to the variability. This approach is structured around the concept of feedback loops. The scheduling community has studied techniques relying on autonomic notions, but it has not yet linked the two sets of notions together.
We are starting to address this topic at the general level of a state of the art of the relations between the two domains, and also at the more concrete and specific level of a real-world use case, in the context of CiGri as above. Indeed, this context features an RJMS (Resources and Jobs Management System) involving the OAR scheduler. We are therefore identifying coordination mechanisms between the previously described controller and OAR, in particular such that OAR is able to notify the controller of upcoming rises or falls of activity in priority tasks, and we are exploring how this information can be exploited by the controller, for example by adopting a feedforward approach.
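A minimal sketch of the feedforward idea, with invented signals rather than the actual OAR integration: the priority load announced by the scheduler is subtracted from the harvesting target before the feedback correction is applied:

```python
# Sketch (invented signals) of adding a feedforward term to the harvesting controller:
# OAR announces upcoming priority load, which is anticipated instead of waiting for
# the feedback loop to react after the fact.
def feedforward_controller(cluster_load, announced_priority_load, current_jobs,
                           kp=0.5, load_ref=0.9):
    # Feedforward: reserve capacity for the load announced by the scheduler.
    effective_ref = max(0.0, load_ref - announced_priority_load)
    # Feedback: correct the remaining error as before.
    error = effective_ref - cluster_load
    return max(0, round(current_jobs + kp * error * 100))

# If OAR announces a burst taking 40% of the cluster, injection backs off beforehand.
print(feedforward_controller(cluster_load=0.7, announced_priority_load=0.4,
                             current_jobs=80))
```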
This work is done in cooperation with the Datamove team of Inria/LIG (O. Richard) and Gipsa-lab (B. Robu), and it is part of the topic of the PhD thesis in Computer Science of Quentin Guilloteau 20.
Amongst perspectives of this topic, we are considering using it as background for our starting research in WP6 of the Inria-Qarnot Computing challenge "PULSE" (see Section 9.1).
The Fog/Edge computing model has become a popular approach for supporting user services, namely IoT applications. However, automated resource provisioning is needed to cope with workload changes on the fly while meeting service-level agreements. Autonomic computing offers a self-management approach that reduces system complexity and facilitates intuitive service delivery for operators and users. On this topic, we propose an elastic infrastructure solution that leverages adaptive features to handle changing service conditions, such as workload spikes, to prevent performance degradation. Our solution integrates a CPLEX-optimized constraint-based model into an autonomic control loop to react to environmental changes and improve the efficiency and agility of the system. We are currently studying the integration of dynamical aspects into the constraint models, so that the speed or acceleration of variations in the system can be taken into account in the reaction.
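The sketch below illustrates the shape of the decision made inside the loop, with a brute-force search over invented instance types standing in for the CPLEX-optimized constraint model: choose how many edge and cloud instances to provision so that capacity and latency-related constraints hold at minimal cost:

```python
# Illustration only: a brute-force search stands in for the CPLEX-optimized constraint
# model inside the autonomic loop, deciding how many service instances to provision
# on (hypothetical) edge and cloud nodes for the currently observed workload.
EDGE_CAP, CLOUD_CAP = 50, 400          # requests/s per instance (invented)
EDGE_COST, CLOUD_COST = 1.0, 3.0       # relative cost per instance (invented)
MAX_EDGE, MAX_CLOUD = 10, 5

def provision(workload_rps, edge_share_min=0.5):
    """Cheapest (edge, cloud) instance counts covering the workload, with a minimum
    share served at the edge to keep latency low."""
    best = None
    for e in range(MAX_EDGE + 1):
        for c in range(MAX_CLOUD + 1):
            capacity = e * EDGE_CAP + c * CLOUD_CAP
            if capacity < workload_rps:
                continue                                   # capacity constraint
            if e * EDGE_CAP < edge_share_min * workload_rps:
                continue                                   # latency-related constraint
            cost = e * EDGE_COST + c * CLOUD_COST
            if best is None or cost < best[0]:
                best = (cost, e, c)
    return best

# Called at each iteration of the control loop with the monitored workload.
print(provision(workload_rps=620))
```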
A perspective of this topic is to use it as background for our starting research in the Tasting project of PEPR TASE (see Section 10.2.3).
In this work we consider self-adaptation at the level of Software Architectures, targeted at the domain of Cyber-Physical Systems where Cloud-Edge infrastructures are being adopted in application domains like Smart Grids. This activity took place in the framework of the H2020 project CPS4EU.
As an applicative use case of our Software Architectures approach from Section 8.1.2, we consider Smart Grid management (HICSS'22 34), and in particular self-adaptive security in such Cloud-Edge infrastructure-based CPS. Security risk assessment is an important challenge in the design of Cyber-Physical Systems (CPS). Even more importantly, the intrinsically dynamical nature of these systems, due to changes in their environment as well as evolutions in their infrastructures, makes them self-adaptive systems, where security aspects have to be considered in terms of the management of detections and reactions for self-protection. In this work, we propose an approach to autonomously mitigate the threats in each reconfiguration at the application or infrastructure levels of CPS. We propose and implement a framework for self-adaptive security: software architecture, design method, and integration with model-based decision. We use Attack-Defense Trees for modeling threats, and our approach involves security risk assessment, taking into account its balancing and coordination with quality-of-service aspects. We formulate and formalize the online decision problem to be solved at each cycle of the self-adaptation control loop in terms of Constraint Programming (CP) modeling and resolution. The CP model implements a set of constraints that allow specifying secure configurations, evaluated with regard to their impact on system performance in order to pinpoint the most relevant one, offering a good balance between security and quality of service. We validate our approach by applying it to Smart Grids, more particularly to an industrial case study from RTE. A journal paper is under submission on this topic.
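As a toy illustration of the decision made at each cycle of the loop (invented attack-defense tree leaves, defenses and QoS penalties, and a simple enumeration standing in for the CP resolution), the sketch below selects the admissible configuration with the best balance between residual risk and quality of service:

```python
# Toy illustration of the decision made at each loop cycle: evaluate the residual risk
# of an (invented) attack-defense tree for each candidate configuration, and keep the
# admissible configuration with the best risk/QoS balance.
ATTACKS = {                      # hypothetical leaves: base likelihood and countermeasure
    "spoof_measurements": {"likelihood": 0.4, "defense": "tls", "reduction": 0.8},
    "flood_gateway":      {"likelihood": 0.3, "defense": "rate_limit", "reduction": 0.7},
}
CONFIGS = {                      # enabled defenses and their QoS penalty (invented)
    "baseline": {"defenses": set(),                   "qos": 1.00},
    "hardened": {"defenses": {"tls"},                 "qos": 0.95},
    "paranoid": {"defenses": {"tls", "rate_limit"},   "qos": 0.85},
}

def residual_risk(defenses):
    risk = 0.0
    for attack in ATTACKS.values():
        mitigated = attack["defense"] in defenses
        risk += attack["likelihood"] * (1 - attack["reduction"] if mitigated else 1.0)
    return risk

def choose(qos_min, weight=0.5):
    """Keep admissible configurations and minimize a weighted risk/QoS score."""
    candidates = [(weight * residual_risk(c["defenses"]) - (1 - weight) * c["qos"], name)
                  for name, c in CONFIGS.items() if c["qos"] >= qos_min]
    return min(candidates)[1] if candidates else None

print(choose(qos_min=0.9))   # trades residual risk against quality of service
```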
At a different level, we consider another use case from RTE, focused on the substation level, from the angle of resilience, seen under the approach of self-adaptation, and more particularly as self-protection in response to attacks on the network. The problem is to dynamically allocate and reallocate a set of control functions on a distributed computing infrastructure, with self-adaptation to variations and perturbations. We define and implement the decision model using constraint programming, to describe the space of possible configurations, as well as the constraints and objectives formalizing the operators' strategies. This model is used in simulation and implementation, calling the constraint solver at each cycle of the self-adaptation control loop. It offers design assistance and rapid prototyping to automation designers, to explore choices of solutions in requirements and strategies 14.
A perspective of this topic is to use it as background for our starting research in the Tasting project of PEPR TASE (see Section 10.2.3), also in cooperation with RTE.
This work is done in cooperation with RTE (the French electricity transmission company): Guillaume Giraud.
The Ctrl-A team is participating in the PEPR Cybersecurity research project SuperviZ. Stéphane Mocanu is the leader of the Platform work package of SuperviZ (Section 10.2.1).
First results on process-oriented sequential attack detection were obtained during Oualid Koucham's PhD and published recently in 28, together with a general alert correlation framework.
A complete intrusion detection and alert correlation framework was proposed, and a process-oriented IDS and correlator were synthesized, implemented and made available as open source online (see Section 7.2 and G-ICS). Smart-grid applications of intrusion detection and its impact on dependability were presented in 29.
We further develop these results for distributed and hierarchical systems in the PhD thesis of Estelle Hotellier. Some first results on attacks on industrial speed drives controlled via CANopen were presented in August 2021 at the local Barbhack hacking conference.
We recently extended Zeek IDS detection capabilities to CAN networks and the code will soon be freely available.
A first version was presented in 19; a full version appears in 12.
As the consequences of attacks on Industrial Control Systems may be dramatic, an important topic in ICS cybersecurity is the improvement of cyber-resilience. Reaction in case of attacks is also a crucial and sensitive topic. Our approach for both the resilience and reaction problems is based on the notion of self-protection, where self-adaptation takes the form of self-reconfiguration of the architecture. Based on a first approach developed in the PhD of Kabir-Querrec, and on experience in modelling reconfiguration with DES, we recently formalized the self-protection problems as DES control problems. A model and a formulation of the reconfiguration problem were specified in Heptagon/BZR (IFAC World Congress 2020 24).
We are currently working on a method to evaluate the effectiveness of the obtained controllers, related to Section 8.1.3.
This is the topic of the PhD thesis in Computer Science of Jolahn Vaudey.
One of our research topics is automated risk analysis, with the specification of a DSML dedicated to the automated analysis of the security of industrial control systems based on their safety properties. The idea is to extract the devices' characteristics and the flow cartography from the configuration files, and to enrich the model with a description of the network infrastructure and available security measures.
Based on public vulnerability databases, a STRIDE threat model will be automatically constructed and a list of suggested measures proposed. An initial proof of concept of automatic flow cartography based on configuration files was proposed in the PhD of Maëlle Kabir-Querrec.
Results on extending STRIDE modelling to ICS and on the automatic generation of attack scenarios were published in 16, 15.
We have a cooperation with Naval Group, around the PhD grant of Estelle Hotellier, on the topic of intrusion detection in complex Industrial Control Systems (ICSs), as described in Section 8.3.1. We are interested in process-aware attacks, i.e., attacks that target the physical integrity of systems. We consider the hybrid nature of ICSs, and our methodology applies to both event-driven and continuous dynamical systems. We aim at developing a behavioral network traffic Intrusion Detection System (IDS) based on the characterization of the ICS through security properties. To do so, we extract system safety properties from standards, device programs or system specifications and synthesize them into security patterns. These patterns are then monitored by our IDS, which is in charge of raising alerts.
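As a simplified illustration of monitoring such a security pattern (with an invented property and trace, not the Naval Group use case), the sketch below turns an ordering property extracted from specifications into a small monitor over observed events and raises alerts on violations:

```python
# Simplified illustration (invented property and traffic): a safety property such as
# "a valve may only be opened after the pump has been stopped" is turned into a small
# monitor over the observed network events.
def monitor(events):
    """Yield an alert for every event that violates the ordering property."""
    pump_running = False
    for i, ev in enumerate(events):
        if ev == "pump_start":
            pump_running = True
        elif ev == "pump_stop":
            pump_running = False
        elif ev == "valve_open" and pump_running:
            yield f"alert: valve opened while pump running (event #{i})"

trace = ["pump_start", "valve_open", "pump_stop", "valve_open"]
for alert in monitor(trace):
    print(alert)
```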
We have a cooperation with CEA, around the PhD grant of Mike da Silva, as described in Section 8.3.3. The objective of this PhD topic is to provide automatic vulnerability extraction from a security-oriented ICS architecture model. Existing modeling languages (SCL for substations and AutomationML for industrial automation) provide support for describing controller hardware and network-accessible data, but not for complete data flow and network infrastructure description, nor for vulnerabilities and their effects. We extend existing languages with support for network infrastructure modeling, including security controls and data flow description, together with vulnerability database support. We will rely on public CVE databases and an extensive study of the formal verification of industrial protocols, including support for high-availability networks. The results of the automatic processing of the architecture model are used for threat modeling, attack scenario construction, attack impact assessment and, eventually, assistance in the choice of security controls.
We have a cooperation with Qarnot Computing in the framework of the Inria challenge PULSE, with the support of Ademe, on the topic of pushing carbon-neutral services towards the edge. In particular, we are involved in WP5 on the control of emissions of intensive computation tasks, and in WP6, which we are coordinating, on the efficient hybridization of heterogeneous computing tasks.
We have a cooperation with RTE (the French electricity transmission company): Guillaume Giraud, following our recent work in the H2020 CPS4EU project. It is continuing in the new project Tasting (Section 10.2.3) of the PEPR TASE.
We participate in the JLESC, Joint Laboratory for Extreme Scale Computing, with partners Inria, the University of Illinois, Argonne National Laboratory, Barcelona Supercomputing Center, Jülich Supercomputing Centre and RIKEN AICS. We started a cooperation with Argonne National Laboratory, in the framework of a project on improving the performance and energy efficiency of HPC applications using autonomic computing techniques (see Section 8.2.1).
This work is done in cooperation with the Spirals team at Inria Lille: Sophie Cerf.
We participate in the PEPR Cybersecurity research project SuperviZ.
Stéphane Mocanu is the leader of the Platform workpackage of SuperviZ.
In the framework of the PEPR Cloud, Ctrl-A is participating in the project Taranis (Model, Deploy, Orchestrate, and Optimize Cloud Applications and Infrastructure), particularly in WP 3: Orchestration of services and resources.
We will cooperate with the teams Spirals at Inria Lille and Stack at Inria Nantes, on topics including: the integration of Control and Constraints, and of Control and Scheduling, as model and decision tools in autonomic managers; and the integration of temporal aspects in reconfiguration management.
In the framework of the PEPR TASE (Technologies Avancées des Systèmes Énergétiques), Ctrl-A is participating in the project Tasting (TrAnsformation of the energy SysTem for a better resilience and flexibility with enhanced digitalization), particularly in:
Ctrl-A participates in the ANR project Radyal (Resource-Aware DYnamically Adaptable machine Learning), in the ANR call on AI computing hardware architectures and accelerators in the context of Edge Computing, in cooperation with INSA Lyon / LIRIS (Stefan Duffner), the TARAN team, Inria / Irisa, Rennes (Marcello Traiola), and the MODUS team, UGA / GIPSA-lab, Grenoble (Bogdan Robu).
We will work on the analysis of self-adaptation/reconfiguration spaces in the dimensions of the application (DNN algorithms), the environment (applicative aspects, e.g., lighting or obstruction in image analysis), and the infrastructure and implementation configuration and deployment (involving hardware with reconfigurable precision and mapping).
Stéphane Mocanu is participating in the steering committee of RESSI (Rendez-Vous de la Recherche et de l'Enseignement de la Sécurité des Systèmes d'Information) Ressi.
Eric Rutten is participating in the steering committee of FETCH (École d'hiver Francophone sur les Technologies de Conception des Systèmes Embarqués Hétérogènes), the Winter School on Heterogeneous Embedded Systems Design Technologies, for the 2023 and 2024 editions (Fetch).
Raphaël Bleuse is PC member for IPDPS 2023.
Eric Rutten is an Associate Editor of the IEEE Control Systems Society (CSS) Technology Conferences Editorial Board (TCEB), and PC member for CCTA 23 and CCTA 24. Eric Rutten is PC member for CPS & IoT 23, PECS 22, CoDIT 23, WODES 24, as well as the ASMECC Workshop at ACSOS 2023, the PECS Workshop at Euro-Par 2023, and MSR 23.
Eric Rutten and Gwenaël Delaval are reviewers for Ifac World 23.
Raphaël Bleuse is reviewer for CoDIT 23.
Eric Rutten is co-editor for the ACM Transactions on Autonomous and Adaptive Systems (TAAS) special issue on Control for Computing Systems.
Stéphane Mocanu is reviewer for IEEE Communications Magazine, and for Electronic Research Archive.
Eric Rutten is reviewer for FGCS.
Eric Rutten was invited to the Velvet Days of the GDR GPL and the GT YODA, on deployment, reconfiguration, adaptation and DevOps, in Nantes, 13-14 Dec. 2023.
Raphaël Bleuse is member of the team organizing the LIG keynotes.
Gwenaël Delaval is elected member at the Academic Council (Conseil Académique) of University Grenoble Alpes (UGA) for the Confédération Générale du Travail trade union.
Eric Rutten is a named member of the Scientific Board (Bureau Scientifique) of LIG (Lig). He co-organised the LIG workshop of axes, WAX.
Eric Rutten had a mission as Correspondent for Scientific Relations between Inria Grenoble and CEA until June 2023.
Eric Rutten is member of the PhD dissertation committee of Charilaos Skandylas, Linnaeus University, Sweden, August 18th 2023 : Design and Analysis of Self-Protection: Adaptive Security for Software-Intensive Systems.
Eric Rutten is member of the upcoming (2024) PhD dissertation committee of Jeroen Verbakel, Eindhoven University of Technology, The Netherlands.
Eric Rutten is member of the CSI for the PhD of Paul DAOUDI (co-advised by Christophe Prieur and Bogdan Robu, Gipsa-lab).