The CIDRE team is interested in security issues that weaken machines, networks, and organizations. Our long-term ambition is to contribute to the construction of widely used systems that are trustworthy and respectful of privacy, even when parts of the system are targeted by attackers.
With this objective in mind, the CIDRE team focuses mainly on the following three topics:
In many aspects of our daily lives, we rely heavily on computer systems, many of which are based on massively interconnected devices that support a population of interacting and cooperating entities. As these systems become more open and complex, accidental and intentional failures become much more frequent and serious. We believe that the purpose of attacks against these systems is expressed at a high level (compromise of sensitive data, unavailability of services). However, these attacks are often carried out at a very low level (exploitation of vulnerabilities by malicious code, hardware attacks).
The CIDRE team is specialized in the defense of computer systems. We argue that to properly protect these systems we must have a complete understanding of the attacker's concrete capabilities. In other words, to defend properly we must understand the attack.
The CIDRE team therefore strives to maintain global expertise in information systems, from hardware to distributed architectures. Our objective is to highlight security issues and propose preventive or reactive countermeasures in widely used and privacy-friendly systems.
The natural field of application of the CIDRE team is system security. The algorithms and tools produced by the team are regularly transferred to industry through various collaborations such as CIFRE agreements, start-ups, or Inria licenses.
Aimad Berady has received the "Special Jury Prize at the Prix de la gendarmerie nationale 2023 - Research and Strategic Thinking" for his Ph.D. thesis.
This is the development associated with the CSF'23 paper, which aims at proving properties about Kôika circuits.
CSF'23: A generic framework to develop and verify security mechanisms at the microarchitectural level: application to control-flow integrity
We have released, together with publication 9, a dataset containing a red team exercise involving 13 participants. The CERBERE project is both a reproducible attack-defense exercise and a labelled dataset usable for research purposes. The attack-defense exercise first consists of an exercise for red teamers, automatically deployed with variable attack scenarios. Second, an exercise for blue teamers can be operated using the system and network logs generated during the attack phase. With this article, we provide the software to rebuild the infrastructure for red teamers. We also share a labelled dataset in which we identify the ground truth, i.e., the log lines that have been involved in the attacker's actions. The dataset contains system and network logs related to the intrusion of a red teamer attacking a small infrastructure. The originality of the dataset is that all infrastructures contain different vulnerabilities, which greatly enriches the dataset in terms of variability. The dataset is available at https://gitlab.inria.fr/cidre-public/cerbere-dataset/
To fully understand the various methodologies of cyber attacks, our study is organized around a two-fold focus. On the one hand, we are interested in providing security analysts with tools for quickly capturing the scope of an attack in progress. On the other hand, we are interested in investigating new horizons of emerging threats.
Screaming-channel attacks enable Electromagnetic (EM) Side-Channel Attacks (SCAs) at larger distances due to higher EM leakage energies than traditional SCAs, relaxing the requirement of close access to the victim. This attack can be mounted on devices integrating Radio Frequency (RF) modules on the same die as digital circuits, where the RF can unintentionally capture, modulate, amplify, and transmit the leakage along with legitimate signals. Leakage results from digital switching activity, so the hypothesis of previous works was that this leakage would appear at multiples of the digital clock frequency, i.e., harmonics. Our work 14 demonstrates that compromising signals appear not only at the harmonics and that leakage at non-harmonics can be exploited for successful attacks. Indeed, the transformations undergone by the leaked signal are complex due to propagation effects through the substrate and power and ground planes, so the leakage also appears at other frequencies. We first propose two methodologies to locate frequencies that contain leakage and demonstrate that it appears at non-harmonic frequencies. Then, our experimental results show that screaming-channel attacks at non-harmonic frequencies can be as successful as at harmonics when retrieving a 16-byte AES key. As the RF spectrum is polluted by interfering signals, we run experiments and show successful attacks in a more realistic, noisy environment where harmonic frequencies are contaminated by multi-path fading and interference. These attacks at non-harmonic frequencies increase the attack surface by providing attackers with an increased number of potential frequencies where attacks can succeed.
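To make the notion of harmonic leakage frequencies concrete, the short Python sketch below enumerates the clock harmonics that fall inside a given RF capture band; the clock, carrier, and bandwidth values are purely illustrative assumptions, not those of the experiments, and the work above shows that exploitable leakage is not limited to these candidate frequencies.

# Hypothetical helper: list the digital-clock harmonics that fall inside an RF
# capture band. All frequency values below are illustrative assumptions.
def harmonics_in_band(clock_hz, band_center_hz, band_width_hz):
    lo = band_center_hz - band_width_hz / 2
    hi = band_center_hz + band_width_hz / 2
    k_min = int(lo // clock_hz) + 1
    k_max = int(hi // clock_hz)
    return [k * clock_hz for k in range(k_min, k_max + 1)]

# Assumed 64 MHz digital clock, 2.4 GHz RF carrier, 100 MHz capture bandwidth
print(harmonics_in_band(64e6, 2.4e9, 100e6))  # candidate harmonic leakage frequencies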
On-board payload data processing can be performed by developing space-qualified heterogeneous Multiprocessor System-on-Chips (MPSoCs). We present in 19 key compute-intensive payload algorithms, based on a survey with space science researchers, including the two-dimensional Fast Fourier Transform (2-D FFT). We also propose to perform design space exploration by combining the roofline performance model with High-Level Synthesis (HLS) for hardware accelerator architecture design. The roofline model visualizes the limits of a given architecture regarding Input/Output (I/O) bandwidth and computational performance, along with the achieved performance for different implementations. HLS is an interesting option for developing FPGA-based onboard processing applications for payload teams that need to adjust architecture specifications through design reviews and have limited expertise in Hardware Description Languages (HDLs). In this paper, we focus on an FPGA-based MPSoC thanks to recently released radiation-hardened heterogeneous embedded platforms.
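For readers unfamiliar with the roofline model, the minimal sketch below shows how attainable performance is bounded by the minimum of the compute peak and the product of memory bandwidth and operational intensity; the peak-performance and bandwidth figures are placeholder assumptions, not measurements from the paper.

# Roofline model sketch: attainable performance is capped either by the compute
# peak or by memory bandwidth times operational intensity (FLOP per byte).
# The numbers below are placeholder assumptions, not results from the paper.
def roofline_attainable_gflops(peak_gflops, bandwidth_gbs, operational_intensity):
    return min(peak_gflops, bandwidth_gbs * operational_intensity)

print(roofline_attainable_gflops(100.0, 10.0, 0.5))   # 5.0: memory-bound kernel
print(roofline_attainable_gflops(100.0, 10.0, 20.0))  # 100.0: compute-bound kernel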
Analyzing Android applications is essential to review proprietary code and to understand malware behaviors. However, Android applications use obfuscation techniques to slow down this process. These obfuscation techniques are increasingly based on native code. In 6, we propose OATs'inside, a new analysis tool that focuses on high-level behaviors to circumvent native obfuscation techniques transparently. The targeted high-level behaviors are object-level behaviors, i.e., actions performed on Java objects (e.g., field accesses, method calls), regardless of whether these actions are performed using Java or native code. Our system uses a hybrid approach based on dynamic monitoring and trace-based symbolic execution to output control flow graphs (CFGs) for each method of the analyzed application. CFGs are composed of Java-like actions enriched with condition expressions and dataflows between actions, giving an understandable representation of any code, even code that is fully native. OATs'inside spares users the need to dive into low-level instructions, which are difficult to reverse engineer. We extensively compare OATs'inside functionalities against state-of-the-art tools to highlight the benefits of observing native operations. Our experiments are conducted on a real smartphone: we discuss the performance impact of OATs'inside, and we demonstrate its practical use on applications containing anti-debugging techniques provided by the OWASP foundation. We also evaluate the robustness of OATs'inside using unit tests obfuscated with the Tigress obfuscator.
Malware analysis consists of studying a sample of suspicious code to understand it and producing a representation or explanation of this code that can be used by a human expert or a clustering/classification/detection tool. The analysis can be static (only the code is studied) or dynamic (only the interaction between the code and its host during one or more executions is studied). The quality of the interpretation of a code and its later detection depends on the quality of the information contained in this representation. To date, many analyses produce voluminous reports that are difficult to handle quickly. In 23, we present BAGUETTE, a graph-based representation of the interactions of a sample and the resources offered by the host system during one execution. We explain how BAGUETTE helps automatically search for specific behaviors in a malware database and how it efficiently assists the expert in analyzing samples.
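As a rough illustration of this kind of interaction graph, one can picture a directed graph linking the analyzed process to host resources and query it for behaviors; the node names, actions, and query in the Python sketch below are invented for the example and do not reflect the actual BAGUETTE format.

# Illustrative sketch only: a directed graph of interactions between a sample
# and host resources, queried for a simple behavior. Not the BAGUETTE format.
from collections import defaultdict

class InteractionGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # subject -> [(action, resource), ...]

    def add_interaction(self, subject, action, resource):
        self.edges[subject].append((action, resource))

    def subjects_with_actions(self, required_actions):
        """Return subjects whose interactions include all required actions."""
        return [s for s, out in self.edges.items()
                if set(required_actions) <= {a for a, _ in out}]

g = InteractionGraph()
g.add_interaction("sample.exe", "opens", "C:\\Users\\victim\\report.docx")
g.add_interaction("sample.exe", "writes", "C:\\Users\\victim\\report.docx.enc")
g.add_interaction("sample.exe", "connects", "198.51.100.7:443")
print(g.subjects_with_actions(["opens", "writes"]))  # ransomware-like behavior query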
Today, the classification of a file as either benign or malicious is performed by a combination of deterministic indicators (such as antivirus rules), machine learning classifiers, and, more importantly, the judgment of human experts. However, to compare the difference between human and machine intelligence in malware analysis, it is first necessary to understand how human subjects approach malware classification. In this direction, we present in 7 the first experimental study designed to capture which ‘features’ of a suspicious program (e.g., static properties or runtime behaviors) are prioritized for malware classification according to human and machine intelligence. For this purpose, we created a malware classification game where 110 human players worldwide and with different seniority levels (72 novices and 38 experts) have competed to classify the highest number of unknown samples based on detailed sandbox reports. Surprisingly, we discovered that both experts and novices base their decisions on approximately the same features, even if there are clear differences between the two expertise classes. Furthermore, we implemented two state-of-the-art Machine Learning models for malware classification and evaluated their performances on the same set of samples. The comparative analysis of the results unveiled a common set of features preferred by both Machine Learning models and helped better understand the differences in feature extraction. This work reflects the difference in the decision-making process of humans and computer algorithms and the different ways they extract information from the same data. Its findings serve multiple purposes, from training better malware analysts to improving feature encoding.
Many studies have proposed machine-learning (ML) models for malware detection and classification, reporting an almost-perfect performance. However, they assemble ground-truth in different ways, use diverse static- and dynamic-analysis techniques for feature extraction, and even differ on what they consider a malware family. As a consequence, our community still lacks an understanding of malware classification results: whether they are tied to the nature and distribution of the collected dataset, to what extent the number of families and samples in the training dataset influence performance, and how well static and dynamic features complement each other. The article 12 sheds light on those open questions by investigating the impact of datasets, features, and classifiers on ML-based malware detection and classification. For this, we collect the largest balanced malware dataset so far with 67k samples from 670 families (100 samples each), and train state-of-the-art models for malware detection and family classification using our dataset. Our results reveal that static features perform better than dynamic features, and that combining both only provides marginal improvement over static features. We discover no correlation between packing and classification accuracy, and that missing behaviors in dynamically-extracted features highly penalise their performance. We also demonstrate how a larger number of families to classify makes the classification harder, while a higher number of samples per family increases accuracy. Finally, we find that models trained on a uniform distribution of samples per family better generalize on unseen data.
Federated learning (FL) enables multiple parties to collaboratively train a machine learning model without sharing their data; rather, they train their own model locally and send updates to a central server for aggregation. Depending on how the data is distributed among the participants, FL can be classified into Horizontal (HFL) and Vertical (VFL). In VFL, the participants share the same set of training instances but only host a different and non-overlapping subset of the whole feature space. In HFL, by contrast, each participant shares the same set of features, while the training set is split into locally owned subsets. VFL is increasingly used in applications like financial fraud detection; nonetheless, very little work has analyzed its security. In 20, we focus on robustness in VFL, in particular, on backdoor attacks, whereby an adversary attempts to manipulate the aggregate model during the training process to trigger misclassifications. Performing backdoor attacks in VFL is more challenging than in HFL, as the adversary i) does not have access to the labels during training and ii) cannot change the labels as she only has access to the feature embeddings. We present a first-of-its-kind clean-label backdoor attack in VFL, which consists of two phases: a label inference and a backdoor phase. We demonstrate the effectiveness of the attack on three different datasets, investigate the factors involved in its success, and discuss countermeasures to mitigate its impact.
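To make the two partitioning schemes concrete, the toy Python example below (synthetic data, unrelated to the datasets of the paper; the party roles and sizes are assumptions) splits the same training matrix horizontally by instances for HFL and vertically by features for VFL, with only the active party holding the labels in the vertical case.

# Toy illustration of horizontal vs vertical data partitioning in FL
# (synthetic data; the party roles and sizes are assumptions for the example).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))      # 6 training instances, 4 features
y = rng.integers(0, 2, size=6)   # binary labels

# HFL: each party holds different instances but the full feature set (and labels)
hfl_party_a = (X[:3], y[:3])
hfl_party_b = (X[3:], y[3:])

# VFL: each party holds all instances but a disjoint feature subset;
# only the "active" party also holds the labels
vfl_passive_party = X[:, :2]
vfl_active_party = (X[:, 2:], y)

print(hfl_party_a[0].shape, vfl_passive_party.shape)  # (3, 4) vs (6, 2)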
When an attacker targets a system, he aims to remain undetected as long as possible. He must therefore avoid performing actions that are characteristic of an identified malicious behavior. One way to avoid detection is to only perform actions on the system that appear legitimate. That is, actions that are allowed because of the system configuration or actions that are possible by diverting the use of legitimate services. In 21, we present and experiment with AWARE (Attacks in Windows Architectures REvealed), a defensive tool able to query a Windows system and build a directed graph highlighting possible stealthy attack paths that an attacker could use during the propagation phase of an attack campaign. These attack paths only rely on legitimate system actions and the use of Living-Off-The-Land binaries. AWARE also proposes a range of corrective measures to prevent these attack paths.
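A minimal sketch of the underlying idea, assuming an invented graph of hosts and legitimate actions (the real tool queries the actual Windows configuration), is to enumerate paths made only of allowed actions between a foothold and a target:

# Minimal sketch, with an invented example graph: enumerate stealthy paths made
# only of legitimate actions between a foothold and a target asset.
from collections import deque

legitimate_actions = {
    "user_workstation": [("scheduled_task", "helpdesk_account")],
    "helpdesk_account": [("certutil_download", "staging_share"),
                         ("remote_service", "file_server")],
    "staging_share":    [("dll_sideloading", "file_server")],
    "file_server":      [],
}

def attack_paths(start, target):
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if node == target:
            yield path
            continue
        for action, nxt in legitimate_actions.get(node, []):
            if nxt not in [n for _, n in path]:  # avoid revisiting nodes
                queue.append((nxt, path + [(action, nxt)]))

for path in attack_paths("user_workstation", "file_server"):
    print(" -> ".join(f"{action}:{node}" for action, node in path))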
In cybersecurity, CVEs (Common Vulnerabilities and Exposures) are publicly disclosed hardware or software vulnerabilities. These vulnerabilities are documented and listed in the NVD database maintained by the NIST. Knowledge of the CVEs impacting an information system provides a measure of its level of security. In 22 we point out that these vulnerabilities should be described in greater detail to understand how they could be chained together in a complete attack scenario. This article presents the first proposal for the CAPG format, which is a method for representing a CVE vulnerability, a corresponding exploit, and associated attack positions.
The use of machine learning for anomaly detection in cyber security-critical applications, such as intrusion detection systems, has been hindered by the lack of explainability. Without understanding the reason behind anomaly alerts, it is too expensive or impossible for human analysts to verify and identify cyber-attacks. Our research addresses this challenge and focuses on unsupervised network intrusion detection, where only benign network traffic is available for training the detection model. In 18, we propose a novel post-hoc explanation method, called AE-pvalues, which is based on the p-values of the reconstruction errors produced by an Auto-Encoder-based anomaly detection method. Our work identifies the most informative network traffic features associated with an anomaly alert, providing interpretations for the generated alerts. We conduct an empirical study using a large-scale network intrusion dataset, CICIDS2017, to compare the proposed AE-pvalues method with two state-of-the-art baselines applied in the unsupervised anomaly detection task. Our experimental results show that the AE-pvalues method accurately identifies abnormal influential network traffic features. Furthermore, our study demonstrates that the explanation outputs can help identify different types of network attacks in the detected anomalies, enabling human security analysts to understand the root cause of the anomalies and take prompt action to strengthen security measures.
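A simplified sketch of the underlying idea (not the authors' exact implementation, and run here on synthetic data) is shown below: for a flagged sample, each feature's reconstruction error is turned into an empirical p-value with respect to the errors observed on benign training traffic, and the features with the smallest p-values are reported as the explanation.

# Simplified sketch of the AE-pvalues idea: rank the features of a flagged
# sample by how unusual their reconstruction errors are compared with the
# empirical error distribution on benign data. Data below is synthetic.
import numpy as np

def per_feature_pvalues(benign_errors, sample_errors):
    """benign_errors: (n_benign, n_features) array of per-feature errors on
    benign data; sample_errors: (n_features,) errors of the flagged sample."""
    n = benign_errors.shape[0]
    return (np.sum(benign_errors >= sample_errors, axis=0) + 1) / (n + 1)

rng = np.random.default_rng(0)
benign = np.abs(rng.normal(0.0, 0.1, size=(1000, 5)))  # toy benign errors
alert = np.array([0.05, 0.08, 0.90, 0.07, 0.06])       # feature 2 poorly reconstructed
pvals = per_feature_pvalues(benign, alert)
print(np.argsort(pvals)[:2])  # indices of the most anomalous features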
In recent years, the disclosure of several significant security vulnerabilities has revealed the trust put in some presumed security properties of commonplace hardware to be misplaced. We propose to design hardware systems with security mechanisms, together with a formal statement of the security properties obtained, and a machine-checked proof that the hardware security mechanisms indeed implement the sought-for security property. Formally proving security properties about hardware systems might seem prohibitively complex and expensive. In 8, we tackle this concern by designing a realistic and accessible methodology on top of the Kôika Hardware Description Language 27 for specifying and proving security properties during hardware development. Our methodology is centered around a verified compiler from high-level Kôika models, which are inefficient to work with directly, to an equivalent lower-level representation where side effects are made explicit and reasoning is convenient. We apply this methodology to a concrete example: the formal specification and implementation of a shadow stack mechanism on an RV32I processor. We prove that this security mechanism is correct, i.e., any illegal modification of a return address does indeed result in the termination of the whole system. Furthermore, we show that this modification of the processor does not impact its behaviour in other, unexpected ways.
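The shadow stack mechanism itself can be summarized by the following conceptual sketch, written in Python for readability and therefore very far from the verified Kôika/RV32I implementation: every call pushes the return address onto a protected copy of the stack, and every return checks that the two copies still agree.

# Conceptual sketch of a shadow stack, unrelated to the verified hardware design:
# a protected copy of each return address is kept and checked on every return.
class ShadowStackViolation(Exception):
    pass

class ToyCpu:
    def __init__(self):
        self.call_stack = []    # architecturally visible, may be corrupted
        self.shadow_stack = []  # protected copy, not writable by software

    def call(self, return_address):
        self.call_stack.append(return_address)
        self.shadow_stack.append(return_address)

    def ret(self):
        addr = self.call_stack.pop()
        if addr != self.shadow_stack.pop():
            raise ShadowStackViolation("return address was tampered with")
        return addr

cpu = ToyCpu()
cpu.call(0x1000)
cpu.call_stack[-1] = 0xdead  # simulated corruption of the return address
try:
    cpu.ret()
except ShadowStackViolation as err:
    print("halt:", err)      # the illegal modification is detected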
The recent rise of interest in distributed applications has highlighted the importance of effective information dissemination. The challenge lies in the fact that nodes in a distributed system are not necessarily synchronized, and may fail at any time. This has led to the emergence of randomized rumor spreading protocols, such as push and pull protocols, which have been studied extensively.
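As a reminder of how these protocols operate, the toy simulation below runs push-pull rumor spreading until every node is informed; the synchronous rounds and complete graph are an illustrative setting only, unrelated to the team's specific results.

# Toy synchronous push-pull rumor spreading on a complete graph (illustrative
# setting; the team's results concern more general and realistic models).
import random

def push_pull_rounds(n, seed=0):
    random.seed(seed)
    informed = {0}               # a single node initially knows the rumor
    rounds = 0
    while len(informed) < n:
        newly_informed = set(informed)
        for u in range(n):
            v = random.randrange(n)      # each node contacts a uniform random peer
            if u in informed:            # push: u forwards the rumor to v
                newly_informed.add(v)
            elif v in informed:          # pull: u learns the rumor from v
                newly_informed.add(u)
        informed = newly_informed
        rounds += 1
    return rounds                        # typically O(log n) rounds

print(push_pull_rounds(1000))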
Contrary to intuition, insecure computer network architectures are valuable assets in IT security. Indeed, such architectures (referred to as cyber-ranges) are commonly used to train red teams and test security solutions, in particular those related to security supervision. Unfortunately, the design and deployment of these cyber-ranges is costly, as they require designing an attack scenario from scratch and then implementing it in an architecture on a case-by-case basis, through manual choices of machines/users, OS versions, available services and configuration choices. The article 10 presents URSID, a framework for automatic deployment of cyber-ranges based on the formal description of attack scenarios. The scenario is described at the technical attack level according to the MITRE nomenclature, refined into several variations (instances) at the procedural level, and then deployed in multiple virtual architectures. URSID thus automates costly manual tasks and makes it possible to run several instances of the same scenario on architectures with different OS, software, or account configurations. URSID has been successfully tested in an academic cyber attack and defense training exercise as detailed in Section 10.3.2.
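To give an idea of what such a formal scenario description may look like, here is a hypothetical sketch in which each attack step is a MITRE technique refined into several procedural variants; the field names are invented for the example and do not correspond to URSID's actual input format, which is defined in the article.

# Hypothetical scenario sketch in the spirit of URSID; field names are invented
# for this example and do not correspond to the tool's actual input format.
scenario = {
    "name": "initial_access_to_file_server",
    "steps": [
        {"technique": "T1566.001",   # MITRE ATT&CK: spearphishing attachment
         "procedures": ["macro_dropper", "lnk_dropper"]},
        {"technique": "T1078",       # valid accounts
         "procedures": ["reused_local_admin_password"]},
        {"technique": "T1021.002",   # SMB/Windows admin shares
         "procedures": ["psexec_like_lateral_movement"]},
    ],
}

# Each deployment would pick one procedure per step and instantiate it on a
# randomized architecture (OS versions, accounts, services), yielding distinct
# instances of the same high-level scenario.
print(len(scenario["steps"]), "attack steps")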
The long-term feasibility of blockchain technology is hindered by the inability of existing blockchain protocols to prune the consensus data, leading to constantly growing storage and communication requirements. Kiayias et al. have proposed Non-Interactive Proofs of Proof-of-Work (NIPoPoWs) as a mechanism to reduce the storage and communication complexity of blockchains to O(poly log(n)). However, their protocol is only resilient to an adversary that may control strictly less than a third of the total computational power, which is a reduction from the security guaranteed by Bitcoin and other existing Proof-of-Work-based blockchains. In 15, we present an improvement to the Kiayias et al. proposal, which is resilient against an adversary that may control less than half of the total computational power while operating in O(poly log(n)) storage and communication complexity. Additionally, we present a novel proof that establishes a lower bound of O(log(n)) on the storage and communication complexity of any PoW-based blockchain protocol.
DGA (2021-2024)
Vincent Raulin's PhD focuses on using machine learning approaches to boost malware detection and classification based on dynamic analysis traces, by extracting feature representations with the knowledge of malware analysis experts. This representation aims at capturing the semantics of the program (i.e., what resources it accesses, what operations it performs on them) in a platform-independent fashion, by replacing implementation particularities (e.g., system call number 2) with higher-level operations (e.g., opening a file). This representation could notably provide semantic explanations of malware activity and enable explainable malware detection and malware family classification.
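A minimal sketch of this lifting step, assuming an invented mapping table (the thesis targets a much richer representation), could look as follows:

# Minimal sketch, with an invented mapping: lift platform-specific system-call
# identifiers to platform-independent high-level operations.
SYSCALL_TO_OPERATION = {
    ("linux", 2): "open_file",            # open(2) on x86-64 Linux
    ("linux", 257): "open_file",          # openat(2)
    ("linux", 42): "connect_socket",      # connect(2)
    ("windows", "NtCreateFile"): "open_file",
}

def lift_trace(platform, raw_calls):
    return [SYSCALL_TO_OPERATION.get((platform, call), "unknown_operation")
            for call in raw_calls]

print(lift_trace("linux", [2, 42]))  # ['open_file', 'connect_socket']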
AMOSSYS:
Manuel Poisson has started a thesis in collaboration with Amossys. He is interested in identifying operational attack scenarios in an information system.
ANSSI:
Matthieu Baty started his PhD in October 2020 in the context of a collaboration between Inria and the ANSSI. In this project, we want to formally specify hardware-based security mechanisms of a RISC-V processor and prove that they satisfy a well-defined security policy. In particular, we would like to use the Coq proof assistant to formally specify and verify the processor. Our goal is also to extract an HDL description of that certified processor, which could be used to synthesize the processor on an FPGA board.
ANSSI:
Lucas Aubard started his PhD in October 2022 in the context of a collaboration between Inria and the ANSSI. The objective of this thesis is to improve the existing knowledge on reassembly policies, to design mechanisms to automate IDS configuration and to improve the application of these policies within IDS/IPS to increase their detection capabilities in specific contexts such as cloud computing.
DGA:
Pierre-Victor Besson has been financed by a DGA-PEC grant since October 2020. He works on the automatic generation of attack scenarios to design deceptive honeynets.
DGA:
Fanny Dijoud's PhD thesis has been funded by a DGA-PEC grant since November 2023. She works on system and network supervision through AI-based methods.
Malizen:
Romain Brisse's PhD thesis has been financed since January 2021 by Malizen, an Inria start-up stemming from the CIDRE team. During 2023, Romain developed a new recommendation system based on the recorded actions of blue teamers.
Hackuity:
Natan Talon started his PhD in October 2021 in the context of a collaboration with the company Hackuity. The main objective of this thesis is to be able to assess whether an information system is likely to be vulnerable to an attack. This attack may have been observed in the past or inferred automatically from other attacks.
DGA:
Maxime Lanvin has been financed by the DGA through the Pôle d’Excellence Cyber (PEC) since October 2021. Maxime works on behavioral intrusion detection based on machine learning techniques. His work focuses on the analysis of time series to detect APT attacks.
DGA:
Adrien Schoen has been financed by the DGA through the Pôle d’Excellence Cyber (PEC) since October 2021. Adrien works on the generation of synthetic network datasets to better evaluate intrusion detection systems. This work is based on various deep learning models such as generative adversarial networks and variational auto-encoders.
DGA:
Helene Orsini's PhD thesis has been financed by the DGA since October 2021. Her thesis project focuses on adversarially robust and interpretable machine learning pipelines for network intrusion detection systems. She will study how to automate the feature engineering phase to extract informative features from non-structured, categorical, and imperfect security reports and logs. Furthermore, she will investigate how to make the machine learning pipeline resilient to intentional evasion techniques in network intrusion behaviors.
In 2023, we started the associated team "SecGen" with two professors at CISPA, Jilles Vreeken and Mario Fritz, on the subject of network traffic generation and network anomaly detection. Machine learning has been successfully applied to intrusion detection, but it needs training data. This training data generally comes from datasets, but their diversity is questionable, and their aging is problematic. Synthetic data generation is a solution to these problems. In the context of SecGen, we hosted a PhD student from CISPA, Joscha Cuppers, for 2 months, and a PhD student of CIDRE, Adrien Schoen, went to CISPA in 2023 for 2 months.
Adrien Schoen stayed at CISPA from October 16th, 2023 to December 15th, 2023 in the team of Jilles Vreeken to work on the topic of generating temporal sequences of network flows. During this stay, he worked with Joscha Cüppers, a PhD student at CISPA. This visit has led to interesting scientific results that will be submitted to an international venue in 2024.
PEPR DefMal is a collaborative ANR project involving CentraleSupélec, Rennes University, Lorraine University, Sorbonne Paris Nord University, CEA, CNRS, Inria and Eurecom. Malware is affecting government systems, critical infrastructures, businesses, and citizens alike, and regularly makes headlines in the press. Malware extorts money (ransomware), steals data (banking, medical), destroys information systems, or disrupts the operation of industrial systems. The fight against malware is a national and European security issue that requires scientific advances to design new responses and anticipate future attack methods. The aim of the DefMal project is to study malicious programs, whether they are malware, ransomware, botnets, etc. The first objective is to develop new approaches to analyze malicious programs. This objective covers the three aspects of the fight against malware: (i) understanding, (ii) detection, and (iii) forensics. The second objective of the project is the global understanding of the malware ecosystem (modes of organization, diffusion, etc.) in an interdisciplinary approach involving all the actors concerned.
The security assessment of digital systems relies on compliance and vulnerability analyses to provide recognized cybersecurity assurances. The SECUREVAL project of PEPR Cybersecurity aims to design new tools around new digital technologies to verify the absence of hardware and software vulnerabilities and achieve the required compliance proofs. These developments are based on a double approach, first theoretical and founded on the French school of symbolic reasoning, then applied and anchored in the practice of tool development and security assessment techniques. In addition, by exploring new techniques for security assessments, this project will also allow France to remain among the world leaders in assessment capabilities by anticipating the evolution of international certification schemes. Within this project's framework, our contribution concerns tasks 4.4 Formal analysis and models at the software-hardware boundary (led by Guillaume Hiet) and 3.2 Vulnerability analysis tools in binary codes (led by Frédéric Tronel). Two Ph.D. students and one postdoc funded by this project will start between 2023 and 2025.
PEPR SuperviZ is a collaborative ANR project involving CentraleSupélec, Eurecom, Institut Mines-Télécom, Institut Polytechnique de Grenoble, Rennes University, Lorraine University, CEA, CNRS and Inria. The digitalization of all infrastructures makes it almost impossible today to secure all systems a priori, as it is too complex and too expensive. Supervision seeks to reinforce preventive security mechanisms and to compensate for their inadequacies. Supervision is fundamental in the general context of enterprise systems and networks, and is just as important for the security of cyber-physical systems. Indeed, with the ever-growing number of connections between objects, the attack surface of systems has become frighteningly wide. This makes security even more difficult to implement. The increase in the number of components to be monitored, as well as the growing heterogeneity of the capacity of these objects in terms of communication, storage and computation, makes security supervision more complex.
PEPR REV is a project about vulnerability research and exploitation. A notable characteristic of complex targets is that they can generally no longer be attacked using a single technique or exploiting a single vulnerability, due to the deployment of numerous protections. For this reason, the REV project tackles this problem at multiple levels by addressing all layers: hardware, software, and communication interfaces (web and IoT). To this end, one of the project's objectives is to combine several tools and approaches simultaneously: for example, memory analysis will benefit from advances in hardware attacks, and will be used to develop exploits. This broad-spectrum analysis is fundamental today: as an illustration, hardware attacks can be combined with software attacks, and software attacks can be based on weaknesses in the micro-architecture or require advanced network interactions. Moreover, the impact of attacks and exploits nowadays goes far beyond malicious use, allowing for instance the forensic investigation of complex systems such as smartphones. The question also arises from an ethical and legal point of view, and this is a major societal issue: to what extent may these techniques be used, in particular by law enforcement? When should the corresponding vulnerabilities be corrected ("responsible disclosure") or exploited, and within what legal framework?
Byblos is a collaborative ANR project involving Rennes University and IRISA (CIDRE and WIDE research teams), Nantes University (GDD research team), and INSA Lyon, LIRIS (DRIM research team). This project aims at overcoming performance and scalability issues of blockchains, which are inherent to the total order that blockchain algorithms seek to achieve in their operations, which in turn implies a Byzantine-tolerant agreement. To overcome these limitations, this project aims at taking a step aside, and exploiting the fact that many applications – including cryptocurrencies – do not require full Byzantine agreement, and can be implemented with much lighter, and hence more scalable and efficient, guarantees. This project further argues that these novel Byzantine-tolerant applications have the potential to power large-scale multi-user online systems, and that in addition to Byzantine Fault Tolerance, these systems should also provide strong privacy protection mechanisms, designed from the ground up to exploit implicit synergies with Byzantine mechanisms.
BC4SSI is a JCJC ANR project led by Romaric Ludinard (SOTERN), involving the SOTERN and CIDRE research teams. Self-sovereign identities (SSI) are digital identities that are managed in a decentralized manner. This technology allows users to self-manage their digital identities without depending on third-party providers to store and centrally manage the data, including the creation of new identities. Implementing SSI requires a lot of care since identities are more than simple identifiers: they need to be checked by the service provider via, for instance, verifiable claims. Such requirements make blockchain technology a prime candidate for deploying SSI and storing verifiable claims. BC4SSI aims at studying the weakest synchrony assumptions enabling SSI deployment in a public Blockchain. Among the different existing challenges, BC4SSI will address the following: alternatives to PoW security proofs, lightweight replication, scalability, and energy consumption.
Priceless is a collaborative CominLabs project involving Rennes University with IRISA (CIDRE and WIDE research teams), and IODE (Institut de l'ouest: droit et Europe), and Nantes University (GDD research team). Promoters of blockchain-based systems such as cryptocurrencies have often advocated for the anonymity they provide as a pledge of privacy protection, and blockchains have consequently been envisioned as a way to safely and securely store data. Unfortunately, the decentralized, fully-replicated and unalterable nature of the blockchain clashes with both French and European legal requirements on the storage of personal data, on several aspects such as the right of rectification and the preservation of consent. This project aims to establish a cross-disciplinary partnership between Computer Science and Law researchers to understand and address the legal and technical challenges associated with data storage in a blockchain context.
In the ANR TrustGW project, we consider a system composed of IoT objects connected to a gateway. This gateway is, in turn, connected to one or more cloud servers. The architecture of the gateway, which is at the heart of the project, is heterogeneous (software-hardware), composed of a baseband processor, an application processor, and hardware accelerators implemented on an FPGA. A hypervisor allows sharing these resources and allocating them to different virtual machines. TrustGW is a collaborative project between the ARCAD team from Lab-STICC, the ASIC team from IETR, and the CIDRE team from IRISA. The project addresses three main challenges: (1) to define a heterogeneous, dynamically configurable and trusted gateway architecture, (2) to propose a trusted hypervisor allowing the deployment of virtual machines on a heterogeneous software-hardware architecture with virtualization of all resources, and (3) to secure the applications running on the gateway. Within this project's framework, the CIDRE team's contribution focuses mainly on the last challenge, particularly through the PhD of Lionel Hemmerlé (2022-2025). Guillaume Hiet is the director of this PhD, co-supervised by Frédéric Tronel, Pierre Wilke and Jean-Christophe Prévotet. We will also explore hardware-assisted Dynamic Information Flow Tracking approaches for hybrid applications, which offload part of their computation to an FPGA.
SCRATCHS is a collaboration between researchers in the fields of formal methods (EPICURE, Inria Rennes), security (CIDRE, CentraleSupélec Rennes), and hardware design (Lab-STICC). Our goal is to co-design a RISC-V processor and a compiler toolchain to ensure by construction that a security-sensitive code is immune to timing side-channel attacks while running at maximal speed. We claim that a co-design is essential for end-to-end security: cooperation between the compiler and hardware is necessary to avoid time leaks due to the micro-architecture with minimal overhead. In the context of this project, Guillaume Hiet is the director of the Ph.D. of Jean-Loup Houdot, co-supervised by Pierre Wilke and Frederic Besson, on security-enhancing compilation against side-channel attacks.
Anatolii Khalin started in November as a post-doctoral researcher in the team, co-supervised with the AUT team from IETR. His work focuses on detecting cyberattacks that could target a cyber-physical system. In particular, smart buildings taking autonomous decisions about energy production and consumption could be the target of an attacker. We plan to design new estimators used to predict the different physical measures of a smart building. These estimators could be used to raise alerts when a deviation from the expected prediction is detected, for example, because of a compromised device in the building.
Guillaume Hiet was the General Chair of the SILM 2023 workshop, co-located with IEEE Euro S&P.
Ludovic Mé was a member of the organizing committee of JSI 2023 (Journées Scientifiques Inria, Bordeaux, August 30th to September 1st) and of the 8th Franco-Japanese Cybersecurity Workshop (Bordeaux, November 29th to December 1st, 2023). He also served on the steering committee of RESSI (Rendez-Vous de la Recherche et de l'Enseignement de la Sécurité des Systèmes d'Information).
Michel Hurfin served as reviewer for the conference Sirocco 2023.
Jean-Francois Lalande was a member of the editorial board of the IARIA International Journal on Advances in Security.
Ludovic Mé was a panelist for a round table organized by EDIH Bretagne and dedicated to the role of research in such a program (November 22nd, 2023).
Ludovic Mé gave an invited talk on offensive aspects of AI at the CESIN congress (December 6th, 2023).
Pierre-François Gimenez was a panelist for a round table at the event "La Cyber au rendez-vous de l’IA de confiance" organized by the PTCC at Campus Cyber (June 20th, 2023).
Ludovic Mé serves:
Guillaume Hiet is the co-chair of the Systems, Software and Network Security working group of the GDR Sécurité Informatique.
Jean-Francois Lalande was a reviewer for the PhD grants of Normandie University.
Valérie Viet Triem Tong was vice-president of the ANR project evaluation committee Specific Topics in Artificial Intelligence (TSIA) CyberSecurity.
Valérie Viet Triem Tong chaired the recruitment committee (selection and audition) for the Nancy researchers' recruitment process (CRCN and ISFP).
Several team members are involved in initial and continuing education at CentraleSupélec, a French institute of research and higher education in engineering and science, and at ESIR (Ecole Supérieure d'Ingénieur de Rennes), the graduate engineering school of the University of Rennes 1.
In these institutions,
The teaching duties are summed up in table 1.
Ludovic Mé was a member of the PhD committee for the following PhD theses:
Jean-Francois Lalande was
Guillaume Hiet was
Valérie Viet Triem Tong was
Pierre-François Gimenez was
Many scientific talks are published on the CIDRE team's YouTube channel. Most of them are recordings from the biweekly CIDRE seminars organized by Pierre-François Gimenez. In 2023, the channel reached 121 subscribers and 48 videos were published, with about 5,812 views.
Valérie Viet Triem Tong and Jean-Louis Lanet (a previous member of CIDRE, now retired) published in 2023 an article "Virus numériques" in La Recherche, a monthly French-language popular science magazine.