The Spades project-team aims to contribute to meeting the challenge of
designing and programming dependable embedded systems in an
increasingly distributed and dynamic context. Specifically, by
exploiting formal methods and techniques, Spades aims to answer three
key questions:
These questions are not new, but answering them in the context
of modern embedded systems, which are increasingly distributed, open
and dynamic in nature 59, makes them more
pressing and more difficult to address: the targeted system properties
– dynamic modularity, time-predictability, energy efficiency, and
fault-tolerance – are largely antagonistic (e.g., having a highly
dynamic software structure is at variance with ensuring that resource
and behavioral constraints are met). Tackling these questions
together is crucial to address this antagonism, and constitutes a key
point of the Spades research program.
A few remarks are in order:
The SPADES research program is organized around three main themes:
Design and programming models, Certified real-time programming, and
Fault management and causal analysis. These themes seek to answer the
three key questions identified in
Section 2. We plan to do so by developing and/or
building on programming languages and techniques based on formal
methods and formal semantics (hence the use of “sound
programming” in the project-team title). In particular, we seek to
support design where correctness is obtained by construction, relying
on proven tools and verified constructs, with programming languages
and programming abstractions designed with verification in mind.
Work on this theme aims to develop models, languages and tools to support a “correct-by-construction” approach to the development of embedded systems.
On the programming side, we focus on the definition of domain specific programming models and languages supporting static analyses for the computation of precise resource bounds for program executions. We propose dataflow models supporting dynamicity while enjoying effective analyses. In particular, we study parametric extensions and dynamic reconfigurations where properties such as liveness and boundedness remain statically analyzable.
On the design side, we focus on the definition of component-based models
for software architectures combining distribution, dynamicity, real-time and fault-tolerant
aspects.
Component-based construction has long been advocated as a key approach
to the “correct-by-construction” design of complex embedded
systems 48, witness component-based toolsets such
as Ptolemy 37, BIP 30, or
the modular architecture frameworks used, for instance, in the
automotive industry (AUTOSAR) 28. For building large,
complex systems, a key feature of component-based construction is the
ability to associate with components a set of contracts, which
can be understood as rich behavioral types that can be composed and
verified to guarantee that a component assemblage will meet desired
properties.
Formal models for component-based design are an active area of
research. However, we are
still missing a comprehensive formal model and its associated
behavioral theory able to deal at the same time with different
forms of composition, dynamic component structures, and quantitative
constraints (such as timing, fault-tolerance, or energy consumption).
We plan to develop our component theory by progressing on two fronts:
a semantical framework and domain-specific programming models.
The work on the semantical framework should, in the longer term,
provide abstract mathematical models for the more operational and
linguistic analysis afforded by component calculi. Our work on
component theory will find its application in the development of a
Coq-based toolchain for the certified design and construction of
dependable embedded systems, which constitutes our first main
objective for this axis.
Programming real-time systems (i.e., systems whose correct behavior
depends on meeting timing constraints) requires appropriate languages
(as exemplified by the family of synchronous
languages 32), but also the support of
efficient scheduling policies, execution time and schedulability
analyses to guarantee real-time constraints (e.g., deadlines) while
making the most effective use of available (processing, memory, or
networking) resources. Schedulability analysis involves analyzing the
worst-case behavior of real-time tasks under a given scheduling
algorithm and is crucial to guarantee that time constraints are met in
any possible execution of the system. Reactive programming and
real-time scheduling and schedulability for multiprocessor systems are
old subjects, but they are nowhere near as mature as their uniprocessor
counterparts, and still feature a number of open research
questions 29, 36, in particular in
relation with mixed criticality systems. The main goal in this theme
is to address several of these open questions.
We intend to focus on two issues: multicriteria scheduling on
multiprocessors, and schedulability analysis for real-time
multiprocessor systems. Beyond real-time aspects, multiprocessor
environments, and multicore ones in particular, are subject to several
constraints in conjunction, typically involving real-time,
reliability and energy-efficiency constraints, making the scheduling
problem more complex for both the offline and the online
cases. Schedulability analysis for multiprocessor systems, in
particular for systems with mixed criticality tasks, is still very
much an open research area.
Distributed reactive programming is rightly singled out as a major open issue in the recent survey by Bainomugisha et al. 29, even though that survey is heavily biased in that it essentially ignores recent research in synchronous and dataflow programming. For our part, we intend to focus on devising synchronous programming languages for distributed systems and precision-timed architectures.
Managing faults is a clear and present necessity in networked embedded systems. At the hardware level, modern multicore architectures are manufactured using inherently unreliable technologies 33, 46. The evolution of embedded systems towards increasingly distributed architectures highlighted in the introductory section means that dealing with partial failures, as in Web-based distributed systems, becomes an important issue.
In this axis we intend to address the question of how to cope
with faults and failures in embedded systems. We will tackle this
question by exploiting reversible programming models and by developing
techniques for fault ascription and explanation in component-based
systems.
A common theme in this axis is the use and exploitation of causality
information. Causality, i.e., the logical dependence of an effect on a
cause, has long been studied in disciplines such as
philosophy 54, natural sciences,
law 55, and statistics 56, but it
has only recently emerged as an important focus of research in
computer science. The analysis of logical causality has applications
in many areas of computer science. For instance, tracking and
analyzing logical causality between events in the execution of a
concurrent system is required to ensure
reversibility 51, to allow the diagnosis of faults
in a complex concurrent system 47, or to enforce
accountability 50, that is, designing systems in
such a way that it can be determined without ambiguity whether a
required safety or security property has been violated, and why. More
generally, the goal of fault-tolerance can be understood as being to
prevent certain causal chains from occurring by designing systems such
that each causal chain either has its premises outside of the fault
model (e.g., by introducing redundancy 40), or is
broken (e.g., by limiting fault propagation 58).
Our applications are in the embedded system area, typically: transportation, energy production, robotics, telecommunications, the Internet of things (IoT), systems on chip (SoC). In some areas, safety is critical, and motivates the investment in formal methods and techniques for design. But even in less critical contexts, like telecommunications and multimedia, these techniques can be beneficial in improving the efficiency and the quality of designs, as well as the cost of the programming and the validation processes.
Industrial acceptance and deployment of formal techniques necessarily
require that they be usable by specialists of the application domain
rather than by specialists of the formal techniques themselves.
Hence, we aim to propose domain-specific (but generic) realistic
models, validated through experience (e.g., control task systems),
based on formal techniques with a high degree of
automation (e.g., synchronous models), and tailored for concrete
functionalities (e.g., code generation).
We also consider the development of formal tools that can certify the results produced by industrial tools (see, e.g., CertiCAN in Sec. 7.2.2).
Regarding applications and case studies with industrial end-users of our techniques, we cooperate with Orange Labs on software architecture for cloud services. We also collaborate with RTaW regarding the integration of our CAN-bus analysis certifier (CertiCAN) in the RTaW-Pegase program suite.
With the help of the GES 1point5 tool we have estimated the direct carbon footprint of our research activities in 2023. Our estimation is based on data gathered in a non-automated manner, as no tool automating the data extraction is available yet.
Professional travel, including visits by jury members, amounts to a total of 4.0 t CO2.
Our research on certification and fault-tolerance aims at making embedded systems safer. Certified systems also tend to be simpler, less dependent on updates, and therefore less prone to obsolescence. A potential major application of causality analysis is to help establish liability for accidents caused by software errors.
On the other hand, our research may contribute to making more acceptable, or even to promoting, problematic systems such as IoT, drones, avionics, or autonomous vehicles, with a potentially negative environmental impact.
Sophie Quinton and Éric Tannier (from the BEAGLE team in Lyon), with the help of many colleagues, including some in the SPADES team, have set up a series of one-day workshops called “Ateliers SEnS” (for Sciences-Environnements-Sociétés), which offer a venue for members of the research community (in particular, but not limited to, researchers) to reflect on the social and environmental implications of their research. More than 50 Ateliers SEnS have taken place so far, all across France and beyond INRIA and the computer science field. Participants in a workshop can replicate it, and quite a few have already done so. Sophie Quinton facilitated 6 Ateliers SEnS in 2023.
Research into the connection between ICT (Information and Communication Technologies) and the environmental crisis started in 2020 within the SPADES team; see Section 7.4.
The multiplication of models, languages, APIs and tools for cloud and network configuration management raises heterogeneity issues that can be tackled by introducing a reference model. A reference model provides a common basis for interpretation for various models and languages, and for bridging different APIs and tools. The Cloudnet Computational Model formally specifies, in the Alloy specification language, a reference model for cloud configuration management. The Cloudnet software formally interprets several configuration languages in it, including the TOSCA configuration language, the OpenStack Heat Orchestration Template and the Docker Compose configuration language.
The use of the software shows, for example, how the Alloy formalization allowed us to discover several classes of errors in the OpenStack HOT specification.
Application of the Cloudnet model developed by Inria to software network deployment and reconfiguration description languages.
The Cloudnet model allows syntax and type checking for cloud configuration templates as well as their visualization (network diagram, UML deployment diagram). Three languages are addressed for the moment with the modules:
* Cloudnet TOSCA toolbox for TOSCA, including NFV descriptions
* cloudnet-hot for HOT (Heat Orchestration Template) from OpenStack
* cloudnet-compose for Docker Compose
The software can be used directly from an Orange web portal: https://toscatoolbox.orange.com
We have developed a Coq-based framework to formally verify the functional and fault-tolerance properties of circuit transformations. Circuits are described at the gate level using LDDL, a Low-level Dependent Description Language inspired by muFP. Our combinator language, equipped with dependent types, ensures that circuits are well-formed by construction (gates correctly plugged, no dangling wires, no combinational loops, ...). Fault-tolerance techniques can be described as transformations of LDDL circuits.
The framework has been used to prove the correctness of three fault-tolerance techniques for SETs (Single Event Transients): TMR (the classic triple modular redundancy) and two new time redundancy techniques developed within the Spades team: TTR and DTR. More recently, LDDL has been used to prove the correctness of TMR+, a modified TMR able to tolerate SEMTs (Single Event Multiple Transients), a more involved type of fault.
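As a rough illustration of the idea behind such transformations (a minimal Python sketch with an assumed netlist representation, not the LDDL combinators nor their Coq formalization), TMR can be viewed as a function that triplicates every gate of a circuit and inserts a majority voter on each primary output:

```python
# Illustrative only: TMR as a netlist transformation (not the LDDL formalization).
# A gate is (output_wire, operation, input_wires); a circuit is a list of gates
# plus its primary inputs and outputs.

def rename(wire, copy, primary_inputs):
    """Suffix internal wires with the copy index; primary inputs are shared."""
    return wire if wire in primary_inputs else f"{wire}_{copy}"

def tmr(gates, primary_inputs, primary_outputs):
    """Triplicate every gate and add a majority voter on each primary output."""
    new_gates = []
    for copy in range(3):
        for out, op, ins in gates:
            new_gates.append((rename(out, copy, primary_inputs), op,
                              [rename(i, copy, primary_inputs) for i in ins]))
    for out in primary_outputs:
        copies = [rename(out, c, primary_inputs) for c in range(3)]
        new_gates.append((out, "vote", copies))  # majority voter on the output
    return new_gates

# A single AND gate with output z and inputs a, b.
print(tmr([("z", "and", ["a", "b"])], primary_inputs={"a", "b"}, primary_outputs=["z"]))
```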
The specifications of the framework (LDDL syntax and semantics, libraries, tactics) are made of 5000 lines of Coq (excluding comments and blank lines). The correctness proofs of fault-tolerance techniques are made of 700 lines of Coq for TMR, 700 for TMR+, 3500 for TTR and 7000 for DTR.
The MASTAG software computes sequential schedules of a task graph or an SDF graph in order to minimize its memory peak.
MASTAG is made of several components: (1) a set of local transformations that compress a task graph while preserving its optimal memory peak, (2) an optimized branch-and-bound algorithm able to find optimal schedules for medium-sized (30-50 nodes) task graphs, (3) support to accommodate SDF graphs, in particular their conversion into task graphs and a suboptimal technique to reduce their size.
MASTAG finds optimal schedules in polynomial time for a wide range of directed acyclic task graphs (DAGs), including trees and series-parallel DAGs. On classic benchmarks, MASTAG always outperforms the state of the art.
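For illustration, the quantity MASTAG minimizes can be computed as follows (a minimal sketch with an assumed edge-buffer model, not the tool's actual data structures): the memory peak of a sequential schedule is the maximum amount of buffer memory simultaneously live when the tasks are executed in that order.

```python
# Illustrative only: memory peak of a sequential schedule of a task graph.
# Each edge (producer, consumer) carries a buffer of a given size, allocated
# when the producer runs and freed once its consumer has run.

def memory_peak(edges, schedule):
    """edges: dict {(producer, consumer): buffer_size}; schedule: ordered tasks."""
    live, current, peak = set(), 0, 0
    for task in schedule:
        for (p, c), size in edges.items():       # allocate the task's output buffers
            if p == task:
                live.add((p, c)); current += size
        peak = max(peak, current)
        for (p, c), size in edges.items():       # free the buffers it consumed
            if c == task and (p, c) in live:
                live.remove((p, c)); current -= size
    return peak

# Two schedules of the same graph can have different peaks.
edges = {("a", "b"): 4, ("a", "c"): 1, ("b", "d"): 1, ("c", "d"): 4}
print(memory_peak(edges, ["a", "b", "c", "d"]))   # 6
print(memory_peak(edges, ["a", "c", "b", "d"]))   # 9
```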
Dataflow Models of Computation (MoCs) are widely used in embedded systems, including multimedia processing, digital signal processing, telecommunications, and automatic control. One of the first and most popular dataflow MoCs, Synchronous Dataflow (SDF), provides static analyses to guarantee boundedness and liveness, which are key properties for embedded systems. However, SDF and most of its variants lack the capability to express the dynamism needed by modern streaming applications.
For many years, the Spades team has been working on more expressive and dynamic models that nevertheless allow the static analyses of boundedness and liveness. We have proposed several parametric dataflow models of computation (MoCs) (SPDF 39 and BPDF 31), we have written a survey providing a comprehensive description of the existing parametric dataflow MoCs 34, we have studied symbolic analyses of dataflow graphs 35 and an original method to deal with lossy communication channels in dataflow graphs 38. We have also proposed the RDF (Reconfigurable Dataflow) MoC 3 which allows dynamic reconfigurations of the topology of the dataflow graphs. RDF extends SDF with transformation rules that specify how the topology and actors of the graph may be dynamically reconfigured. The major feature and advantage of RDF is that it can be statically analyzed to guarantee that all possible graphs generated at runtime will be connected, consistent, and live, which in turn guarantees that they can be executed in bounded time and bounded memory. To the best of our knowledge, RDF is the only dataflow MoC allowing an arbitrary number of topological reconfigurations while remaining statically analyzable.
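For the record, the kind of static analysis such MoCs build on can be illustrated by the textbook SDF consistency check, which solves the balance equations of the graph to obtain its repetition vector (a minimal sketch with assumed names, not the RDF analysis itself):

```python
from fractions import Fraction
from math import lcm

# Illustrative only: the textbook SDF consistency check, i.e., solving the balance
# equations of a (connected) graph to obtain its repetition vector.
# edges: list of (producer, production_rate, consumer, consumption_rate).

def repetition_vector(actors, edges):
    rate = {actors[0]: Fraction(1)}              # fix one actor, propagate ratios
    changed = True
    while changed:
        changed = False
        for u, p, v, c in edges:
            if u in rate and v not in rate:
                rate[v] = rate[u] * p / c; changed = True
            elif v in rate and u not in rate:
                rate[u] = rate[v] * c / p; changed = True
            elif u in rate and v in rate and rate[u] * p != rate[v] * c:
                return None                      # inconsistent graph: unbounded buffers
    k = lcm(*(r.denominator for r in rate.values()))
    return {a: int(r * k) for a, r in rate.items()}

# A produces 2 tokens per firing, B consumes 3: fire A three times and B twice.
print(repetition_vector(["A", "B"], [("A", 2, "B", 3)]))  # {'A': 3, 'B': 2}
```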
In 2022, we started an exploratory action (see Section 9.2) to study the potential of dataflow MoCs for the implementation of neural networks. We started by working on the reduction of the memory footprint of task graphs scheduled on unicore processors. This is motivated by the fact that some recent neural networks, such as GPT-3, seen as task graphs, use too much memory and cannot fit on a single GPU.
We have proposed graph transformations that compress the given task graph while preserving its optimal memory peak. We have proved that these transformations always compress Series-Parallel Directed Acyclic Graphs (SP-DAGs) to a single node representing their optimal schedule 18. For graphs that cannot be compressed to a single node, we have designed an optimized branch and bound algorithm able to find optimal schedules for medium sized (30-50 nodes) task graphs. Our approach also applies to SDF graphs after converting them to task graphs. However, since that conversion may produce very large graphs, we also propose a new suboptimal method, similar to Partial Expansion Graphs, to reduce the problem size. We evaluated our approach on classic benchmarks, on which we always outperform the state-of-the-art.
Another technique used by memory-greedy neural networks is activity and gradient checkpointing (a.k.a. rematerialization), which recomputes intermediate values rather than keeping them in memory. We are currently studying rematerialization in the more general dataflow framework.
We have published a comprehensive paper about the Affine DataFlow Graph (ADFG) theory and software 13. ADFG synthesizes task periods of real-time embedded applications modeled by SDF graphs. This paper concludes 10 years of work on the ADFG open-source software.
We have applied the ADFG theory to the domain of reconfigurable processors (FPGAs) 12. With the help of a few new equations, the theory of ADFG is adapted to minimize the buffer sizes of dataflow applications modeled by SDF graphs and executed on FPGAs. This is particularly important for FPGAs, which have limited embedded memory. The corresponding open-source software, PREESM, is developed at INSA Rennes.
Embedded real-time systems are tightly integrated with their physical environment. Their correctness depends both on the outputs and on the timeliness of their computations. The increasing use of multi-core processors in such systems is pushing embedded programmers to become parallel programming experts. However, parallel programming is challenging because of the skills, experience, and knowledge needed to avoid common parallel programming traps and pitfalls. We have proposed the ForeC synchronous multi-threaded programming language for the deterministic, parallel, and reactive programming of embedded multi-cores. The synchronous semantics of ForeC is designed to greatly simplify the understanding and debugging of parallel programs. ForeC ensures that programs can be compiled efficiently for parallel execution and are amenable to static timing analysis. ForeC's main innovation is its shared variable semantics, which provides thread isolation and deterministic thread communication. All ForeC programs are correct by construction and deadlock-free because no non-deterministic constructs are needed. We have benchmarked our ForeC compiler with several medium-sized programs (e.g., a
This topic has been a long-running effort, since we started working on ForeC in 2013 in the context of the PhD of Eugene Yip 61. It took time to finalize this work, with the final contribution in 2019 on multi-clock ForeC programs 45, paving the way for the long article published in 2023 15.
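To give a feel for the shared-variable discipline described above (a Python analogy only, not ForeC syntax; the combine function shown is an assumption), each thread works on an isolated copy of the shared variables during a tick, and the copies are merged deterministically at the tick boundary:

```python
# Illustrative only (Python analogy, not ForeC syntax): within a tick each thread
# updates a private copy of the shared variables, and the copies are merged with
# a deterministic combine function at the tick boundary, so the outcome does not
# depend on the interleaving of threads.

def tick(shared, threads, combine):
    """Run one synchronous tick: isolate, execute, then combine local copies."""
    local_copies = [dict(shared) for _ in threads]          # thread isolation
    for thread, copy in zip(threads, local_copies):
        thread(copy)                                        # side effects on the copy only
    for var in shared:
        shared[var] = combine([c[var] for c in local_copies], shared[var])
    return shared

# Two threads incrementing the same counter; contributions are combined by summing.
t1 = lambda env: env.update(count=env["count"] + 1)
t2 = lambda env: env.update(count=env["count"] + 2)
combine_sum = lambda copies, old: old + sum(c - old for c in copies)
print(tick({"count": 0}, [t1, t2], combine_sum))            # {'count': 3}
```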
Since 2017 we have been working on a very general model of real-time systems, made of a single-core processor equipped with DVFS and an infinite sequence of preemptive real-time jobs. Each job is characterized by its inter-arrival time, its actual execution time (AET), and its relative deadline. Jobs are non-clairvoyant, meaning that, at release time, only statistical information is available on the jobs' characteristics: release time, AET, and relative deadline.
In this context, we have proposed a Markov Decision Process (MDP) solution to compute the optimal online speed policy guaranteeing that each job completes before its deadline while minimizing the energy consumption. To the best of our knowledge, our MDP solution is the first to be optimal. We have also provided counter-examples to prove that the two previous state-of-the-art algorithms, namely OA 60 and PACE 52, are both sub-optimal. Finally, we have proposed a new heuristic online speed policy called Expected Load (EL), which incorporates an aggregated term representing the future expected jobs into a speed equation similar to that of OA. A journal paper is currently under review.
Simulations show that our MDP solution outperforms the existing online solutions (OA, PACE, and EL), and can be very attractive in particular when the mean value of the execution time distribution is far from the WCET.
This was the topic of Stephan Plassart's PhD 57, 41, 43, 42, funded by the Caserm Persyval project; he defended his PhD in June 2020.
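To illustrate the dynamic-programming structure behind such an MDP policy (a drastically simplified, single-job sketch with toy parameters and a cubic power model; the published model handles an infinite stream of jobs and is considerably richer):

```python
# Illustrative only: a drastically simplified, single-job version of the
# dynamic-programming idea. The job's actual execution time follows a known
# distribution `dist` (in work units), the deadline is D unit-length steps away,
# speeds are taken in a finite set, and energy per step is speed**3.
# The policy always keeps enough slack to finish the worst case W in time.

SPEEDS = [0, 1, 2, 3]              # available speeds (work units per time step)
W, D = 4, 3                        # worst-case work and deadline (in steps)
dist = {2: 0.5, 3: 0.3, 4: 0.2}    # distribution of the actual execution time

def p_more_than(x):                # probability that the actual work exceeds x
    return sum(p for c, p in dist.items() if c > x)

V = {}                             # V[x, t]: expected energy-to-go, job unfinished

def value(x, t):
    if (x, t) not in V:
        best = float("inf")
        for s in SPEEDS:
            # Worst-case remaining work must still fit in the remaining steps.
            if W - x - s > max(SPEEDS) * (D - t - 1):
                continue
            cont = p_more_than(x + s) / p_more_than(x)   # P(job still unfinished)
            future = value(x + s, t + 1) if cont > 0 else 0.0
            best = min(best, s ** 3 + cont * future)
        V[x, t] = best
    return V[x, t]

print(value(0, 0))                 # expected energy of the optimal online policy
```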
We contribute to
Prosa 27, a Coq library of reusable concepts and proofs
for real-time systems analysis. A key scientific challenge is to
achieve a modular structure of proofs, e.g., for response time
analysis. Our goal is to use this library for the certification of (results of) existing analysis techniques or tools.
We have developed CertiCAN, a tool produced using the Coq proof assistant, allowing the formal certification of CAN bus analysis results. CertiCAN is able to certify the results of industrial CAN analysis tools, even for large systems. We have described this work in a long journal article 11.
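For reference, the bounds being certified are of the classical busy-window form shown below (an illustrative, simplified formulation; the analyses actually certified by CertiCAN may differ in details such as jitter and fault handling). Here $B_m$ is the blocking time due to lower-priority frames, $C_k$ the frame transmission times, $T_k$ the periods, $hp(m)$ the set of higher-priority frames, and $\tau_{bit}$ the transmission time of one bit:

$$ R_m = w_m + C_m, \qquad w_m = B_m + \sum_{k \in hp(m)} \left\lceil \frac{w_m + \tau_{bit}}{T_k} \right\rceil C_k, $$

where $w_m$ is the smallest solution of the fixed-point equation, computed by iteration.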
The work on the formalization in Prosa of Compositional Performance Analysis is still ongoing.
Model-Based Diagnosis of discrete event systems (DES) usually aims at
detecting failures and isolating faulty event occurrences based on a
behavioural model of the system and an observable execution log. The
strength of a diagnostic process is to determine what
happened that is consistent with the observations. In order to go a
step further and explain why the observed outcome occurred,
we borrow techniques from causal analysis. We are currently exploring
techniques that are able to extract, from an execution trace, the
causally relevant part for a property violation.
In particular, as part of the SEC project, we are investigating how such techniques can be extended to classes of hybrid systems. As a first result we have studied the problem of explaining faults in real-time systems 53. We have provided a formal definition of causal explanations on dense-time models, based on the well-studied formalisms of timed automata and zone-based abstractions. We have proposed a symbolic formalization to effectively construct such explanations, which we have implemented in a prototype tool. Basically, our explanations identify the parts of a run that move the system closer to the violation of an expected safety property, where safe alternative moves would have been possible.
We have recently generalized the work of 53 and defined robustness functions as a family of mappings from system states to a scalar that, intuitively, associate with each state its distance to the violation of a given safety requirement, e.g., in terms of the remaining number of bad system moves or of the time remaining to react. An explanation then summarizes the portions of the execution on which robustness decreases.
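Written out (an illustrative formulation of the informal description above; the published definition may differ in details), a robustness function is a mapping $\rho : S \to \mathbb{R}_{\ge 0}$ such that $\rho(s) = 0$ whenever $s$ violates the safety requirement, and, along a run $s_0 \xrightarrow{a_1} s_1 \xrightarrow{a_2} \cdots$, an explanation retains exactly those steps $s_{i-1} \xrightarrow{a_i} s_i$ for which $\rho(s_i) < \rho(s_{i-1})$.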
However, as our instantiation of robustness in 53 is defined on a discrete abstraction, robustness may decrease in discrete steps once some timing threshold is crossed, thus exonerating the preceding absence of action. We are currently working on a truly hybrid definition of robustness functions that “anticipate” such thresholds, hence ensuring a smooth decrease indicating early when a dangerous event is approaching.
As part of the DCore project on causal debugging of concurrent programs, the goal of Aurélie Kong Win Chang's PhD thesis is to investigate the use of abstractions to construct causal explanations for Erlang programs. We are interested in developing abstractions that "compose well" with causal analyses, and in understanding precisely how explanations found on the abstraction relate to explanations on the concrete system. It is worth noting that the presence of abstraction, which inherently comes with some induction and extrapolation processes, completely recasts the issue of reasoning about causality. Causal traces no longer describe only potential scenarios in the concrete semantics, but also mix in some approximation steps coming from the computation of the abstraction itself. Therefore, not all explanations are replayable counter-examples: they may contain steps witnessing some lack of accuracy in the analysis. Conversely, a research question to be addressed is how to define causal analyses that have a well-understood behavior under abstraction.
In 19 we have formalized a small-step
semantics for a subset of Core Erlang that models, in particular, its
monitoring and signal systems. Having a precise representation of
these aspects is crucial to explain unexpected behaviors such as concurrency bugs stemming from non-determinism in the handling of
messages.
We are currently working on a formalization of an abstract Erlang semantics that allows for a finite abstraction while still accounting for the exchanges of messages and signals between processes.
Concurrent and distributed debugging is a promising application of the notion of reversible computation 44. As part of the ANR DCore project, we contribute to the theory behind, and the development of, the CauDEr reversible debugger for the Erlang programming language and system.
This year we have continued our work on two main themes: studying reversibility for distributed programs in the presence of node and link failures with recovery, and studying reversibility for concurrent programs using a shared memory concurrency model.
Concerning reversibility for distributed programs, we have developed a novel process calculus, called D
Concerning reversibility for shared memory concurrency, we have developed a modular operational semantics framework for defining different shared memory concurrency models, including various lock-based weak memory models and transactional memory models. We have proved strong equivalence results between the original formal operational semantics of these different memory models and the operational semantics obtained using our framework. We have also started working on a general theory for reversing synchronization products of transition systems with independence with the hope to directly apply it to our shared memory framework.
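As a minimal illustration of the underlying principle (a Python sketch that ignores the causal-consistency constraints the actual theory enforces, and is not the framework's operational semantics), each forward step on the shared memory logs the information needed to undo it:

```python
# Illustrative only: every forward step on a shared memory logs the information
# needed to undo it, which is the basic ingredient of reversible execution.

class ReversibleMemory:
    def __init__(self):
        self.store, self.history = {}, []      # current state + undo log

    def write(self, tid, var, value):
        old = self.store.get(var)
        self.history.append((tid, var, old))   # remember what the step overwrote
        self.store[var] = value

    def undo_last(self, tid):
        """Undo the most recent step of thread `tid`."""
        for i in range(len(self.history) - 1, -1, -1):
            t, var, old = self.history[i]
            if t == tid:
                del self.history[i]
                if old is None:
                    self.store.pop(var, None)
                else:
                    self.store[var] = old
                return

m = ReversibleMemory()
m.write("t1", "x", 1); m.write("t2", "y", 2); m.write("t1", "x", 3)
m.undo_last("t1")                               # rolls x back to 1, leaves y alone
print(m.store)                                  # {'x': 1, 'y': 2}
```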
Digital circuits are subject to transient faults caused by high-energy particles. As technology scales down, a single particle becomes likely to induce transient faults in several adjacent components. These single-event multiple transients (SEMTs) are becoming a major issue for modern digital circuits.
We have studied how to formalize SEMTs and how the standard triple modular redundancy (TMR) technique can be modified so that, along with some placement constraints, it completely masks SEMTs 25. We specified this technique, denoted by TMR+, as a circuit transformation on the LDDL syntax (see 6.1.3) and the fault models for SEMTs as particular semantics of LDDL. We show that, for any circuit, its transformation by TMR+ masks all faults of the considered SEMT fault model. All this development was formalized in the Coq proof assistant where fault-tolerance properties are expressed and formally proved.
Digital technologies are often presented as a powerful ally in the fight against climate change (see e.g., the discourse around the “convergence of the digital and the ecological transitions”). The SPADES team has started working together on a project proposal to investigate the current role played by ICT in the Anthropocene as well as new approaches to their design. We have identified the following main challenges: How do local measures meant to reduce the environmental impact of ICT relate (or not) to global effects? What can we learn from, and what are the limits of, current quantitative approaches for environmental impact assessment and their use for public debate and policy making? Which criteria could/should we take into account to design more responsible computer systems (other than efficiency, which is already well covered and subject to huge rebound effects in the case of digital technologies)? To come up with a solid research agenda, we are thus studying the state of the art of many new topics 14, including STS (Science and Technology Studies), low tech software and hardware, lifecycle assessment, (digital) commons... A new network of collaborations is also in the making, in particular with colleagues from social sciences. See 23 for a possible topic of interdisciplinary research. Besides, Baptiste de Goër has just started a PhD focusing on how to integrate ICT-related sustainability issues in computer science courses 22.
In the context of Aina Rasoldier's PhD, we have been working on estimating the potential of ridesharing as a solution for reducing the GHG emissions of commuting. Ridesharing is one of the solutions put forward by local authorities to reduce the carbon footprint of individual travel. But it is far from granted that this solution can achieve the long-term objectives stated by the French government in its “Stratégie Nationale Bas Carbone” and implemented locally in the “Plan de Déplacements Urbains” of the Grenoble metropolitan area. We have focused on daily peer-to-peer ridesharing (also called car-pooling), in which people travel using the personal vehicle of one of them. Moreover, we consider prearranged (also called static, or organized) ridesharing, which assumes that people know in advance their travel needs for the entire day and use digital platforms to find a match (i.e., to find passengers when one is driving her/his own car, or to find a car when one is a passenger). We have considered two matching schemes between drivers and passengers: on the one hand identical ridesharing, where drivers and passengers can only carpool if their origins (and destinations) are close, and on the other hand inclusive ridesharing, where passengers can be picked up and dropped off along the driver's route if the passenger's origin and destination are close to that route. In both cases, close refers to a maximal walking distance for the passenger to reach the driver, and to a maximal time between her or his desired starting time and the driver's actual starting time. Our evaluation of the ridesharing potential is based on a synthetic travel demand computed using the existing software from Hörl et al. 49, which we ran on the public data for the Grenoble metropolitan area. Based on this population synthesis, we have developed an ad-hoc matching algorithm to evaluate the maximum potential offered by ridesharing. Extensive simulations performed with our algorithm show that reaching the goals stated in the Grenoble PDU would require at least
DCore is an ANR project between Inria project teams Antique, Focus and Spades, and the Irif lab, running from 2019 to 2024.
The overall objective of the project is to develop a semantically
well-founded, novel form of concurrent debugging, which we call causal debugging, that aims to alleviate the deficiencies of
current debugging techniques for large concurrent software systems.
The causal debugging technology developed by DCore will comprise and
integrate two main novel engines:
LiberAbaci is a project between Inria project teams Cambium, Camus, Gallinette, Spades, Stamp, Toccata, and the Laboratoire d'Informatique de Paris-Nord. The overall objective is to study how the Coq proof assistant could be used in university mathematics courses to help teach proofs. At Spades, Martin Bodin is working with the IREM de Grenoble to involve math teachers and didactics researchers in the project.
The DF4DL action is funded by Inria's DGDS. It aims at exploring the use of the dataflow model of computation to better program deep neural networks.
As a first step, we have studied the problem of minimizing the peak memory requirement for the execution of a dataflow graph. This is of paramount importance for deep neural networks since the largest ones cannot fit on a single core due to their very high memory requirement. We have proposed different techniques in order to find a sequential schedule minimizing the memory peak (see 7.1.1).
Another technique used by memory-greedy neural networks is rematerialization, which recomputes intermediate values rather than keeping them in memory. We are currently studying rematerialization in the dataflow framework.
The SIA Exploratory Research project, supported by INRIA's DGDS, funds the PhD work of Baptiste de Goër and provides funding for an upcoming postdoctoral fellow in Sciences and Technology Studies.
The goal of the project is to provide interdisciplinary foundations for studying the complex relationship between computer science, information and communication technologies (ICT), society and the environment. We approach the problem from three complementary perspectives: 1) by contributing to an interdisciplinary overview of the state of knowledge on the environmental impacts of ICT; 2) by studying the complex connection between computer science and the Anthropocene through the way it is and could be taught in secondary schools; 3) by exploring, at a local scale, the possibility to deploy frugal or low tech alternatives to existing digital systems, following a participatory approach.