Ubiquitous Computing refers to the situation in which computing facilities are embedded or integrated into everyday objects and activities. Networks are large-scale, including both hardware devices and software agents. The systems are highly mobile and dynamic: programs or devices may move and often execute in networks owned and operated by others; new devices or software pieces may be added; the operating environment or the software requirements may change. The systems are also heterogeneous and open: the pieces that form a system may be quite different from each other, built by different people or industries, even using different infrastructures or programming languages; the constituents of a system have only partial knowledge of the overall system, and may only know, or be aware of, a subset of the entities that operate in the system.
A prominent recent phenomenon in Computer Science is the emergence of
interaction and communication as key architectural and programming
concepts. This is especially visible in ubiquitous systems.
Complex distributed systems are being thought of and designed as structured compositions of computational units, usually referred to as components. These components are expected to interact with each other, and such interactions are expected to be orchestrated into conversations and dialogues. In the remainder, we will write CBUS for Component-Based Ubiquitous Systems.
In CBUS, the systems are complex. As with complex systems in other disciplines, such as physics, economics, and biology, theories are needed that allow us to understand the systems, to design or program them, and to analyze them.
Focus investigates the semantic foundations for CBUS. The foundations are intended to be instrumental in formalizing and verifying important computational properties of the systems, as well as in proposing linguistic constructs for them. Prototypes are developed to test the implementability and usability of the models and the techniques. Throughout our work, “interaction” and “component” are central concepts.
The members of the project have a solid experience in algebraic and logical models of computation, and in related techniques, and this is the basis for our study of ubiquitous systems. The use of foundational models inevitably leads to opportunities for developing the foundational models themselves, with particular interest in issues of expressiveness and in the transplant of concepts or techniques from one model to another.
The objective of Focus is to develop concepts, techniques, and
possibly also tools, that may contribute to the analysis and synthesis
of CBUS.
Fundamental to these activities is modeling. Therefore, designing, developing, and studying computational models appropriate for CBUS is a central activity of the project.
The models are used
to formalize and verify important computational
properties of the systems, as well as to propose new linguistic constructs.
The models we study are in the tradition of process calculi (e.g., the π-calculus).
Modern distributed
systems have witnessed a clear shift
towards interaction and conversations
as
basic building blocks
for software architects and programmers.
The systems are made of components that interact and carry out dialogues in order to achieve some predefined goal; Web services are a good example of this.
Process calculi are models that have been designed precisely with the
goal
of understanding interaction and composition.
The theory and tools that have been developed on top of process calculi can provide a basis on which CBUS challenges can be tackled. Indeed, industrial proposals of languages for Web services, such as BPEL (Business Process Execution Language), are strongly inspired by process calculi, notably the π-calculus.
Type systems and logics for reasoning on computations are among the most successful outcomes in the history of programming language research. A number of elegant and powerful results have been obtained in implicit computational complexity, concerning the characterization of complexity classes by means of type systems and logics.
The main application domain for Focus is ubiquitous systems, i.e., systems whose distinctive features are: mobility; high dynamicity; heterogeneity; variable availability (the availability of services offered by the constituent parts of a system may fluctuate, and similarly the guarantees offered by single components may not be the same all the time); open-endedness; and complexity (the systems are made of a large number of components, with sophisticated architectural structures). In Focus we are particularly interested in the following aspects.
Today the component-based methodology often refers to Service-Oriented Computing, a specialized form of the component-based approach. According to the W3C, a service-oriented architecture is “a set of components which can be invoked, and whose interface descriptions can be published and discovered”. In the early days of Service-Oriented Computing, the term “service” was strictly related to that of Web Service. Nowadays it has a much broader meaning, as exemplified by the XaaS (everything as a service) paradigm: building on modern virtualization technologies, Cloud computing offers the possibility to build sophisticated service systems on virtualized infrastructures accessible from everywhere and from any kind of computing device. Such infrastructures are usually examples of sophisticated service-oriented architectures that, unlike traditional service systems, should also be capable of elastically adapting on demand to user requests.
Release 1.11 includes almost 500 commits by 14 different contributors, and it is one of the biggest releases so far!
The main changes include:
- New call-expressions. You can now call solicit-response operations in expressions, for example (in the condition of a conditional): `if( trim@stringUtils( "Hello " ) == "Hello" ) println@console( "Hooray!" )()`
- New if-expressions, for example: `x = if( condition ) expr1 else expr2`.
- A more modern alternative throw syntax: `throw fault( data )`.
- New engine for error reporting. For example, when you misspell a keyword or a type name, Jolie now looks at the context and tries to guess which word you were trying to type by using edit distance.
- New templating engine for HTTP.
- Improved management of open channels. Jolie services with many interactions with the same clients/services now consume noticeably fewer connections and RAM.
- New standard library operations, including a library for assertions, a library for vector manipulation, operations for URL encoding and decoding, an operation for string interpolation, and support for reading lines synchronously from the terminal.
- Support for Mustache templates (see packages/mustache.ol).
- Improved tracing messages for protocol internals.
- Many bugfixes to HTTP, SOAP, XML, and JSON data handling.
- `is_defined` now correctly returns `true` for defined variables of type `void`.
Eco-imp was originally envisaged as a cost analyzer for probabilistic and non-deterministic imperative programs. In particular, it features dedicated support for sampling from distributions, and can thereby accurately reason about the average-case complexity of randomized algorithms in a fully automatic fashion. The tool is based on an adaptation of the ert-calculus of Kaminski et al., extended to the more general setting of cost analysis, where the programmer is free to specify a (non-uniform) cost measure on programs. The main distinctive feature of eco-imp, though, is the combination of this calculus with an expected-value analysis. This provides the glue to analyze program components in complete independence; that is, the analysis is modular and thus scalable.
In its most recent version, the tool has been completely overhauled. On the one hand, it is capable of reasoning about a broader set of events, through the analysis of expected outcomes. On the other hand, the new version also supports recursive procedures.
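To make the flavor of such analyses concrete, here is a minimal randomized loop together with the expected-cost argument that an ert-style calculus automates; the sketch is plain Java chosen for illustration, not eco-imp's actual input language.

```java
import java.util.Random;

// Illustrative sketch (not eco-imp's input language): a randomized loop
// whose expected cost an ert-style calculus bounds compositionally.
public class GeometricLoop {
    public static int run(Random rng) {
        int ticks = 0;                  // cost measure: one unit per iteration
        while (rng.nextBoolean()) {     // fair coin: continue with probability 1/2
            ticks++;
        }
        // The number of iterations N is geometric: P(N = k) = (1/2)^(k+1),
        // hence the expected cost is E[N] = sum_k k * (1/2)^(k+1) = 1.
        return ticks;
    }

    public static void main(String[] args) {
        System.out.println(run(new Random()));
    }
}
```

Tools in the eco-imp tradition derive such expected bounds automatically, and modularity means that the bound obtained for `run` can be reused wherever the procedure is called.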
The adoption of edge/fog systems and the introduction of privacy-preserving regulations call for tools for expressing complex data queries in an ephemeral way, i.e., ensuring that the queried data does not persist.
Database engines partially address this need, as they provide domain-specific languages for querying data. Unfortunately, using a database in an ephemeral setting raises inessential issues related to throughput bottlenecks, scalability, dependency management, and security (e.g., query injection). Moreover, databases can impose specific data structures and data formats, which can hinder the development of microservice architectures that integrate heterogeneous systems and handle semi-structured data.
Tquery is the first query framework designed for ephemeral data handling in microservices. Tquery combines the benefits of a technology-agnostic, microservice-oriented programming language, Jolie, with those of one of the most widely used query languages for semi-structured data in microservices, the MongoDB aggregation framework. With Tquery, users express in a terse syntax how to collect data from heterogeneous sources and how to query it in local memory, defining pipelines of high-level operators. The development of Tquery follows a "cleanroom software engineering" process, based on the definition of a theory for querying semi-structured data that is compatible with Jolie and inspired by a consistent variant of the key operators of the MongoDB aggregation framework.
Tquery is a query framework integrated into the Jolie language for the data handling/querying of Jolie trees.
Tquery is based on a tree-based instantiation (language and semantics) of MQuery, a formalisation of a sound fragment of the Aggregation Framework, the query language of the most popular document-oriented database: MongoDB.
Tree-shaped documents are the main format in which data flows within modern digital systems, e.g., eHealth, the Internet of Things, and Edge Computing. Tquery is particularly suited for developing real-time, ephemeral scenarios, where data shall not persist in the system.
Stable release of Tquery, which includes the following operators: Match, Unwind, Project, Group, Join, and Pipeline.
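To convey the intuition behind such operator pipelines, the following self-contained Java sketch chains a match stage and a project stage over in-memory, tree-shaped data; the operator names mirror Tquery's, but the code is purely illustrative and is not Tquery's actual Jolie API.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative "pipeline of operators" over in-memory, tree-shaped data.
public class PipelineSketch {
    // match: keep only the documents where key maps to value.
    static List<Map<String, Object>> match(List<Map<String, Object>> docs,
                                           String key, Object value) {
        return docs.stream()
                   .filter(d -> value.equals(d.get(key)))
                   .collect(Collectors.toList());
    }

    // project: keep only the selected fields of each document.
    static List<Map<String, Object>> project(List<Map<String, Object>> docs,
                                             String... keys) {
        return docs.stream().map(d -> {
            Map<String, Object> out = new HashMap<>();
            for (String k : keys) if (d.containsKey(k)) out.put(k, d.get(k));
            return out;
        }).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Object> ada = Map.of("name", "Ada", "ward", "A", "temp", 37.2);
        Map<String, Object> bob = Map.of("name", "Bob", "ward", "B", "temp", 39.1);
        List<Map<String, Object>> patients = List.of(ada, bob);
        // Pipeline: select ward-B records, then project name and temperature.
        System.out.println(project(match(patients, "ward", "B"), "name", "temp"));
    }
}
```

Because everything happens in local memory and nothing is written to disk, the queried data remains ephemeral, which is the point of the framework.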
In 2023 there were no updates.
Serverless computing is a Cloud development paradigm where developers write and compose stateless functions, abstracting from their deployment and scaling.
APP (Allocation Priority Policies) is a declarative language for specifying policies that inform the scheduling of serverless function execution, so as to optimise performance against some user-defined goals.
APP is currently implemented as a prototype extension of the Serverless Apache OpenWhisk platform.
0.1: The first APP release introduced the APP declarative language, used to write scheduling policies in serverless platforms. It also introduced support for the OpenWhisk platform, with an alternative Load Balancer for APP scripts.
0.1-tapp: This release introduces an extension of APP, named tAPP (topology-aware Allocation Priority Policies), which adds the capability to declare topological constraints on function scheduling. An implementation on top of the OpenWhisk platform is also provided.
0.1-aapp: Another extension, dubbed aAPP (affinity-aware Allocation Priority Policies), which adds the capability to define affinity and anti-affinity constraints between two or more functions, together with an updated implementation on OpenWhisk.
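For flavor, here is a small hypothetical policy in the spirit of APP's YAML syntax; the tag, field names, and values below are indicative only, and the APP papers and repository define the exact grammar.

```yaml
# Hypothetical APP-style policy (indicative syntax, not the exact APP grammar):
# schedule invocations of a database-bound function on workers close to
# the data first, falling back to the platform's default behavior.
- db_query_function:
    - workers:
        - worker_near_db
      strategy: best_first      # try the listed workers in order
      invalidate:
        capacity_used: 80%      # skip a worker beyond 80% load
    followup: default           # otherwise, use the default policy
```

The point of the language is exactly this kind of declarative control: locality and load constraints are stated per function, instead of being hardcoded in the platform's scheduler.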
In essence, Choral developers program a choreography with the simplicity of a sequential program. Then, through the Choral compiler, they obtain a set of programs that implement the roles acting in the distributed system. The generated programs coordinate in a decentralised way, and they faithfully follow the specification of their source choreography, avoiding possible incompatibilities arising from discordant manual implementations. Programmers can use or distribute the individual implementations of each role to their customers with a higher level of confidence in their reliability. Moreover, they can reliably compose different Choral(-compiled) programs, to mix different protocols and build the topology that they need.
Choral currently interoperates with Java (support for other programming languages is planned) at three levels: 1) its syntax is a direct extension of Java (if you know Java, Choral is just a step away); 2) Choral code can reuse Java libraries; 3) the libraries generated by Choral are in pure Java, with APIs that the programmer controls and that can be used directly inside other Java projects.
Choral is a language for the programming of choreographies. A choreography is a multiparty protocol that defines how some roles (the proverbial Alice, Bob, etc.) should coordinate with each other to do something together.
Choral is designed to help developers program distributed authentication protocols, cryptographic protocols, business processes, parallel algorithms, or any other protocol for concurrent and distributed systems. At the press of a button, the Choral compiler translates a choreography into a library for each role. Developers can use the generated libraries to make sure that their programs (like a client, or a service) follow the choreography correctly. Choral makes sure that the generated libraries are compliant implementations of the source choreography.
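To illustrate what compiling a choreography to per-role programs amounts to, the self-contained Java sketch below hand-writes the two role programs that a compiler could emit for a one-message protocol; all names are illustrative and do not reflect Choral's actual generated APIs.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the idea behind Choral's compilation: one global protocol
// ("A sends a greeting to B") split into two per-role programs that
// coordinate without a central orchestrator.
public class RoleSketch {
    // A toy channel standing in for whatever transport the roles use.
    static final BlockingQueue<String> aToB = new LinkedBlockingQueue<>();

    // What a compiled role A does: perform its side of the protocol.
    static void roleA() throws InterruptedException {
        aToB.put("Hello from A");
    }

    // What a compiled role B does: the dual side of the same protocol.
    static void roleB() throws InterruptedException {
        System.out.println(aToB.take());
    }

    public static void main(String[] args) throws InterruptedException {
        Thread b = new Thread(() -> {
            try { roleB(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        b.start();
        roleA();
        b.join();
    }
}
```

In Choral, the two role programs are not hand-written: they are generated from a single choreography, which is what rules out discordant implementations by construction.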
Corinne relies on the theory of choreography automata, which is described in:
Franco Barbanera, Ivan Lanese, Emilio Tuosto: Choreography Automata. COORDINATION 2020: 86-106
Franco Barbanera, Ivan Lanese, Emilio Tuosto: Composition of choreography automata. CoRR abs/2107.06727 (2021)
Ranflood is an anti-crypto-ransomware tool that counteracts the encryption phase by flooding specific folders (e.g., the attacked location, the user's folders) with decoy files and helps users recover their files after an attack.
This action has a twofold effect.
First, it confounds the genuine files of the user with decoy files, causing the attacking ransomware to waste time on sacrificial data rather than on the victim's genuine files.
Second, the IO-intensive file-flooding activity contends with the ransomware for the victim's computing resources, further slowing down the attack of the malware.
First release of Ranflood.
At the moment, Ranflood supports three types of flooding strategies:
- random: based on the flooding of a given location with randomly generated files (a minimal sketch of this strategy follows the list).
- on-the-fly: a copy-based strategy, where the generation of files uses copies of actual files found at a flooding location. File replication adds a layer of defence, as it helps to increase the likelihood of preserving the users' files by generating additional, valid copies that might escape the ransomware. This strategy introduces a new snapshot action, which precedes the flooding one and saves a list of the valid files with their digest signatures (e.g., MD5), so that the flooding operations can use the signatures as an integrity check to skip already-encrypted files.
- shadow: another copy-based strategy that increases the efficiency of the on-the-fly strategy by preserving archival copies of the files of the user.
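As a minimal illustration of the random strategy mentioned above, the Java sketch below fills a target folder with randomly generated decoy files; it is a conceptual sketch only, and Ranflood's actual implementation (daemon architecture, batching, the copy-based strategies and their snapshots) is considerably more elaborate.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.SecureRandom;

// Conceptual sketch of "random" flooding: fill a folder with decoy files
// so that an encrypting process wastes time and IO on sacrificial data.
public class RandomFloodSketch {
    public static void flood(Path target, int files, int sizeBytes) throws IOException {
        SecureRandom rng = new SecureRandom();
        Files.createDirectories(target);
        for (int i = 0; i < files; i++) {
            byte[] noise = new byte[sizeBytes];
            rng.nextBytes(noise);                                  // decoy content
            Files.write(target.resolve("decoy-" + i + ".bin"), noise);
        }
    }

    public static void main(String[] args) throws IOException {
        flood(Paths.get("flood-target"), 100, 64 * 1024);          // 100 decoys of 64 KiB
    }
}
```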
In Service-Oriented Computing (SOC), programmers specify systems made of interacting and collaborating components by describing the “local behavior” of each of them (i.e., the way one component may interact with the others), and/or the “global behavior” of the system (i.e., the expected interaction protocols that should take place within the system).
From the language perspective, orchestration and choreography respectively address the programming of “local” and “global” behaviors. Regarding applications, where SOC usually meets Cloud Computing, we find two state-of-the-art architectural styles. Microservices are a revisitation of service-oriented architectures, where fine-grained, loosely coupled, independent services help developers assemble reusable, flexible, and scalable architectures. Serverless is a programming style and deployment technique where users program Cloud applications in terms of stateless functions, which execute and scale in proportion to inbound requests.
Communication is an essential element of modern software, yet the programming and analysis of communicating systems are difficult tasks. A reason for this difficulty is the lack of compositional mechanisms that preserve relevant communication properties.
In 12, we present a comprehensive treatment of the case of synchronous communications. We consider both symmetric synchronous communications and asymmetric synchronous communications (where senders decide independently which message should be exchanged). The composition mechanism preserves different properties under different conditions, depending on the considered type of synchronous communication. We show that, for asymmetric communications, the preservation of lock freedom requires an additional condition on gateways. Such a condition is also needed for the preservation of deadlock freedom, lock freedom, or strong lock freedom under symmetric communications, while it is not needed for the preservation of either deadlock freedom or strong lock freedom under asymmetric interactions.
We also worked on formalizations and implementations of choreographic languages.
In 11, we introduce a meta-model based on formal languages, dubbed formal choreographic languages, to study message-passing systems. Our framework allows us to generalize standard constructions from the literature and to compare them. In particular, we consider notions such as global view, local view, and projections from the former to the latter. The correctness of local views projected from global views is characterized in terms of a closure property. We consider a number of communication properties – such as (dead)lock-freedom – and give conditions on formal choreographic languages to guarantee them. Finally, we show how formal choreographic languages can capture existing formalisms; specifically we consider communicating finite-state machines, choreography automata, and multiparty session types. Notably, formal choreographic languages, differently from most approaches in the literature, can naturally model systems exhibiting non-regular behavior.
In the paradigm of choreographic programming, choreographies are programs that can be compiled into executable implementations. Choreographic programming originated primarily in the context of process calculi, with preliminary work done to establish its foundations and to experiment with implementations.
In 16, we present Choral, the first choreographic programming language based on mainstream abstractions. The key idea in Choral is a new notion of data type able to express data distribution over different participants. We use this idea to reconstruct the paradigm of choreographic programming through object-oriented abstractions. Choreographies are classes, and instances of choreographies are objects with states and behaviors implemented collaboratively by the participants. Choral comes with a compiler that, given a choreography, generates an implementation for each of its roles. These implementations are libraries in pure Java, whose types are under the control of the Choral programmer. Developers can then modularly compose these libraries in their programs, to participate correctly in choreographies. Choral is the first incarnation of choreographic programming offering such modularity, which finally connects more than a decade of research on the paradigm to practical software development. The integration of choreographic and object-oriented programming yields other powerful advantages, where the features of one paradigm benefit the other in ways that go beyond the sum of the parts. On the one hand, the high-level abstractions and static checks from the world of choreographies can be used to write concurrent and distributed object-oriented software more concisely and correctly. On the other hand, we obtain a much more expressive choreographic language from object-oriented abstractions than in previous work. This expressiveness supports writing more reusable and flexible choreographies. For example, object passing makes Choral the first higher-order choreographic programming language, whereby one can parametrize choreographies over other choreographies without the need for central coordination. We also extend method overloading to a new dimension: specialization based on data location. The integration of overloading, together with subtyping and generics, allows Choral to elegantly support user-defined communication mechanisms and middleware.
The service-oriented programming language Jolie is a long-standing project within Focus.
In 17 we introduce LEMMA2Jolie, a tool for translating domain models of microservice architectures given in LEMMA into concrete APIs of microservices in the Jolie programming language. The tool combines the state of the art for the design and implementation of microservices: developers can use Domain-Driven Design (DDD) for the construction of the domain models of a microservice architecture, and then automatically transition to a service-oriented programming language that provides native linguistic support for implementing the behavior of each microservice. In 28 we formally define and integrate into the LEMMA2Jolie tool a translation of domain and service models. As a result, LEMMA2Jolie now supports a software development process whereby one can design microservice architectures in collaboration with domain experts in LEMMA, and then automatically translate the design into programmable Jolie APIs.
Another work related to Jolie is JoT 29, a testing framework for Microservice Architectures (MSAs) based on technology agnosticism, a core principle of microservices. The main advantage of JoT is that it reduces the amount of work needed for a) testing MSAs whose services use different technology stacks, b) writing tests that involve multiple services, and c) reusing tests of the same MSA under different deployment configurations or after changing some of its components (e.g., when, for performance, one reimplements a service with a different technology). In JoT, tests are orchestrators that can both consume and offer operations from/to the MSA under test. The language for writing JoT tests is Jolie, which provides constructs that support technology agnosticism and the definition of terse test behaviors.
APP (and variants) is a platform-agnostic declarative language developed within Focus that allows serverless platforms to support multiple scheduling logics. Indeed, proprietary and open-source serverless platforms follow opinionated, hardcoded scheduling policies to deploy the functions to be executed over the available workers. Such policies may decrease the performance and the security of the application due to locality issues (e.g., functions executed by workers far from the databases to be accessed). APP helps in overcoming these limitations. However, defining the “right” scheduling policy in APP is a non-trivial task, since it often requires rounds of refinement involving knowledge of the underlying infrastructure, guesswork, and empirical testing. In 32 we present a gentle introduction to APP through an illustrative application, developed over several incremental steps, to help developers identify and specify relevant properties of their serverless architectures. In 31, we start investigating how one can use static analysis to inform APP function scheduling policies, so as to select the best-performing workers at function allocation. We substantiate our proposal by presenting a pipeline able to extract cost equations from functions' code, synthesize cost expressions through the usage of off-the-shelf solvers, and extend APP allocation policies to consider this information.
Ransomware is one of the most infamous kinds of malware, particularly the “crypto” subclass, which encrypts users' files and asks for a monetary ransom in exchange for the decryption key. Recently, crypto-ransomware grew into a scourge for enterprises and governmental institutions. The most recent and impactful cases include an oil company in the US, an international Danish shipping company, and many hospitals and health departments in Europe. Attacks result in production lockdowns, shipping delays, and even risks to human lives. To counter ransomware attacks (crypto, in particular), in 14 we propose a family of solutions, called Data Flooding against Ransomware (DFaR), tackling the main phases of detection, mitigation, and restoration, based on a mix of honeypots, resource contention, and moving target defence. These solutions hinge on detecting and countering the action of ransomware by flooding specific locations (e.g., the attack location, sensitive folders, etc.) of the victim's disk with files. Building on the DFaR approach, in 15 we present an open-source tool, called Ranflood, that implements the mitigation and restoration phases. In particular, Ranflood supports three flooding strategies, apt for different attack scenarios. At its core, Ranflood buys time for the user to counteract the attack, e.g., to access an unresponsive, attacked server and shut it down manually.
We have continued the study of reversibility started in the past years. We have provided a first attempt at a taxonomy to categorize approaches to reversible computing 30, focusing on the intrinsic features of the reversibility mechanism and abstracting away from the different underlying models and application areas. We hope that this work will shed light on the relations among the various approaches. We then concentrated on two specific approaches: causal-consistent reversibility, as used in concurrent systems, and time reversibility, as used in Markov chains. In the former, any action can be undone, provided that its consequences, if any, are undone beforehand. The latter instead stipulates that the stochastic behavior of a system remains the same when the direction of time is reversed, which supports efficient performance evaluation. As the main result, we show that causal-consistent reversibility is a sufficient condition for time reversibility 21.
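For reference, time reversibility of a stationary Markov chain amounts to the classical detailed-balance condition relating the stationary distribution \(\pi\) and the transition rates \(q\):

\[
\pi(s)\, q(s, s') \;=\; \pi(s')\, q(s', s) \qquad \text{for all states } s, s',
\]

i.e., the chain's stochastic behavior is the same when the direction of time is reversed, which is what makes reversed-chain arguments usable for efficient performance evaluation.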
In Focus, we are interested in studying quantitative aspects of higher-order programs, such as resource consumption, not necessarily only in a pure setting but also when placed in an interactive scenario, for instance that of concurrent systems. Motivated by the use of randomization as a means to make algorithms more efficient (on average), by the relatively recent advent of Bayesian languages, and by the significant development of quantum models of computation, our focus has extended towards probabilistic and quantum languages.
In addition to the analysis of complexity properties, which can be seen as properties of individual programs, Focus has also been interested, for some years now, in the study of relational properties of programs. More specifically, we are interested in how to evaluate the differences between the behaviors of distinct programs, going beyond the concept of program equivalence, but also beyond that of metrics. In this way, approximately correct program transformations can be justified, and it also becomes possible to give a measure of how close a program is to a certain specification.
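As a baseline for this line of work, a behavioral (pseudo)metric \(d\) on programs replaces equivalence by distances satisfying, for all programs \(P, Q, R\),

\[
d(P, P) = 0, \qquad d(P, Q) = d(Q, P), \qquad d(P, R) \le d(P, Q) + d(Q, R),
\]

so that \(d(P, Q) = 0\) recovers an equivalence, while small positive distances are what justify approximately correct program transformations.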
These trends continued in 2023. Below we describe the results obtained by Focus this year, divided into several strands.
In Focus, we are interested in studying notions of termination and resource analysis for non-standard computing paradigms, like those induced by the presence of randomized and quantum effects. For instance, over the last years we have developed the cost analyser eco-imp 6.1.5, which can derive bounds on the average runtime of probabilistic imperative programs in a fully automated manner. This year, we have extended the tool along two axes. First, we have extended the capabilities of the language: for instance, the new version is capable of analyzing recursively defined functions. Second, the tool has been extended from reasoning about expected runtimes to a broader set of events, through the notion of expected outcomes. All of this required significant extensions to the underlying inference machinery, described in 10.
Another very important research axis within Focus is the study of metrics and quantitative reasoning on programs, supported by tools such as relational logics and program metrics. This year there have been a number of contributions in this direction.
We first of all looked at Mardare et al.'s quantitative algebras, and at whether they can be adapted to the structures which naturally emerge from Combinatory Logic and the λ-calculus.
We then introduced contextual behavioral metrics (CBMs) as a novel way of measuring the discrepancy in behavior between processes in a concurrent scenario, taking into account both quantitative aspects and contextual information. This way, process distances by construction take the environment into account: two (non-equivalent) processes may still exhibit very similar behavior in some contexts, e.g., when certain actions are never performed. We first show how CBMs capture many well-known notions of equivalence and metric, including Larsen's environment-parametrized bisimulation. We then study compositional properties of CBMs with respect to some common process algebraic operators, namely prefixing, restriction, non-deterministic sum, parallel composition and replication 26.
Finally, we also studied higher-order logic and its role in quantitative reasoning. More specifically, we introduce a variation on Barthe et al.’s higher-order logic in which formulas are interpreted as predicates over open rather than closed objects. This way, concepts which have an intrinsically functional nature, like continuity, differentiability, or monotonicity, can be expressed and reasoned about in a very natural way, following the structure of the underlying program. We give open higher-order logic in distinct flavors, and in particular in its relational and local versions, the latter being tailored for situations in which properties hold only in part of the underlying function’s domain of definition 24.
Another topic of interest in Focus is the study of abstract machines for the implementation of high-level languages and in particular the analysis of their performance. In the last years, we have been concerned with the study of abstract machines derived from Girard's Geometry of Interaction. In a joint work between Dal Lago and Vanoni, we study one of the two formulations of the interaction abstract machine, namely that obtained from the so-called “boring” translation of intuitionistic logic into linear logic. We prove the correctness of the resulting call-by-name machine, at the same time establishing an improved bisimulation with Krivine’s abstract machine. The proof makes essential use of the definition of a novel relational property linking configurations of the two machines 27. This turned out to be a surprising fact, because the “boring” translation is well-known to be related to call-by-value evaluation.
In the realm of quantum computing, circuit description languages represent a valid alternative to traditional QRAM-style languages. They indeed allow for finer control over the output circuit, without sacrificing flexibility or modularity. We introduce in 23 a generalization of the paradigmatic lambda-calculus Proto-Quipper-M, which models the core features of the quantum circuit description language Quipper. The extension, called Proto-Quipper-K, is meant to capture a very general form of dynamic lifting. This is made possible by the introduction of a rich type-and-effect system in which not only computations, but also the very types are effectful. The main results we give for the introduced language are the classic type soundness results, namely subject reduction and progress.
In the area of qualitative semantics, during the past year our efforts have gone mainly into "unifying semantics", which has been our major research line in the past few years.
In particular, we have deepened the study of the comparison between different semantic models.
To celebrate the 30th edition of EXPRESS (Expressiveness in Concurrency) and the 20th edition of SOS (Structural Operational Semantics), we have produced an overview of how session types can be expressed in a type theory for the standard π-calculus.
We designed and tested an interdisciplinary training module on cryptography 13 for prospective STEM teachers that leveraged some “boundary objects” between Math and CS (e.g., adjacency matrices, graphs, computational complexity, factoring) in an important social context (the debate on the benefits and risks of end-to-end cryptography). The module proved useful in making students mobilize concepts, methods, and practices of the two disciplines and making them move between semiotic representations of the interdisciplinary objects involved.
We co-designed with teachers 22 a learning module to teach iteration to second graders using a visual programming environment and following the Use-Modify-Create methodology. The co-designed learning module was piloted with three second-grade classes. Sharing the different perspectives of researchers and teachers improved the quality of the resulting learning module and constituted a very significant professional development opportunity for both teachers and researchers.
We studied 20 the problem-solving ability of GPT-3 on tasks proposed in the Bebras challenge. GPT-3 was able to produce a majority of correct answers in about one-third of the Bebras tasks. It provided explanations that sounded convincing but were often logically wrong. The system was good at applying procedures, but quite bad at synthesis or logically complex tasks.
Giallorenzo co-leads a three-year project collaboration, called “Ranflood”, started in July 2021, between the “Regional Environmental Protection and Energy Agency” of Emilia-Romagna (ARPAE Emilia-Romagna) and the “Department of Computer Science and Engineering” (DISI) at the University of Bologna. The collaboration regards the development of techniques and software to combat the spread of malware by exploiting resource contention.
ReGraDe-CS is a Marie Curie Postdoctoral Fellowship that started in December 2023, with a duration of two years. The fellow is Vikraman Choudhury, supervised by Ivan Lanese. The project tackles gray debugging of concurrent systems. Debugging concurrent systems is notoriously hard. Reversible causal-consistent debugging and replay allow one to log a faulty execution in a production environment and replay it in the debugger. There, the execution can be explored backwards and forwards, following causality links from the visible misbehavior to the bug causing it. ReGraDe-CS will extend the approach to gray debugging, namely the debugging of systems where only part of the source code is accessible (e.g., because the system invokes external services such as Google Maps).
BehAPI (Behavioural Application Program Interfaces) is a European H2020-MSCA-RISE-2017 project, running in the period March 2018 – February 2024. The topic of the project is behavioral types, as a suite of technologies that formalize the intended usage of API interfaces. Indeed, currently APIs are typically flat structures, i.e., sets of service/method signatures specifying the expected service parameters and the kind of results one should expect in return. However, correct API usage also requires the individual services to be invoked in a specific order. Despite its importance, the latter information is often either omitted or stated informally via textual descriptions. The expected benefits of behavioral types include guarantees such as service compliance, deadlock freedom, dynamic adaptation in the presence of failure, load balancing, etc. The project aims to bring the existing prototype tools based on these technologies to mainstream programming languages and development frameworks used in industry.
FREEDA (Failure-Resilient, Energy-aware, and Explainable Deployment of microservice-based Applications over Cloud-IoT infrastructures) is a 24-month PRIN PE6 project that started in October 2023.
The evolution of Cloud computing, driven by the demands of smart connected devices, calls for a transition to pervasive distributed environments at the network's Edge. The increasing use of Microservice-based Applications (MSAs) in enterprise settings, along with the expansion of Cloud-IoT infrastructures, requires careful deployment planning. FREEDA aims to facilitate comprehensive MSA deployment over Cloud-IoT infrastructures by analyzing deployment requirements, considering factors like MSA complexity, failure resilience, energy consumption, and compliance with sustainable IT standards. The system uses constraint reasoning to effectively balance conflicting deployment requirements, and employs continuous reasoning to adapt quickly to changes in the deployed MSA and in the Cloud-IoT infrastructure.
DCore (Causal debugging for concurrent systems) is an ANR project that started in March 2019 and will end in March 2024.
The overall objective of the project is to develop a semantically well-founded, novel form of concurrent debugging, which we call “causal debugging”. Causal debugging will comprise and integrate two main engines: (i) a reversible execution engine that allows programmers to backtrack and replay a concurrent or distributed program execution and (ii) a causal analysis engine that allows programmers to analyze concurrent executions to understand why some desired program properties could be violated.
PROGRAMme (What is a program? Historical and philosophical perspectives) is an ANR project that started in October 2017 and finished in October 2023 (a one-year extension was granted).
The aim of this project is to develop a coherent analysis and pluralistic understanding of the notion of “computer program” and of its implications for theory and practice.
PPS (Probabilistic Programming Semantics) is an ANR PCR project that started in January 2020 and will finish in December 2024.
Probabilities are essential in Computer Science. Many algorithms use probabilistic choices for efficiency or convenience and probabilistic algorithms are crucial in communicating systems. Recently, probabilistic programming, and more specifically, functional probabilistic programming, has become crucial in various works in Bayesian inference and Machine Learning. Motivated by the rising impact of such probabilistic languages, the aim of this project is to develop formal methods for probabilistic computing (semantics, type systems, logical frameworks for program verification, abstract machines etc.) to systematize the analysis and certification of functional probabilistic programs.