DIVERSE's research agenda targets core values of software engineering.
In this fundamental domain, we develop models, methodologies and theories to address the major challenges raised by the emergence of several forms of diversity in the design, deployment and evolution of software-intensive systems.
Software diversity has emerged as an essential phenomenon in all the application domains covered by our industrial partners. These domains range from complex systems of systems (addressed in collaboration with Thales, Safran, CEA and DGA) and instrumentation and control (addressed with EDF) to pervasive combinations of the Internet of Things and the Internet of Services (addressed with TellU and Orange) and tactical information systems (addressed in collaboration with civil security services).
Today these systems all seem radically different, but we envision a strong convergence of the scientific principles that underpin their construction and validation, bringing forward sound and reliable methods for the design of flexible and open, yet dependable, systems.
Flexibility and openness are both critical and challenging properties of the software layer, which must deal with the following four dimensions of diversity: diversity of languages, used by the stakeholders involved in the construction of these systems; diversity of features, required by the different customers; diversity of runtime environments, in which software has to run and adapt; and diversity of implementations, which are necessary for resilience through redundancy.
In this context, the central software engineering challenge consists in handling diversity from variability in requirements and design to heterogeneous and dynamic execution environments.
In particular, this requires considering that the software system must adapt, in unpredictable yet valid ways, to changes in the requirements as well as in its environment.
Conversely, explicitly handling diversity is a great opportunity to allow software to spontaneously explore alternative design solutions, and to mitigate security risks.
Concretely, we want to provide software engineers with the following abilities:
The major scientific objective that we must achieve to provide such mechanisms for software engineering is summarized below:
Scientific objective for DIVERSE: To automatically compose and synthesize software diversity from design to runtime to address unpredictable evolution of software-intensive systems
Software product lines and associated variability modeling formalisms represent an essential aspect of software diversity, which we already explored in the past, and this aspect stands as a major foundation of DIVERSE's research agenda. However, DIVERSE also exploits other foundations to handle new forms of diversity: type theory and models of computation for the composition of languages; distributed algorithms and pervasive computation to handle the diversity of execution platforms; functional and qualitative randomized transformations to synthesize diversity for robust systems.
Applications are becoming more complex and the demand for faster development is increasing. In order to better adapt to the unbridled evolution of requirements in markets where software plays an essential role, companies are changing the way they design, develop, secure and deploy applications, by relying on:
These trends are set to continue, all the while with a strong concern about the security properties of the produced and distributed software.
The numbers in the examples below help to understand why this evolution of modern software engineering represents a change of scale:
The DIVERSE research project is working and evolving in the context of this acceleration.
We are active at all stages of the software supply chain.
The software supply chain covers all the activities and stakeholders involved in software production and delivery.
All these activities and stakeholders have to be smartly managed together as part of an overall strategy.
The goal of supply chain management (SCM) is to meet customer demands with the most efficient use of resources possible.
In this context, DIVERSE is particularly interested in the following research questions:
Model-Driven Engineering (MDE) aims at reducing the accidental complexity associated with developing complex software-intensive systems (e.g., use of abstractions of the problem space rather than abstractions of the solution space) 131. It provides DIVERSE with solid foundations to specify, analyze and reason about the different forms of diversity that occur throughout the development life cycle. A primary source of accidental complexity is the wide gap between the concepts used by domain experts and the low-level abstractions provided by general-purpose programming languages 103. MDE approaches address this problem through modeling techniques that support separation of concerns and automated generation of major system artifacts from models (e.g., test cases, implementations, deployment and configuration scripts). In MDE, a model describes an aspect of a system and is typically created or derived for specific development purposes 86. Separation of concerns is supported through the use of different modeling languages, each providing constructs based on abstractions that are specific to an aspect of a system. MDE technologies also provide support for manipulating models, for example, support for querying, slicing, transforming, merging, and analyzing (including executing) models. Modeling languages are thus at the core of MDE, which participates in the development of a sound Software Language Engineering, including a unified typing theory that integrates models as first class entities 133.
Incorporating domain-specific concepts and a high-quality development experience into MDE technologies can significantly improve developer productivity and system quality. Since the late nineties, this realization has led to work on MDE language workbenches that support the development of domain-specific modeling languages (DSMLs) and associated tools (e.g., model editors and code generators). A DSML provides a bridge between the field in which domain experts work and the implementation (programming) field. Domains in which DSMLs have been developed and used include, among others, automotive, avionics, and cyber-physical systems. A study performed by Hutchinson et al. 108 indicates that DSMLs can pave the way for wider industrial adoption of MDE.
More recently, the emergence of new classes of systems that are complex and operate in heterogeneous and rapidly changing environments raises new challenges for the software engineering community. These systems must be adaptable, flexible, reconfigurable and, increasingly, self-managing. Such characteristics make systems more prone to failure when running and thus the development and study of appropriate mechanisms for continuous design and runtime validation and monitoring are needed. In the MDE community, research is focused primarily on using models at the design, implementation, and deployment stages of development. This work has been highly productive, with several techniques now entering a commercialization phase. As software systems are becoming more and more dynamic, the use of model-driven techniques for validating and monitoring runtime behavior is extremely promising 117.
While the basic vision underlying Software Product Lines (SPL) can
probably be traced back to David Parnas' seminal article 124 on
the Design and Development of Program Families, it is only quite recently that
SPLs have started emerging as a paradigm shift towards modeling and developing
software system families rather than individual
systems 121. SPL engineering embraces the ideas of mass
customization and software reuse. It focuses on the means of efficiently
producing and maintaining multiple related software products, exploiting what
they have in common and managing what varies among them.
Several definitions of the software product line concept can be found
in the research literature. Clements et al. define it as a set of
software-intensive systems sharing a common, managed set of features that
satisfy the specific needs of a particular market segment or mission and are
developed from a common set of core assets in a prescribed way
122. Bosch provides a different definition 92:
A SPL consists of a product line architecture and a set of reusable
components designed for incorporation into the product line architecture. In
addition, the SPL consists of the software products developed using the
mentioned reusable assets. In spite of the similarities, these definitions
provide different perspectives of the concept: market-driven, as seen
by Clements et al., and technology-oriented for Bosch.
SPL engineering is a process focusing on capturing the commonalities
(assumptions true for each family member) and variability
(assumptions about how individual family members differ) between several
software products 98. Instead of describing a single software
system, a SPL model describes a set of products in the same domain. This is
accomplished by distinguishing between elements common to all SPL members, and
those that may vary from one product to another. Reuse of core assets, which
form the basis of the product line, is key to productivity and quality
gains. These core assets extend beyond simple code reuse and may include the
architecture, software components, domain models, requirements statements,
documentation, test plans or test cases.
The SPL engineering process consists of two major steps:
Central to both processes is the management of variability across
the product line 105. In common language use, the term
variability refers to the ability or the tendency to
change. Variability management is thus seen as the key feature that
distinguishes SPL engineering from other software development approaches 93. It is increasingly regarded as the
cornerstone of SPL development, covering the entire development life cycle,
from requirements elicitation 135 to product
derivation 139 to product testing 120, 119.
Halmans et al. 105 distinguish between essential and
technical variability, especially at the requirements level. Essential
variability corresponds to the customer's viewpoint, defining what to
implement, while technical variability relates to product family engineering,
defining how to implement it. A classification based on the dimensions of
variability is proposed by Pohl et al. 126: beyond
variability in time (existence of different versions of an artifact
that are valid at different times) and variability in space
(existence of an artifact in different shapes at the same time), Pohl et al. claim that variability is important to different stakeholders and thus has
different levels of visibility: external variability is visible to
the customers while internal variability, that of domain artifacts,
is hidden from them. Other classification proposals come from Meekel et al. 114 (feature, hardware platform, performance and attributes
variability) or Bass et al. 84, who discuss variability
at the architectural level.
Central to the modeling of variability is the notion of feature,
originally defined by Kang et al. as: a prominent or distinctive user-visible
aspect, quality or characteristic of a software system or
systems 110. Based on this notion of feature, they proposed to use a
feature model to model the variability in a SPL. A
feature model consists of a feature diagram and other associated
information: constraints and dependency rules. Feature
diagrams provide a graphical tree-like notation depicting the
hierarchical organization of high level product functionalities represented
as features. The root of the tree refers to the complete system and is
progressively decomposed into more refined features (tree nodes). Relations
between nodes (features) are materialized by decomposition edges and
textual constraints. Variability can be expressed in several
ways. Presence or absence of a feature from a product is modeled using
mandatory or optional features. Features are graphically
represented as rectangles while some graphical elements (e.g., unfilled
circle) are used to describe the variability (e.g., a feature may be
optional).
Features can be organized into feature groups. Boolean operators
exclusive alternative (XOR), inclusive alternative (OR) or
inclusive (AND) are used to select one, several or all the features
from a feature group. Dependencies between features can be modeled using
textual constraints: requires (presence of a feature requires
the presence of another), mutex (presence of a feature automatically
excludes another). Feature attributes can be also used for modeling quantitative (e.g., numerical) information.
Constraints over attributes and features can be specified as well.
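As an illustration, the short Python sketch below encodes a toy feature model (the features Car, Engine, Gas, Electric and CruiseControl are hypothetical) with mandatory and optional features, an XOR group, and requires/mutex constraints, together with a validity check over a product configuration. It is only a minimal sketch of the concepts described above, not a feature modeling tool.

```python
# Minimal sketch of a feature model with a product validity check (toy example).
MANDATORY = {"Car", "Engine"}          # features present in every product
OPTIONAL = {"CruiseControl"}           # features that may be present or absent
XOR_GROUPS = [("Engine", {"Gas", "Electric"})]  # exactly one child if parent selected
REQUIRES = [("CruiseControl", "Electric")]      # textual constraint: requires
MUTEX = [("Gas", "CruiseControl")]              # textual constraint: mutual exclusion

def is_valid(config: set) -> bool:
    """Check that a set of selected features is a valid product."""
    if not MANDATORY <= config:
        return False
    for parent, children in XOR_GROUPS:
        if parent in config and len(config & children) != 1:
            return False
    if any(a in config and b not in config for a, b in REQUIRES):
        return False
    if any(a in config and b in config for a, b in MUTEX):
        return False
    return True

print(is_valid({"Car", "Engine", "Electric", "CruiseControl"}))  # True
print(is_valid({"Car", "Engine", "Gas", "CruiseControl"}))       # False (constraints violated)
```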
Modeling variability allows an organization to capture and select which version of which variant of any particular aspect is wanted in the system 93. To implement variability cheaply, quickly and safely, redoing the tedious weaving of every aspect by hand is not an option: some form of automation is needed to leverage the modeling of variability 88. Model-Driven Engineering (MDE) makes it possible to automate this weaving process 109. This requires that models are no longer informal, and that the weaving process is itself described as a program (which is in fact an executable meta-model 118) manipulating these models to produce, for instance, a detailed design that can ultimately be transformed into code, test suites 125, or other software artifacts.
Component-based software development 134 aims at providing reliable software architectures with a low cost of design. Components are now used routinely in many domains of software system design: distributed systems, user interaction, product lines, embedded systems, etc. With respect to more traditional software artifacts (e.g., object-oriented architectures), modern component models have the following distinctive features 99: explicit description of the services required from other components; indirect connections between components thanks to ports and connectors constructs 112; hierarchical definition of components (assemblies of components can define new component types); connectors supporting various communication semantics 96; and quantitative properties on the services 91.
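A minimal, tool-agnostic sketch of these concepts is given below: hypothetical components declare provided and required ports, connectors bind them, and a helper reports required services that are not yet connected. This is an illustrative sketch, not any specific component model.

```python
# Illustrative sketch of components with provided/required ports and connectors.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    provides: set = field(default_factory=set)   # services offered to other components
    requires: set = field(default_factory=set)   # services expected from other components

@dataclass
class Connector:
    client: Component    # component whose required port is bound
    supplier: Component  # component whose provided port is used
    service: str

def unresolved_requirements(components, connectors):
    """Return (component, service) pairs whose required port is not yet bound."""
    bound = {(c.client.name, c.service) for c in connectors}
    return [(c.name, s) for c in components for s in c.requires
            if (c.name, s) not in bound]

gui = Component("GUI", provides={"display"}, requires={"sensorData"})
sensor = Component("Sensor", provides={"sensorData"})
assembly = [gui, sensor]
links = [Connector(client=gui, supplier=sensor, service="sensorData")]
print(unresolved_requirements(assembly, links))  # [] -> the assembly is complete
```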
In recent years component-based architectures have evolved from static designs to dynamic, adaptive designs (e.g., SOFA 96, Palladio 89, Frascati 115). Processes for building a system using a statically designed architecture are made of the following sequential lifecycle stages: requirements, modeling, implementation, packaging, deployment, system launch, system execution, system shutdown and system removal. If for any reason architectural changes are needed after system launch (e.g., because the requirements have changed, or the implementation platform has evolved), then the design process must be re-executed from scratch (unless the changes are limited to parameter adjustments in the deployed components).
Dynamic designs allow for on the fly redesign of a component based system.
A process for dynamic adaptation is able to reapply the design phases while the system is up and running, without stopping it (this is different from a stop/redeploy/start process).
Dynamic adaptation processes support chosen adaptation, when changes are planned and realized to maintain a good fit between the needs that the system must support and the way it supports them 111.
Dynamic component-based designs rely on a component meta-model that supports complex life cycles for components, connectors, service specification, etc.
Advanced dynamic designs can also take platform changes into account at runtime, without human intervention, by adapting themselves 97, 137.
Platform changes and more generally environmental changes trigger imposed adaptation, when the system can no longer use its design to provide the services it must support.
In order to support an eternal system 90, dynamic component based systems must separate architectural design and platform compatibility.
This requires support for heterogeneity, since platform evolution can be partial.
The Models@runtime paradigm denotes a model-driven approach aiming at taming the complexity of dynamic software systems. It basically pushes the idea of reflection one step further by considering the reflection layer as a real model: “something simpler, safer or cheaper than reality to avoid the complexity, danger and irreversibility of reality” 129. In practice, component-based (and/or service-based) platforms offer reflection APIs that make it possible to introspect the system (to determine which components and bindings are currently in place) and to adapt it dynamically (by applying CRUD operations on these components and bindings). While some of these platforms offer rollback mechanisms to recover after an erroneous adaptation, the idea of Models@runtime is to prevent the system from actually enacting an erroneous adaptation. In other words, the “model at run-time” is a reflection model that can be uncoupled (for reasoning, validation, simulation purposes) and automatically resynchronized.
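The following Python sketch illustrates the idea under simplifying assumptions (the RunningSystem class and its reflection API are hypothetical stand-ins for a real component platform): adaptations are computed and validated on an uncoupled copy of the reflection model, and only valid target models are resynchronized with the running system.

```python
# Hedged sketch of Models@runtime: reason on a copy of the reflection model,
# validate it, and only then resynchronize it with the running system.
import copy

class RunningSystem:
    """Stand-in for a component platform exposing a reflection API (hypothetical)."""
    def __init__(self):
        self.components = {"logger": "v1", "store": "v1"}

    def introspect(self):
        return dict(self.components)           # current architecture as a model

    def enact(self, target_model):
        self.components = dict(target_model)   # apply the CRUD operations

def is_valid(model):
    # Model-level validation, e.g., the invariant that a 'store' must stay deployed.
    return "store" in model and all(v is not None for v in model.values())

def adapt(system, change):
    candidate = copy.deepcopy(system.introspect())   # uncoupled reflection model
    candidate.update(change)                         # reason/simulate on the copy
    if is_valid(candidate):
        system.enact(candidate)                      # resynchronize only if safe
    return system.components

runtime = RunningSystem()
print(adapt(runtime, {"logger": "v2"}))   # accepted adaptation
print(adapt(runtime, {"store": None}))    # rejected: the running system is untouched
```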
Heterogeneity is a key challenge for modern component based systems.
Until recently, component-based techniques were designed to address a specific domain, such as embedded software for command and control, or distributed Web-based service-oriented architectures.
The emergence of the Internet of Things paradigm calls for a unified approach to component-based design techniques.
By implementing an efficient separation of concerns between platform-independent architecture management and platform-dependent implementations,
Models@runtime is now established as a key technique to support dynamic component based designs. It provides DIVERSE with an essential foundation to explore an adaptation envelope at run-time.
The goal is to automatically explore a set of alternatives and assess their relevance with respect to the considered problem.
These techniques have been applied to craft software architectures exhibiting high quality-of-service properties 104.
Multi-objective search-based techniques 101 deal with optimization problems that involve several (possibly conflicting) dimensions to optimize.
These techniques provide DIVERSE with the scientific foundations for reasoning and efficiently exploring an envelope of software configurations at run-time.
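As a minimal illustration of such multi-objective reasoning, the sketch below computes the Pareto front of a handful of hypothetical configurations measured on two objectives to minimize (latency and energy); real approaches rely on dedicated search algorithms rather than exhaustive pairwise comparison.

```python
# Minimal sketch: keep the Pareto-optimal trade-offs between two objectives to minimize.
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(configs):
    """configs: dict name -> (latency, energy). Return the non-dominated names."""
    return [n for n, obj in configs.items()
            if not any(dominates(o, obj) for m, o in configs.items() if m != n)]

candidates = {                 # hypothetical measurements (ms, J)
    "cfgA": (120, 3.0),
    "cfgB": (90, 4.5),
    "cfgC": (150, 2.0),
    "cfgD": (130, 3.5),        # dominated by cfgA
}
print(pareto_front(candidates))  # ['cfgA', 'cfgB', 'cfgC']
```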
Validation and verification (V&V) theories and techniques provide the means to assess the validity of a software system with respect to a specific correctness envelope. As such, they form an essential element of DIVERSE's scientific background. In particular, we focus on model-based V&V in order to leverage the different models that specify the envelope at different moments of the software development lifecycle.
Model-based testing consists in analyzing a formal model of a system (e.g., activity diagrams, which capture high-level requirements about the system, statecharts, which capture the expected behavior of a software module, or a feature model, which describes all possible variants of the system) in order to generate test cases that will be executed against the system. Model-based testing 136 mainly relies on model analysis, constraint solving 100 and search-based reasoning 113. DIVERSE leverages in particular the applications of model-based testing in the context of highly-configurable systems 138 and interactive systems 116 as well as recent advances based on diversity for test case selection 107.
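The sketch below illustrates one such diversity-based idea under simplifying assumptions: test configurations are encoded as feature sets and greedily selected so as to maximize their mutual Jaccard distance. It is an illustrative sketch, not the selection technique of the cited work.

```python
# Hedged sketch of diversity-based test selection over configurations.
def jaccard_distance(a, b):
    union = a | b
    return 1 - len(a & b) / len(union) if union else 0.0

def select_diverse(tests, budget):
    """Greedily pick the test cases that are most dissimilar to those already selected."""
    selected = [tests[0]]
    while len(selected) < min(budget, len(tests)):
        best = max((t for t in tests if t not in selected),
                   key=lambda t: min(jaccard_distance(t, s) for s in selected))
        selected.append(best)
    return selected

suite = [frozenset({"Cache"}), frozenset({"Cache", "Compression"}),
         frozenset({"Encryption"}), frozenset({"Cache", "Encryption"})]
print(select_diverse(suite, budget=2))  # two maximally dissimilar configurations
```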
Nowadays, it is possible to simulate various kinds of models. Existing tools range from industrial tools such as Simulink, Rhapsody or Telelogic to academic approaches like Omega 123, or Xholon. All these simulation environments operate on homogeneous environment models. However, to handle diversity in software systems, we also leverage recent advances in heterogeneous simulation. Ptolemy 95 proposes a common abstract syntax, which represents the description of the model structure. These elements can be decorated using different directors that reflect the application of a specific model of computation on the model element. Metropolis 85 provides modeling elements amenable to semantically equivalent mathematical models. Metropolis offers a precise semantics flexible enough to support different models of computation. ModHel'X 106 studies the composition of multi-paradigm models relying on different models of computation.
Model-based testing and simulation are complemented by runtime fault-tolerance through the automatic generation of software variants that can run in parallel, to tackle the open nature of software-intensive systems. The foundations in this case are the seminal work about N-version programming 83, recovery blocks 128 and code randomization 87, which demonstrated the central role of diversity in software to ensure runtime resilience of complex systems. Such techniques rely on truly diverse software solutions in order to provide systems with the ability to react to events, which could not be predicted at design time and checked through testing or simulation.
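A minimal sketch of the N-version idea is shown below: several independently written (hypothetical) variants of the same specification run on the same input, and a majority vote masks a faulty variant.

```python
# Minimal sketch of N-version programming with majority voting (toy variants).
from collections import Counter

def variant_a(x):  # straightforward implementation
    return sum(range(1, x + 1))

def variant_b(x):  # closed-form implementation
    return x * (x + 1) // 2

def variant_c(x):  # a deliberately faulty variant, to show masking by the vote
    return x * (x + 1) // 2 + (1 if x == 10 else 0)

def n_version(x, variants=(variant_a, variant_b, variant_c)):
    results = [v(x) for v in variants]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= len(results) // 2:
        raise RuntimeError("no majority: the fault could not be masked")
    return value

print(n_version(10))  # 55: the faulty variant is outvoted
```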
The rigorous, scientific evaluation of DIVERSE's contributions is an essential aspect of our research methodology. In addition to theoretical validation through formal analysis or complexity estimation, we also aim at applying state-of-the-art methodologies and principles of empirical software engineering. This approach encompasses a set of techniques for the sound validation of contributions in the field of software engineering, ranging from statistically sound comparisons of techniques and large-scale data analysis to interviews and systematic literature reviews 132, 130. Such methods have been used for example to understand the impact of new software development paradigms 94. Experimental design and statistical tests represent another major aspect of empirical software engineering. Addressing large-scale software engineering problems often requires the application of heuristics, and it is important to understand their effects through sound statistical analyses 82.
DIVERSE explores Software Diversity.
Leveraging our strong background on Model-Driven Engineering, and our broad expertise in several related fields (programming languages, distributed systems, GUI, machine learning, security...), we explore tools and methods to embrace the inherent diversity in software engineering, from the stakeholders and underlying tool-supported languages involved in the software system life cycle, to the configuration and evolution space of modern software systems, and the heterogeneity of the targeted execution platforms. Hence, we organize our research directions according to three axes (cf. Fig. 1):
The disruptive design of new, complex systems requires a high degree of flexibility in the communication between many stakeholders, often limited by the silo-like structure of the organization itself (cf. Conway’s law). To overcome this constraint, modern engineering environments aim to: (i) better manage the necessary exchanges between the different stakeholders; (ii) provide a unique and usable place for information sharing; and (iii) ensure the consistency of the many points of view.
Software languages are the key pivot between the diverse stakeholders involved, and the software systems they have to implement.
Domain-Specific (Modeling) Languages enable stakeholders to address the diverse concerns through specific points of view, and their coordinated use is essential to support the socio-technical coordination across the overall software system life cycle.
Our perspectives on Software Language Engineering over the next period are presented in Figure 2 and detailed in the following paragraphs.
Providing rich and adequate environments is key to the adoption of domain-specific languages. In particular, we focus on tools that support model and program execution. We explore the foundations to define the required concerns in language specification, and systematic approaches to derive environments (e.g., IDE, notebook, design labs) including debuggers, animators, simulators, loggers, monitors, trade-off analysis, etc.
IDEs are indispensable companions to software languages. They are increasingly turning towards Web-based platforms, heavily relying on cloud infrastructures and forges. Since all language services require different computing capacities and response times (to guarantee a user-friendly experience within the IDE) and use shared resources (e.g., the program), we explore new architectures for their modularization and systematic approaches for their individual deployment and dynamic adaptation within an IDE. To cope with the ever-growing number of programming languages, manufacturers of Integrated Development Environments (IDEs) have recently defined protocols as a way to use and share multiple language services in language-agnostic environments. These protocols rely on a proper specification of the services that are commonly found in the tool support of general-purpose languages, and define a fixed set of capabilities to offer in the IDE. However, new languages regularly appear offering unique constructs (e.g., DSLs), which are supported by dedicated services to be offered as new capabilities in IDEs. This trend leads to the multiplication of new protocols, hard to combine and possibly incompatible (e.g., overlap, different technological stacks). Beyond the proposition of specific protocols, we will explore an original approach for specifying language protocols and configuring IDEs with such protocol specifications. IDEs went from directly supporting languages to supporting protocols, and we envision the next step: IDE as code, where language protocols are created or inferred on demand and serve as support of an adaptation loop taking charge of the (re)configuration of the IDE.
Web-based and cloud-native IDEs open new opportunities to bridge the gap between the IDE and collaborative platforms, e.g., forges. In the complex world of software systems, we explore new approaches to reduce the distance between the various stakeholders (e.g., systems engineers and all those involved in specialty engineering) and to improve the interactions between them through an adapted tool chain. We aim to improve the usability of development cycles in terms of efficiency, affordance and satisfaction. We also investigate new approaches to explore and interact with the design space or other concerns such as human values or security, and provide facilities for trade-off analysis and decision making in the context of software and system designs.
As of today, polyglot development is massively popular and virtually all software systems put multiple languages to use, which not only complexifies their development, but also their evolution and maintenance. Moreover, as software is increasingly used in new application domains (e.g., data analytics, health or scientific computing), it is crucial to ease the participation of scientists, decision-makers, and more generally non-software experts. Live programming makes it possible to change a program while it is running, by propagating changes on a program code to its run-time state. This effectively bridges the gulf of evaluation between program writing and program execution: the effects a change has on the running system are immediately visible, and the developer can take immediate action. The challenges at the intersection of polyglot and live programming have received little attention so far, and we envision a language design and implementation approach to specify domain-specific languages and their coordination, and automatically provide interactive domain-specific environments for live and polyglot programming.
Over recent years, self-adaptation has become a concern for many software systems that operate in complex and changing environments. At the core of self-adaptation lies a feedback loop and its associated trade-off reasoning, to decide on the best course of action. However, existing software languages do not abstract the development and execution of such feedback loops for self-adaptable systems. Developers have to fall back to ad-hoc solutions to implement self-adaptable systems, often with wide-ranging design implications (e.g., explicit MAPE-K loop). Furthermore, existing software languages do not capitalize on monitored usage data of a language and its modeling environment. This hinders the continuous and automatic evolution of a software language based on feedback loops from the modeling environment and runtime software system. To address the aforementioned issues, we will explore the concept of Self-Adaptable Language (SAL) to abstract the feedback loops at both system and language levels.
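As background, the sketch below shows the kind of generic MAPE-K-style feedback loop (with hypothetical monitoring data and knowledge) that developers currently hand-code and that a Self-Adaptable Language would aim to abstract away; it is not the SAL mechanism itself.

```python
# Generic MAPE-K-style feedback loop, sketched with hypothetical monitoring data.
import random

KNOWLEDGE = {"max_latency_ms": 100, "replicas": 1}

def monitor():
    return {"latency_ms": random.uniform(50, 200)}   # stand-in for real probes

def analyze(metrics):
    return metrics["latency_ms"] > KNOWLEDGE["max_latency_ms"]

def plan(violation):
    return {"replicas": KNOWLEDGE["replicas"] + 1} if violation else {}

def execute(actions):
    KNOWLEDGE.update(actions)                        # stand-in for real actuation

for _ in range(3):                                   # the loop normally runs forever
    metrics = monitor()
    execute(plan(analyze(metrics)))
    print(metrics, KNOWLEDGE)
```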
Leveraging our longstanding activity on variability management for software product lines and configurable systems covering diverse scenarios of use, we will investigate over the next period the impact of such a variability across the diverse layers, incl. source code, input/output data, compilation chain, operating systems and underlying execution platforms. We envision a better support and assistance for the configuration and optimisation (e.g., non-functional properties) of software systems according to this deep variability. Moreover, as software systems involve diverse artefacts (e.g., APIs, tests, models, scripts, data, cloud services, documentation, deployment descriptors...), we will investigate their continuous co-evolution during the overall lifecycle, including maintenance and evolution. Our perspectives on spatio-temporal variability over the next period are presented in Figure 3 and detailed in the following paragraphs.
Software systems can be configured to reach specific functional goals and non-functional performance, either statically at compile time or through the choice of command line options at runtime. We observed that considering the software layer only might be a naive approach to tune the performance of the system or to test its functional correctness. In fact, many layers (hardware, operating system, input data, etc.), which are themselves subject to variability, can alter the performance or functionalities of software configurations. We call deep software variability the interaction of all variability layers that could modify the behavior or non-functional properties of a software system. Deep software variability calls for investigating how to systematically handle cross-layer configuration. The diversification of the different layers is also an opportunity to test the robustness and resilience of the software layer in multiple environments. Another interesting challenge is to tune the software for one specific execution environment. In essence, deep software variability questions the generalization of configuration knowledge.
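The toy sketch below illustrates the intuition, with hypothetical layer values and a synthetic measure function standing in for real benchmark runs: the best software configuration depends on the combination of layers, so configuration knowledge learned in one environment does not necessarily generalize.

```python
# Toy sketch of deep software variability: the same software configurations are
# "measured" across hardware and input layers, all of which may interact.
from itertools import product

software_opts = [{"threads": 1}, {"threads": 8}]
hardware = ["laptop", "server"]          # hypothetical layer values
inputs = ["small_input", "large_input"]

def measure(sw, hw, data):
    # Stand-in for an actual benchmark run; real measurements would go here.
    base = 10 if data == "small_input" else 100
    speedup = sw["threads"] if hw == "server" else min(sw["threads"], 2)
    return base / speedup

for sw, hw, data in product(software_opts, hardware, inputs):
    print(sw, hw, data, "->", round(measure(sw, hw, data), 1), "s")
```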
Nowadays, software development has become more and more complex, involving various artefacts, such as APIs, tests, models, scripts, data, cloud services, documentation, etc., and embedding millions of lines of code (LOC). Recent evidence highlights continuous software evolution based on thousands of commits, hundreds of releases, all done by thousands of developers. We focus on the following essential backbone dimensions in software engineering: languages, models, APIs, tests and deployment descriptors, all revolving around software code implementation. We will explore the foundations of a multidimensional and polyglot co-evolution platform, and will provide a better understanding with new empirical evidence and knowledge.
The production and delivery of modern software systems involves the integration of diverse dependencies and continuous deployment on diverse execution platforms in the form of large distributed socio-technical systems.
This leads to new software architectures and programming models, as well as complex supply chains for final delivery to system users.
In order to boost cybersecurity, we want to provide strong support to software engineers and IT teams in the development and delivery of secure and resilient software systems, i.e., systems able to resist or recover from cyberattacks.
Our perspectives on DevSecOps and Resilience Engineering over the next period are presented in Figure 4 and detailed in the following paragraphs.
Continuous integration and deployment pipelines are processes implementing complex software supply chains. We envision an explicit and early consideration of security properties in such pipelines to help in detecting vulnerabilities. In particular, we integrate the security concern in Model-Based System Analysis (MBSA) approaches, and explore guidelines, tools and methods to drive the definition of secure and resilient architectures. We also investigate resilience at runtime through frameworks for autonomic computing and data-centric applications, both for the software systems and the associated deployment descriptors.
Dependency management, Infrastructure as Code (IaC) and DevOps practices open opportunities to analyze complex supply chains. We aim at providing relevant metrics to evaluate and ensure the security of such supply chains, advanced assistants to help in specifying the corresponding pipelines, and new approaches to optimize them (e.g., software debloating, scalability...).
We study how supply chains can actively leverage software variability and diversity to increase cybersecurity and resilience.
In order to produce secure and resilient software systems, we explore new secure-by-design foundations that integrate security concerns as first class entities through a seamless continuum from the design to the continuous integration and deployment. We explore new models, architectures, inter-relations, and static and dynamic analyses that rely on explicitly expressed security concerns to ensure a secure and resilient supply chain. We lead research on automatic vulnerability and malware detection in modern supply chains, considering the various artefacts either as white boxes enabling source code analysis (to avoid accidental vulnerabilities or intentional ones or code poisoning), or as black boxes requiring binary analysis (to find malware or vulnerabilities). We also conduct research activities in dependencies and deployment descriptors security analysis.
Information technology affects all areas of society. The need to develop software systems is therefore present in a huge number of application domains. One of the goals of software engineering is to apply a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software whatever the application domain.
As a result, the team covers a wide range of application domains and never refrains from exploring a particular field of application. Our primary expertise is in complex, heterogeneous and distributed systems. While we historically collaborated with partners in the field of systems engineering, it should be noted that for several years now, we have investigated several new areas in depth:
We share the vision that reducing the environmental footprint of research activities is crucial for promoting sustainability within academic and scientific communities. Here are some examples of actions that we promote within the team:
We encourage virtual seminars (e.g., the creation of the EDT Community (cf. https://edt.community) on the engineering of digital twins) and meetings (not conferences) to reduce the need for long-distance travel. When travel is necessary, we try to opt for modes of transportation with lower carbon footprints, such as trains. We also note that INRIA should improve its booking system, which for example does not offer trains to London, and should offer per diem reimbursements that cover actual costs (e.g., in Amsterdam even the travel agency is unable to propose hotels within the budget), so that people can stay longer and work with colleagues when they do have to travel.
We try to engage students in the field through educational outreach: we raise awareness about the importance of environmental sustainability within research communities through educational programs and seminars (we organise ICT4S this year as a joint event with the GDRGPL days). We encourage students to incorporate sustainable practices into their work. We have also started to produce scientific results on the impact of software development practices on environmental sustainability. Quentin Perez has been hired as a new faculty member on this research topic.
The DiverSE project-team initiated several research activities at the crossroads of sustainability and software engineering. In particular, the research challenges are twofold: i) GreenIT, and more specifically how to measure the energy consumption of software all along the development life cycle and the DevOps pipelines, and ii) IT for green, more specifically the engineering of digital twins either to optimize and reconfigure, or to support informed decisions in tradeoff analysis and design space exploration. In this context, the project-team organized in 2023 the international conference on Information and Communications Technology for Sustainability (ICT4S), with not only a research program, but also a so-called OFF! Program, which complements the research program with a set of satellite events bringing together researchers, practitioners, decision and policy makers, artists, students and the general public. It proposed various kinds of events on campus as well as in pubs downtown. In particular, the OFF! Program included general keynotes, panels, debates, art performances, etc.
Moreover, the DiverSE project-team is currently exploring several research axes related to social and environmental challenges, all in a pluri-disciplinary context. In particular, the team is involved in both: i) collaboration with environmental sciences and sociology on the use of climate change scientific models for decision-makers, and ii) collaboration with sociology on privacy in web applications.
The language workbench puts together the following tools, seamlessly integrated with the Eclipse Modeling Framework (EMF):
1) Melange, a tool-supported meta-language to modularly define executable modeling languages with execution functions and data, and to extend (EMF-based) existing modeling languages.
2) MoCCML, a tool-supported meta-language dedicated to the specification of a Model of Concurrency and Communication (MoCC) and its mapping to a specific abstract syntax and associated execution functions of a modeling language.
3) GEL, a tool-supported meta-language dedicated to the specification of the protocol between the execution functions and the MoCC to support the feedback of the data as well as the callback of other expected execution functions.
4) BCOoL, a tool-supported meta-language dedicated to the specification of language coordination patterns to automatically coordinate the execution of, possibly heterogeneous, models.
5) Monilog, an extension for monitoring and logging executable domain-specific models.
6) Sirius Animator, an extension to the model editor designer Sirius to create graphical animators for executable modeling languages.
The HyperAST is an AST structured as a Directed Acyclic Graph (DAG), similar to the Merkle DAG used in Git. A HyperAST is efficiently constructed by leveraging Git and TreeSitter.
It reimplements the Gumtree algorithm in Rust while using the HyperAST as the underlying AST structure.
It implements a use-def solver, that uses a context-free indexing of references present in subtrees (each subtree has a bloom filter of contained references).
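The Python sketch below (a simplification, not the actual Rust implementation) illustrates the underlying idea of content-addressed, deduplicated subtrees: identical subtrees occurring in different files or commits are stored only once in a Merkle-DAG-like node store.

```python
# Sketch of a content-addressed (hash-consed) subtree store, as in a Merkle DAG.
import hashlib

STORE = {}   # content hash -> (label, tuple of child hashes)

def intern(label, children=()):
    """Store a subtree once, identified by the hash of its content and children."""
    key = hashlib.sha1(repr((label, children)).encode()).hexdigest()
    STORE.setdefault(key, (label, children))   # shared across files and commits
    return key

# Two files containing the same function body share a single subtree in the store.
body = intern("block", (intern("return"),))
file1 = intern("file", (intern("fn f", (body,)),))
file2 = intern("file", (intern("fn g", (body,)),))
print(len(STORE))   # 6 nodes instead of 8: the common subtree is deduplicated
```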
A platform for experimentation as part of the digital twins of Industry 4.0.
As part of the ANR MBDO project in conjunction with our German partners, we are creating a platform to emulate the behaviour of a factory. On the hardware side, this platform consists of a FisherTechnik base. The digital twin's software layer is built using the GEMOC platform. In 2023, we worked mainly on the specification, equipment orders and initial experiments. This platform will be further developed in 2024.
Finding better ways to handle software complexity (both inherent and accidental) is the holy grail for a significant part of the software engineering community, and especially for the Model-Driven Engineering (MDE) one. To that purpose, plenty of techniques have been proposed, leading to a succession of trends in model-based software development paradigms over the last decades. While these trends seem to pop out of nowhere, we claim in 65 that most of them actually stem from trying to get a better grasp on the variability of software. We revisit the history of MDE, trying to identify the main aspect of variability each trend wanted to address when it was introduced. We conclude on what are the variability challenges of our time, including variability of data leading to machine learning of models.
Recent results in language engineering simplify the development of tool-supported executable domain-specific modelling languages (xDSMLs), including editing (e.g., completion and error checking) and execution analysis tools (e.g., debugging, monitoring and live modelling). However, such frameworks are currently limited to sequential execution traces, and cannot handle execution traces resulting from an execution semantics with a concurrency model supporting parallelism or interleaving. This prevents the development of concurrency analysis tools, like debuggers supporting the exploration of model executions resulting from different interleavings. In 41, we present a generic framework to integrate execution semantics with either implicit or explicit concurrency models, to explore the possible execution traces of conforming models, and to define strategies for helping in the exploration of the possible executions. This framework is complemented with a protocol to interact with the resulting executions and hence to build advanced concurrency analysis tools. The approach has been implemented within the GEMOC Studio. We demonstrate how to integrate two representative concurrent meta-programming approaches (MoCCML/Java and Henshin), which use different paradigms and underlying foundations to define an xDSML’s concurrency model. We also demonstrate the ability to define an advanced concurrent omniscient debugger with the proposed protocol. The work, thus, contributes key abstractions and an associated protocol for integrating concurrent meta-programming approaches in a language workbench, and dynamically exploring the possible executions of a model in the modelling workbench.
Software systems evolve more and more in complex and changing environments, often requiring runtime adaptation to best deliver their services. When self-adaptation is the main concern of the system, a manual implementation of the underlying feedback loop and trade-off analysis may be desirable. However, the required expertise and substantial development effort make such implementations prohibitively difficult when it is only a secondary concern for the given domain. In 49, we present ASOS, a metalanguage abstracting the runtime adaptation concern of a given domain in the behavioral semantics of a domain-specific language (DSL), freeing the language user from implementing it from scratch for each system in the domain. We demonstrate our approach on RobLANG, a procedural DSL for robotics, where we abstract a recurrent energy-saving behavior depending on the context. We provide formal semantics for ASOS and pave the way for checking properties such as determinism, completeness, and termination of the resulting self-adaptable language. We provide first results on the performance of our approach compared to a manual implementation of this self-adaptable behavior. We demonstrate, for RobLANG, that our approach provides suitable abstractions for specifying sound adaptive operational semantics while being more efficient.
Models play a significant role in Model-Driven Engineering (MDE) and metamodels are commonly transformed into code. Developers intensively rely on the generated code to build language services and tooling, such as editors and views which are also tested to ensure their behavior. The metamodel evolution between releases updates the generated code, and this may impact the developers’ additional, client code. Accordingly, the impacted code must be co-evolved too, but there is no guarantee of preserving its behavior correctness. In 50, we envision an automatic approach for ensuring code co-evolution correctness. It first aims to trace the tests impacted by the metamodel evolution before and after the code co-evolution, and then compares them to analyze the behavior of the code. Preliminary evaluation on two implementations of OCL and Modisco Eclipse projects showed that we can successfully trace the impacted tests automatically by selecting 738 and 412 tests, before and after co-evolution respectively, based on 303 metamodel changes. By running these impacted tests, we observed both behaviorally correct and incorrect code co-evolution.
Software languages have pros and cons, and are usually chosen accordingly. In this context, it is common to involve different languages in the development of complex systems, each one specifically tailored for a given concern. However, these languages create de facto silos, and offer little support for interoperability with other languages, be it statically or at runtime. In 56, we report on our experiment on extracting a relevant behavioral interface from an existing language, and using it to enable interoperability at runtime. In particular, we present a systematic approach to define the behavioral interface and we discuss the expertise required to define it. We illustrate our work on the case study of SciHook, a C++ library enabling the runtime instrumentation of scientific software in Python. We present how the proposed approach, combined with SciHook, enables interoperability between Python and a domain-specific language dedicated to numerical analysis, namely NabLab, and discuss overhead at runtime.
The notion of polyglot software development refers to the fact that most software projects nowadays rely on multiple languages to deal with widely different concerns, from core business concerns to user interface, security, and deployment concerns among many others. Many different wordings around this notion have been proposed in the literature, with little understanding of their differences. In 39, we propose a concise and unambiguous definition of polyglot software development including a conceptual model and its illustration on a well-known, open-source project. We further characterize the techniques used for the specification and operationalization of polyglot software development with a feature model, concentrating on polyglot programming. Finally, we outline the many challenges and perspectives raised by polyglot software development.
In this context, GraalVM and PolyNote are examples of runtimes allowing polyglot programming. However, there is a striking lack of support at design time for building and analyzing polyglot code. To the best of our knowledge, there is no uniform language-agnostic way of reasoning over multiple languages to provide seamless code analysis, since each language comes with its own form of Abstract Syntax Trees (AST). In 48, we present an approach to build a uniform yet polyglot AST over polyglot code, so that it is easier to perform global analysis. We first motivate this challenge and identify the main requirements for building a polyglot AST. We then propose a proof of concept implementation of our solutions on GraalVM's polyglot API. On top of the polyglot AST, we demonstrate the ability to implement several polyglot-specific analysis services, namely auto-completion, consistency checking, type inference, and rename refactoring. Our evaluation on three polyglot projects taken from GitHub, and involving JavaScript and Python code, shows that we can build a polyglot AST without significant overhead. We also demonstrate the usefulness of the polyglot analysis services through the provided automation, as well as their scalability.
Pull-based Development (PbD) is widely used in collaborative development to integrate changes into a project codebase. In this model, contributions are notified through Pull Request (PR) submissions. Project administrators are responsible for reviewing and integrating PRs. In the integration process, conflicts occur when PRs are concurrently opened on a given target branch and propose different modifications for the same code part. In a previous work, we proposed an approach, called IP Optimizer, to improve the Integration Process Efficiency (IPE) by prioritizing PRs. In this work 67, we conduct an empirical study on 260 open-source projects hosted by GitHub that use PRs intensively in order to quantify the frequency of conflicts in software projects and analyze how much the integration process can be improved. Our results indicate that regarding the frequency of conflicts in software projects, half of the projects have a moderate and high number of pairwise conflicts and half have a low number of pairwise conflicts or none. Furthermore, on average 18.82% of the time windows have conflicts. On the other hand, regarding how much the integration process can be improved, IP Optimizer improves the IPE in 94.16% of the time windows and the average improvement percentage is 146.15%. In addition, it improves the number of conflict resolutions in 67.16% of the time windows and the average improvement percentage is 134.28%.
LLM for programming variability. Programming variability is central to the design and implementation of software systems that can adapt to a variety of contexts and requirements, providing increased flexibility and customization. Managing the complexity that arises from having multiple features, variations, and possible configurations is known to be highly challenging for software developers. In this work, we explore how large language model (LLM)-based assistants can support the programming of variability. In 43 we report on new approaches made possible with LLM-based assistants, such as: implementing features and variations as prompts; augmenting variability out of LLM-based domain knowledge; and seamlessly implementing variability in different kinds of artefacts, programming languages, and frameworks, at different binding times (compile-time or run-time).
LLM for re-engineering variants. We are interested in the following problem: given a set of variants (Java, C, SVG, UML, state charts, etc.), how to build a configurable program (a software product line, aka SPL) that allows you to retrieve/derive them? For instance, let us say you have three variants written in Java. What would be the Java program that can be configured to retrieve them? You can do it manually, but it is error-prone and time-consuming. In 45 we explore the use of LLMs and ChatGPT for this problem.
We revisit four illustrative cases of the literature where the challenge is to migrate variants written in a different formalism (UML class diagrams, Java, GraphML, statecharts). We systematically report on our experience with ChatGPT-4, describing our strategy to prompt LLMs and documenting positive aspects but also failures. We compare the use of LLMs with a state-of-the-art approach, BUT4Reuse. While LLMs offer potential in assisting domain analysts and developers in transitioning software variants into SPLs, their intrinsic stochastic nature and restricted ability to manage large variants or complex structures necessitate a semiautomatic approach, complete with careful review, to counteract inaccuracies.
End-user customization with generative AI. Producing a variant of code is highly challenging, particularly for individuals unfamiliar with programming. In 42, we introduce a novel use of generative AI to aid end-users in customizing code. We first describe how generative AI can be used to customize code through prompts and instructions, and further demonstrate its potential in building end-user tools for configuring code. We showcase how to transform an undocumented, technical, low-level TikZ into a user-friendly, configurable, Web-based customization tool written in Python, HTML, CSS, and JavaScript and itself configurable. We discuss how generative AI can support this transformation process and traditional variability engineering tasks, such as identification and implementation of features, synthesis of a template code generator, and development of end-user configurators. We believe it is a first step towards democratizing variability programming, opening a path for end-users to adapt code to their needs.
Software Product Lines (SPLs) are families of systems that share common assets allowing disciplined software reuse. The adoption of SPLs practices has been shown to enable significant technical and economic benefits for the companies that employ them. However, successful SPLs rarely start from scratch. Instead, they usually start from a set of existing legacy systems that must undergo a well-defined re-engineering process.
Many approaches to conduct such re-engineering processes have been proposed and documented in the literature. This handbook is the result of the collective community expertise and knowledge acquired in conducting theoretical and empirical research also in partnership with industry. The topic discussed in this handbook is a recurrent and challenging problem faced by many companies. Conducting a reengineering process could unlock new levels of productivity and competitiveness. The chapter authors are all experts in different topics of the re-engineering process, which brings valuable contributions to the content of this handbook. Additionally, organizing the international workshop on REverse Variability Engineering (REVE) has contributed to this topic during the last decade. REVE has fostered research collaborations between Software Re-engineering and SPL Engineering (SPLE) communities. Thus, this handbook is also a result of our expertise and knowledge acquired from the fruitful discussions with the attendants of REVE. Our handbook aims to bring together into a single, comprehensive, and cohesive reference the wealth of experience and expertise in the area of re-engineering software intensive systems into SPLs. We cover the entire re-engineering life-cycle, from requirements gathering to maintenance and evolution tasks. Also, we provide future directions and perspectives.
We released the book "Handbook of Re-Engineering Software Intensive Systems into Software Product Lines". It is the result of a collective effort over the last 3 years. It underwent a rigorous and careful selection and edition process. The selected contributors are worldwide experts in their field, and all chapters were peer reviewed.
We also contributed with a chapter "Machine Learning for Feature Constraints Discovery" that provides an overview of methods and applications of automatically extracting unspecified constraints out of a software system (e.g., Linux, 3D printing models, video generator).
In software product line (SPL) engineering, feature models are the de facto standard for modeling variability. A user can derive products out of a base model by selecting features of interest. Doing it automatically, however, requires a realization model, which is a description of how a base model should be modified when a given feature is selected/unselected. A realization model then necessarily depends on the base metamodel, asking for ad hoc solutions that have flourished in recent years. In 47, we propose Greal, a generic solution to this problem in the form of (1) a generic declarative realization language that can be automatically composed with one or more base metamodels to yield a domain-specific realization language and (2) a product derivation algorithm applying a realization model to a base model and a resolved model to yield a derived product. We describe how, on top of Greal, we specialized a realization language to support both positive and negative variability, fit the syntax and semantics of the targeted language (BPMN) and take into account modeling practices at Airbus. We report on lessons learned of applying this approach on Program Development Plans based on business process models and discuss open problems.
We won a best paper award at the ACM/IEEE 26th International Conference on Model-Driven Engineering Languages and Systems.
A call to remove variability. Software variability is largely accepted and explored in software engineering and seems to have become a norm and a must, if only in the context of product lines. Yet, the removal of superfluous or unneeded software artefacts and functionalities is an inevitable trend. It is frequently investigated in relation to software bloat. In 44 we call the community on software variability to devise methods and tools that will facilitate the removal of unneeded variability from software systems. The advantages are expected to be numerous in terms of functional and non-functional properties, such as maintainability (lower complexity), security (smaller attack surface), reliability, and performance (smaller binaries).
Specializing the configuration space through debloating. Numerous software systems are highly configurable through runtime options (e.g., command-line parameters). Users can tune some of these options to meet various functional and non-functional requirements such as footprint, security, or execution time. However, some options are never set for a given system instance, and their values remain the same whatever the use cases of the system. In 62, we design a controlled experiment in which the system's run-time configuration space can be specialized at compile time and combinations of options can be removed on demand. We perform an in-depth study of the well-known x264 video encoder and quantify the effects of its specialization on its non-functional properties, namely binary size, attack surface, and performance, while ensuring its validity. Our exploratory study suggests that the configurable specialization of a system has statistically significant benefits on most of the analysed non-functional properties, and that these benefits depend on the number of debloated options. While our empirical results and insights show the importance of removing code related to unused run-time options, an open challenge is to further automate the specialization process.
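As a rough illustration of what such specialization means (the option names, build flags, and binary path below are hypothetical and unrelated to the actual experiment): fixing k Boolean options shrinks the reachable run-time space from 2^n to 2^(n-k) configurations, and the code handling those options becomes dead and can be debloated.

```python
# Hypothetical illustration of compile-time specialization of run-time options.
import os
import subprocess

ALL_OPTIONS = ["cabac", "deblock", "8x8dct", "weightb", "mbtree", "rc_lookahead"]
DEBLOATED = {"weightb": False, "mbtree": False}   # options never tuned in this deployment

def remaining_space(n_options: int, n_fixed: int) -> int:
    # With Boolean options, each fixed option halves the run-time configuration space.
    return 2 ** (n_options - n_fixed)

def build_specialized(src_dir: str) -> int:
    # Pass the fixed values as preprocessor constants so that the code handling
    # the corresponding options can be removed (debloated) by later passes.
    flags = " ".join(f"-DFIXED_{k.upper()}={int(v)}" for k, v in DEBLOATED.items())
    subprocess.run(["make", "-C", src_dir, f"EXTRA_CFLAGS={flags}"], check=True)
    return os.path.getsize(os.path.join(src_dir, "x264"))   # binary size, one of the studied properties

print(remaining_space(len(ALL_OPTIONS), len(DEBLOATED)))    # 16 configurations instead of 64
```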
Software engineers are acutely aware that building software is an essential but resource-intensive step in any software development process. This is especially true when building large or highly configurable systems whose vast number of configuration options results in a space explosion in the number of versions that should ideally be built and evaluated. Linux is precisely one such large and highly configurable system, with thousands of options that can be combined. A previous study showed the benefit of incremental builds, but only on small-sized configurable software systems, unlike Linux. In 78, we show preliminary results of our ongoing work on enabling efficient exploration of the Linux configuration space with incremental builds. Although incremental compilation is used post-commit in Linux, we show that building large numbers of random Linux configurations does not benefit from incremental builds. Thus, we introduce and detail PyroBuildS, our new approach to efficiently explore, with incremental builds, the very large configuration space of Linux. Much like fireworks, PyroBuildS starts from several base configurations ("rockets") and generates mutated configurations ("sparks") derived from each of the base ones. This enables exploring the configuration space with efficient incremental builds of the mutants, while keeping a good amount of diversity. We show on a total of 2520 builds that PyroBuildS does trigger synergies with the caching capabilities of Make, hence significantly decreasing build times with gains of up to 85%, while covering a diversity of 33% of the options and 15 out of 17 subsystems. Overall, individual contributors and continuous integration services can leverage PyroBuildS to efficiently augment their configuration builds, or to reduce the cost of building numerous configurations.
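As a rough illustration of the rocket/spark idea (the option names, the mutation size, and the build commands are simplified and are not those of the actual tool), one could derive mutants from a base configuration and rebuild them in the same tree so that Make only recompiles what changed:

```python
# Conceptual sketch of the PyroBuildS idea (simplified; not the tool itself).
import random
import subprocess

def sparks(rocket: dict, n: int, k: int = 2) -> list:
    """Derive n mutated configurations by flipping k random options of a base one."""
    mutants = []
    for _ in range(n):
        mutant = dict(rocket)
        for opt in random.sample(list(mutant), k):
            mutant[opt] = not mutant[opt]
        mutants.append(mutant)
    return mutants

def incremental_build(config: dict, tree: str = "linux") -> None:
    # Write the configuration and rebuild in the same tree: Make's caching then
    # recompiles only what changed, which is where the reported gains come from.
    with open(f"{tree}/.config", "w") as f:
        for opt, val in config.items():
            f.write(f"{opt}=y\n" if val else f"# {opt} is not set\n")
    subprocess.run(["make", "-C", tree, "olddefconfig"], check=True)
    subprocess.run(["make", "-C", tree, "-j8"], check=True)

base = {"CONFIG_NET": True, "CONFIG_SOUND": False, "CONFIG_USB": True,
        "CONFIG_PCI": True, "CONFIG_WIRELESS": False, "CONFIG_EXT4_FS": True}
for cfg in sparks(base, n=3):
    incremental_build(cfg)
```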
Deep software variability refers to the interaction of all external layers (hardware, operating system, compiler, versions, etc.) modifying the behavior of software. Configuring software is a powerful means to reach functional and performance goals of a system, but many layers of variability can make this difficult.
One dimension of the problem is of course that performance depends on the input data: e.g., a video fed to an encoder like x264, or a file fed to a tool like xz. To achieve good performance, users should therefore take into account both dimensions of (1) software variability and (2) input data. In 37, we detail a large study over 8 configurable systems that quantifies the existing interactions between input data and configurations of software systems. The results show that (1) inputs fed to software systems can interact with their configuration options in non-monotonous ways, significantly impacting their performance properties, and (2) input sensitivity can challenge our knowledge of software variability and question the relevance of performance prediction models for field deployment. Given these results, we call on researchers to address the problem of input sensitivity when tuning, predicting, understanding, and benchmarking configurable systems.
Owing to the significance of the input-configuration interplay, we propose solutions and methods to address the problem. In 38, we empirically evaluate how supervised and transfer learning methods can be leveraged to efficiently learn performance models based on configuration options and input data. Our study over 1,941,075 data points empirically shows that measuring the performance of configurations on multiple inputs allows one to reuse this knowledge and train performance models that are robust to changes in input data. To the best of our knowledge, this is the first domain-agnostic empirical evaluation of machine learning methods addressing the input-aware performance prediction problem.
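A minimal, self-contained sketch of the underlying idea (synthetic data, hypothetical feature names, not the models of the paper): configuration options and input properties are concatenated into a single feature vector, so one learned model predicts performance across inputs.

```python
# Sketch of an input-aware performance model on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
config = rng.integers(0, 2, size=(n, 8))      # 8 Boolean configuration options
inputs = rng.random(size=(n, 3))              # e.g. resolution, duration, bitrate of the input
# Synthetic "encoding time": options interact with input properties.
y = 2.0 * config[:, 0] * inputs[:, 0] + 0.5 * config[:, 3] + inputs[:, 1] + rng.normal(0, 0.05, n)

X = np.hstack([config, inputs])               # configuration + input features together
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("R^2 on unseen (configuration, input) pairs:", round(model.score(X_te, y_te), 2))
```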
This line of work is a fruitful collaboration between Simula and Inria through the RESIST associate team (RESIST-EA).
Business processes have to manage variability in their execution, e.g., to deliver the correct building permit in different municipalities. This variability is visible in event logs, where sequences of events are shared by the core process (building permit authorisation) but may also be specific to each municipality. To rationalise resources (e.g., derive a configurable business process capturing all municipalities' permit variants) or to debug anomalous behaviour, it is mandatory to identify to which variant a given trace belongs, and manually providing this whole mapping is labour-intensive. In 102, we experimented with variant-based mapping using supervised machine learning (ML) to identify the variants responsible for the production of a given execution trace, and demonstrated that recurrent neural networks (RNNs) work well for this task.
Building on this idea of changing the perspective and representation of the addressed problems in variability, 60 discusses the differences in feature-engineering practices between the ML community and the software variability community. While initiatives applying ML models to software variability have increased, we noticed that the representation spaces in which ML models work differ from the ones used in software variability. ML models prefer representation spaces that are continuous and differentiable, while software variability practitioners usually work with the features used to describe software configurations. These features can be heterogeneous (i.e., some may be numerical values such as integers or floats, while others may be Boolean), preventing the space from being continuous and requiring extra care when using ML models, since they may have trouble coping with heterogeneity. 60 argues that, to use deep learning models in the world of software variability, we either need to think differently and create a representation space that is continuous and differentiable, at the cost of interpretability, or stick to machine learning models that are less efficient but over which we have better control and understanding.
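To make the gap concrete, here is a minimal sketch (hypothetical option names) of how heterogeneous configuration options are typically mapped into a continuous space before feeding an ML model, losing the direct correspondence with the original options in the process:

```python
# Encoding heterogeneous configuration options into a continuous feature space.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

configs = pd.DataFrame({
    "cabac":   [True, False, True],        # Boolean option
    "preset":  ["fast", "slow", "medium"], # categorical option
    "threads": [1, 4, 8],                  # numerical option
})
encoder = ColumnTransformer([
    ("bool_and_cat", OneHotEncoder(), ["cabac", "preset"]),
    ("num", StandardScaler(), ["threads"]),
])
X = encoder.fit_transform(configs)   # continuous feature matrix usable by standard ML models
print(X.shape)
```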
With the advent of fast software evolution and multi-stage releases, temporal code analysis is becoming useful for various purposes, such as bug cause identification, bug prediction, or code evolution analysis. Temporal code analyses can consist in analyzing multiple Abstract Syntax Trees (ASTs) extracted from code evolutions, e.g., one AST per commit or release. A core feature of temporal analysis is code differencing: the computation of the so-called diff or edit script between two given versions of the code. However, jointly analyzing and computing the difference on thousands of versions of code faces scalability issues, mainly because of the cost of 1) parsing the original and evolved code into source and target ASTs, and 2) wasting resources by not reusing intermediate computation results that can be shared between versions. In 55, we detail a novel approach based on time-oriented data structures that makes code differencing scale up to large software codebases. In particular, we leverage the HyperAST, a novel representation of code histories, to propose an incremental and memory-efficient approach that lazifies GumTree, a mainstream code differencing algorithm and tool. We evaluated our approach on a curated list of 19 large software projects and compared it to GumTree. Our approach outperforms it in scalability, both in time and in memory: 1) CPU-time speedups range from x1.2 to x12.7 for the total diff computation and reach up to x226 in intermediate phases, and 2) the memory footprint is x4.5 smaller per AST node. The approach produced 99.3% of diffs identical to GumTree's.
In 26, we present BURST, a benchmarking platform for uniform random sampling techniques. With BURST, researchers have a flexible, controlled environment in which they can evaluate the scalability and uniformity of their sampling. BURST comes with an extensive, and extensible, benchmark dataset comprising 128 feature models, including challenging, real-world models of the Linux kernel. BURST takes as inputs a sampling tool, a set of feature models, and a sampling budget. It automatically translates any feature model of the set into DIMACS and invokes the sampling tool to generate the budgeted number of samples. To evaluate the scalability of the sampling tool, BURST measures the time the tool needs to produce the requested sample. To evaluate the uniformity of the produced sample, BURST integrates the state-of-the-art, proven statistical test Barbarik. We envision BURST becoming the starting point of a standardisation initiative for sampling tool evaluation. Given the huge research interest in sampling algorithms and tools, this initiative has the potential to reach and crosscut multiple research communities, including AI, ML, SAT, and SPL.
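For illustration, a schematic benchmark loop in the spirit of BURST (the sampler command line and file layout are hypothetical; the real platform additionally translates feature models into DIMACS and runs the Barbarik uniformity test on the produced samples):

```python
# Schematic scalability benchmark for a uniform random sampling tool.
import subprocess
import time

def benchmark(sampler_cmd: list, dimacs_files: list, budget: int) -> dict:
    """Measure how long a sampling tool needs to produce `budget` samples per model."""
    timings = {}
    for cnf in dimacs_files:
        start = time.perf_counter()
        subprocess.run(sampler_cmd + [cnf, "--samples", str(budget)],
                       check=True, capture_output=True)
        timings[cnf] = time.perf_counter() - start   # scalability metric
    return timings

# Example usage (hypothetical sampler binary and models):
# print(benchmark(["./my_sampler"], ["linux.dimacs", "busybox.dimacs"], budget=1000))
```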
Obtaining a relevant dataset is central to conducting empirical studies in software engineering. However, in the context of mining software repositories, the lack of appropriate tooling for large-scale mining tasks hinders the creation of new datasets. Moreover, limitations related to data sources that change over time (e.g., code bases) and the lack of documentation of extraction processes make it difficult to reproduce datasets over time. This threatens the quality and reproducibility of empirical studies. In 74, we propose a tool-supported approach facilitating the creation of large tailored datasets while ensuring their reproducibility. We leveraged all the sources feeding the Software Heritage append-only archive, which are accessible through a unified programming interface, to outline a reproducible and generic extraction process. We propose a way to define a unique fingerprint characterizing a dataset which, when provided to the extraction process, ensures that the same dataset will be extracted. We demonstrate the feasibility of our approach by implementing a prototype and show how it can help reduce the limitations researchers face when creating or reproducing datasets.
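A minimal sketch of the fingerprint idea (the field names are illustrative, not the prototype's actual schema): hashing a canonical form of every extraction parameter yields a stable identifier, so the same fingerprint always denotes the same extraction.

```python
# Computing a reproducible dataset fingerprint from the extraction parameters.
import hashlib
import json

def dataset_fingerprint(query: dict) -> str:
    canonical = json.dumps(query, sort_keys=True)       # canonical, order-independent form
    return hashlib.sha256(canonical.encode()).hexdigest()

query = {
    "source": "Software Heritage",
    "origin_filter": "https://github.com/*",
    "language": "Java",
    "snapshot_date": "2023-06-01",
}
print(dataset_fingerprint(query))   # same parameters -> same fingerprint -> same dataset
```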
Web browsers have come a long way since their inception, evolving from a simple means of displaying text documents over the network to complex software stacks with advanced graphics and network capabilities. As personal computers grew in popularity, developers jumped at the opportunity to deploy cross-platform games with centralized management and a low barrier to entry. Simply going to the right address is now enough to start a game. From text-based to GPU-powered 3D games, browser gaming has evolved to become a strong alternative to traditional console and mobile-based gaming, targeting both casual and advanced gamers. Browser technology has also evolved to accommodate more demanding applications, sometimes even supplanting functions typically left to the operating system. Today, websites display rich, computationally intensive, hardware-accelerated graphics, allowing developers to build ever-more impressive applications and games. In this work 57, we present the evolution of browser gaming and the technologies that enabled it, from the release of the first text-based games in the early 1990s to current open-world and game-engine-powered browser games. We discuss the societal impact of browser gaming and how it has allowed a new target audience to access digital gaming. Finally, we review the potential future evolution of the browser gaming industry.
In many situations, it is of interest for authentication systems to adapt to context (e.g., when the user's behavior differs from previous behavior). Hence, representing the context with appropriate and well-designed models is crucial. In 27, we provide a comprehensive overview and analysis of research work on Context Modelling for Adaptive Authentication systems (CM4AA). To this end, we pursue three goals based on the Systematic Mapping Study (SMS) and Systematic Literature Review (SLR) research methodologies. We first present an SMS to structure the research area of CM4AA (goal 1). We complement the SMS with an SLR to gather and synthesise evidence about context information and its modelling for adaptive authentication systems (goal 2). From the knowledge gained from goal 2, we determine the desired properties of the context information model and its use for adaptive authentication systems (goal 3). Motivated to find out how to model context information for adaptive authentication, we provide a structured survey of the literature to date on CM4AA and a classification of existing proposals according to several analysis metrics. We demonstrate the ability to capture a common set of contextual features that are relevant for adaptive authentication systems independently of the application domain. We emphasise that, despite the possibility of a unified framework, no standard for CM4AA exists.
Adaptive systems manage and regulate the behavior of devices or other systems using control loops that automatically adjust the value of some measured variables to match a desired set-point. These systems normally interact with physical parts or operate in physical environments, where uncertainty is unavoidable. Traditional approaches manage that uncertainty using either robust control algorithms, which consider bounded variations of the uncertain variables and worst-case scenarios, or adaptive control methods, which estimate the parameters and change the control laws accordingly. In this work 35, we propose to include the sources of uncertainty in the system models as first-class entities using random variables, in order to simulate adaptive and control systems more faithfully: random variables are used not only to represent and operate on uncertain values, but also to represent decisions based on their comparison. Two exemplar systems are used to illustrate and validate our proposal.
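As a minimal sketch of this idea (a Monte Carlo toy, not the modelling approach of the paper): an uncertain value carries a distribution, and comparing two uncertain values yields a probability that can drive a control decision.

```python
# Uncertain values as first-class entities, with probabilistic comparison.
import random

class UncertainValue:
    def __init__(self, mean: float, std: float):
        self.mean, self.std = mean, std
    def sample(self) -> float:
        return random.gauss(self.mean, self.std)
    def prob_greater_than(self, other: "UncertainValue", n: int = 10_000) -> float:
        # Monte Carlo estimate of P(self > other).
        return sum(self.sample() > other.sample() for _ in range(n)) / n

measured = UncertainValue(20.3, 0.5)     # e.g. a noisy temperature sensor
setpoint = UncertainValue(21.0, 0.1)
# The control decision is taken only if we are confident enough:
if measured.prob_greater_than(setpoint) > 0.95:
    print("decrease heating")
```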
Open-source software supply chain attacks aim at infecting downstream users by poisoning open-source packages. The common way of consuming such artifacts is through package repositories, and the development of vetting strategies to detect such attacks is ongoing research. Despite its popularity, the Java ecosystem is the least explored one in the context of supply chain attacks. In this work 36, we study simple yet effective indicators of malicious behavior that can be observed statically through the analysis of Java bytecode. We then evaluate how such indicators and their combinations perform when detecting malicious code injections. We do so by injecting three malicious payloads taken from real-world examples into the top 10 most popular Java libraries from libraries.io. We found that the analysis of strings in the constant pool and of sensitive APIs in the bytecode instructions aids the detection of malicious Java packages by significantly reducing the amount of information to inspect, thus also making manual triage possible.
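As a rough sketch of such static indicators (the sensitive-API list and thresholds below are illustrative, and a real analysis would use a proper bytecode library rather than parsing javap output), one can flag constant-pool strings and sensitive API references in a compiled class:

```python
# Toy extractor of static indicators from Java bytecode via the javap disassembler.
import re
import subprocess

SENSITIVE_APIS = ["java/lang/Runtime.exec", "java/lang/ProcessBuilder",
                  "java/net/URLClassLoader", "java/util/Base64$Decoder.decode"]

def indicators(class_file: str) -> dict:
    out = subprocess.run(["javap", "-c", "-p", "-v", class_file],
                         capture_output=True, text=True, check=True).stdout
    strings = re.findall(r"// String (.+)", out)            # string constants loaded by the code
    apis = [api for api in SENSITIVE_APIS if api in out]    # sensitive API references
    return {
        "urls": [s for s in strings if "http" in s],
        "long_strings": [s for s in strings if len(s) > 80],  # possible obfuscation / encoded payloads
        "sensitive_apis": apis,
    }
```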
In this context of supply chain attacks on open-source projects, recent work systematized the knowledge about such attacks and proposed a taxonomy in the form of an attack tree 51. We propose Risk Explorer for Software Supply Chains 36, a visualization tool that allows inspecting the taxonomy of attack vectors, their descriptions, references to real-world incidents and other literature, as well as information about associated safeguards. Since the tool is itself open source, the community can easily reference new attacks, accommodate entirely new attack vectors, or reflect the development of new safeguards. The tool is also available online 1.
Current software supply chains heavily rely on open-source packages hosted in public repositories. Given the popularity of ecosystems like npm and PyPI, malicious users started to spread malware by publishing open-source packages containing malicious code. Recent works apply machine learning techniques to detect malicious packages in the npm ecosystem. However, the scarcity of samples poses a challenge to the application of machine learning techniques in other ecosystems. Despite the differences between JavaScript and Python, the open-source software supply chain attacks targeting such languages show noticeable similarities (e.g., use of installation scripts, obfuscated strings, URLs). In this work 52, we present a novel approach that involves a set of language-independent features and the training of models capable of detecting malicious packages in npm and PyPI by capturing their commonalities. This methodology allows us to train models on a diverse dataset encompassing multiple languages, thereby overcoming the challenge of limited sample availability. We evaluate the models both in a controlled experiment (where labels of data are known) and in the wild by scanning newly uploaded packages for both npm and PyPI for 10 days. We find that our approach successfully detects malicious packages for both npm and PyPI. Over an analysis of 31,292 packages, we reported 58 previously unknown malicious packages (38 for npm and 20 for PyPI), which were consequently removed from the respective repositories.
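A hypothetical sketch of such language-independent features (the concrete feature set and thresholds of the paper differ): the same extractor flags installation hooks, URLs, and obfuscated-looking strings whether the package comes from npm or PyPI.

```python
# Toy language-independent feature extractor for a package directory.
import json
import pathlib
import re

def extract_features(package_dir: str) -> dict:
    root = pathlib.Path(package_dir)
    text = " ".join(p.read_text(errors="ignore")
                    for p in root.rglob("*")
                    if p.is_file() and p.stat().st_size < 1_000_000)
    manifest = root / "package.json"
    if manifest.exists():                                   # npm case: declared install scripts
        scripts = json.loads(manifest.read_text()).get("scripts", {})
        has_install_hook = any(k in scripts for k in ("preinstall", "install", "postinstall"))
    else:                                                   # PyPI case: a setup.py runs at install time
        has_install_hook = (root / "setup.py").exists()
    return {
        "has_install_hook": has_install_hook,
        "n_urls": len(re.findall(r"https?://\S+", text)),
        "n_long_strings": len(re.findall(r"[A-Za-z0-9+/=]{60,}", text)),  # base64-like blobs
    }
```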
The increasing popularity of certain programming languages has spurred the creation of ecosystem-specific package repositories and package managers. Such repositories (e.g., npm, PyPI) serve as public databases that users can query to retrieve packages for various functionalities, whereas package managers automatically handle dependency resolution and package installation on the client side. These mechanisms enhance software modularization and accelerate implementation. However, they have become a target for malicious actors seeking to propagate malware on a large scale. In this work 53, we show how attackers can leverage capabilities of popular package managers and languages to achieve arbitrary code execution on victim machines, thereby realizing open-source software supply chain attacks. Based on the analysis of 7 ecosystems, we identify 3 install-time and 4 runtime techniques, and we provide recommendations describing how to reduce the risk when consuming third-party dependencies. We provide proofs of concept that demonstrate the identified techniques, and we describe evasion strategies employed by attackers to circumvent detection mechanisms.
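For illustration, here is a benign proof of concept in the spirit of the install-time techniques discussed (not necessarily one of the exact techniques identified in 53): setuptools lets a source package override the install command, so arbitrary code runs at installation time, much like npm install scripts.

```python
# setup.py -- benign demonstration of install-time code execution on PyPI packages.
from setuptools import setup
from setuptools.command.install import install

class InstallHook(install):
    def run(self):
        print("code executed at install time")   # an attacker would place a payload here
        super().run()

setup(name="demo-package", version="0.0.1", cmdclass={"install": InstallHook})
```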
Serverless is a trending service model for cloud computing that shifts a lot of the complexity from customers to service providers. However, current serverless platforms mostly consider both the provider's infrastructure and the users' requests as homogeneous. This limits possibilities for the provider to leverage heterogeneity in their infrastructure to improve function response time and reduce energy consumption. We propose a heterogeneity-aware serverless orchestrator for private clouds that consists of two components: the autoscaler allocates heterogeneous hardware resources (CPUs, GPUs, FPGAs) for function replicas, while the scheduler maps function executions to these replicas. Our objective is to guarantee function response time, while enabling the provider to reduce resource usage and energy consumption. This work 54 considers a case study of a deepfake detection application relying on CNN inference. We devised a simulation environment that implements our model and a baseline Knative orchestrator, and evaluated both policies with regard to consolidation of tasks, energy consumption, and SLA penalties. Experimental results show that our platform yields substantial gains for all these metrics, with an average of 35% less energy consumed for function executions while consolidating tasks on less than 40% of the infrastructure's nodes, and more than 60% fewer SLA violations.
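A toy sketch of the heterogeneity-aware placement decision (the latency/energy profiles and the greedy policy are illustrative; the actual orchestrator also autoscales replicas and consolidates tasks):

```python
# Toy placement policy: cheapest hardware, in energy, that still meets the SLA.
PROFILES = {   # per-hardware latency (ms) and energy (J) for one CNN inference
    "cpu":  {"latency": 900, "energy": 18},
    "gpu":  {"latency": 120, "energy": 30},
    "fpga": {"latency": 200, "energy": 9},
}

def place(sla_ms: float) -> str:
    """Pick the least energy-consuming hardware that still meets the SLA."""
    feasible = {hw: p for hw, p in PROFILES.items() if p["latency"] <= sla_ms}
    if not feasible:
        return "gpu"                      # fall back to the fastest option
    return min(feasible, key=lambda hw: feasible[hw]["energy"])

print(place(300))   # 'fpga': meets the 300 ms SLA with the lowest energy
```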
We also showed that proactively caching functions from the same application on the same nodes, using adequate storage, minimizes cold starts and data movement and thus improves total response times. We evaluated this platform in a simulation environment using workload traces derived from Microsoft's Azure Functions, enriched with measurements from a deepfake detection project at the B<>com Institute of Research and Technology 79.
Currently, it is very hard for companies driven by personal data to make their applications GDPR-compliant, especially if those applications were developed before the GDPR was established. In 59, we present rgpdOS, a GDPR-aware operating system that aims to bring GDPR-compliance to every application, while requiring minimal changes to application code.
Time-to-market and continuous improvement are key success indicators to deliver for Industry 4.0 Cyber-Physical Systems (CPSs). There is thus a growing interest in adapting DevOps approaches coming from software systems to CPSs. However, CPSs are made not only of software but also of physical parts that need to be monitored at runtime. In 46, we claim that Model-Driven Engineering can facilitate DevOps for CPSs by automatically connecting a CPS design model to its runtime monitoring, in the form of a digital twin.
Gunter Mussbacher holds an Inria International Chair and visits the DIVERSE team four months per year.
Paul Temple visited the University of Namur in April, July, September, October, and December 2023.
Mathieu Acher visited KTH (Sweden) in June 2023.
Jean-Marc Jézéquel visited the University of Montreal in September 2023.
HiPEAC project on cordis.europa.eu
The objective of HiPEAC is to stimulate and reinforce the development of the dynamic European computing ecosystem that supports the digital transformation of Europe. It does so by guiding the future research and innovation of key digital, enabling, and emerging technologies, sectors, and value chains. The longer term goal is to strengthen European leadership in the global data economy and to accelerate and steer the digital and green transitions through human-centred technologies and innovations. This will be achieved via mobilising and connecting European partnerships and stakeholders to be involved in the research, innovation and development of computing and systems technologies. They will provide roadmaps supporting the creation of next-generation computing technologies, infrastructures, and service platforms.
The key aim is to support and contribute to rapid technological development, market uptake and digital autonomy for Europe in advanced digital technology (hardware and software) and applications across the whole European digital value chain. HiPEAC will do this by connecting and upscaling existing initiatives and efforts, by involving the key stakeholders, and by improving the conditions for large-scale market deployment. The next-generation computing and systems technologies and applications developed will increase European autonomy in the data economy. This is required to support future hyper-distributed applications and provide new opportunities for further disruptive digital transformation of the economy and society, new business models, economic growth, and job creation.
The HiPEAC CSA proposal directly addresses the research, innovation, and development of next generation computing and systems technologies and applications. The overall goal is to support the European value chains and value networks in computing and systems technologies across the computing continuum from cloud to edge computing to the Internet of Things (IoT).
Benoit Combemale and Gunter Mussbacher have been General co-chairs of ICT4S 2023 70, 71, organized at the University of Rennes, France.
Olivier Barais and Djamel Eddine Khelladi have co-organized "Les Journées du GDR GPL 2023".
Arnaud Blouin has co-organized the following:
Djamel Eddine Khelladi has co-organized the Models and Evolution (ME) workshop at MODELS.
Stéphanie Challita has been Student Volunteers Co-Chair at ICT4S'2023.
Quentin Perez has been Student Virtual Chair at ICT4S'2023.
Johann Bourcier has been Publicity chair at ICT4S'2023
Arnaud Blouin has been a member of the following PCs:
Olivier Barais has been a member of the following PCs:
Benoit Combemale has been a member of the following PCs:
Olivier Zendra has been a member of the PC of the 30th CE&SAR conference, CE&SAR 2023 by DGA, at ECW 2023.
Jean-Marc Jézéquel has been a member of the following PCs:
Djamel Eddine Khelladi has been a member of the following PCs:
Arnaud Blouin has served as an external reviewer for Interact 2023.
Benoit Combemale is Editor-in-Chief of the Springer-Nature International Journal on Software and Systems Modeling (SoSyM) 34, 29, 28, 32, 31, 33, 30. He is also a member of the Editorial Boards of the Springer Software Quality Journal (SQJ), the platinum open access JOT journal (former deputy editor-in-chief of that journal, 2020-2023), and the Elsevier Journal of Computer Languages (COLA).
Jean-Marc Jézéquel has been Associate Editor-in-Chief of IEEE Computer and of SoSyM, as well as a member of the Editorial Board of JSS.
Stéphanie Challita has been Assistant Editor of the Journal of Software and Systems Modeling, Springer (SoSyM).
Djamel Eddine Khelladi is guest editor of the special issue on Model-Driven Engineering for Digital Twins in the Journal of Software and Systems Modeling (SoSyM).
Arnaud Blouin has served as an external reviewer for IEEE Transactions on Software Engineering.
Stéphanie Challita has been a reviewer at Annals of Telecommunications and SoSyM.
Olivier Barais has been a reviewer at SoSyM.
Benoit Combemale has served as an external reviewer for IEEE Transactions on Software Engineering and ACM Transactions on Software Engineering and Methodology.
Djamel Eddine Khelladi has served as an external reviewer for the Journal on Software and Systems Modeling (SoSyM), IEEE Transactions on Software Engineering, and ACM Transactions on Software Engineering and Methodology.
Benoit Combemale gave a talk entitled "Expériences et défis scientifiques des jumeaux numériques" (on the experiences and scientific challenges of digital twins) for the Aristote network (21/09/2023).
Arnaud Blouin: Founding member and co-organiser of the French GDR-GPL research action on Software Engineering and Human-Computer Interaction (GL-IHM).
Jean-Marc Jézéquel has been Vice-President of Informatics Europe, and elected as the new President starting 2024.
Olivier Zendra is:
Benoit Combemale is a founding member of the GEMOC initiative, an international effort to develop techniques, frameworks, and environments to facilitate the creation, integration, and automated processing of heterogeneous modeling languages. He is currently the scientific leader of the Research Consortium GEMOC at the Eclipse Foundation.
Benoit Combemale is also a member of the steering committees of the ACM/IEEE Intl. Conference on Model-Driven Engineering Languages and Systems (member since 2023), the ACM SIGPLAN Intl. Conference on Software Language Engineering (member since 2014, and chair of the steering committee from 2018 to 2022), the Intl. Conference on Information and Communications Technology for Sustainability (member since 2022), the Modeling Language Engineering and Execution (MLE) workshop (founding member, since 2019) and the Model-Driven Engineering of Digital Twins (ModDiT) workshop (founding member, since 2021).
Arnaud Blouin: expert for the CIR agency (research tax credit, "crédit d'impôt recherche").
Olivier Barais: expert for the following calls for projects:
Olivier Barais: member of the scientific board of Pole de compétitivité Image et Réseau
Stéphanie Challita has been a member of the Conference Activities Committee (CAC) at IEEE Computer Society.
Olivier Zendra: scientific CIR/JEI expert for the MESR.
Johann Bourcier: expert for the CIR agency (research tax credit, "crédit d'impôt recherche") and reviewer for an ANR JCJC.
Olivier Barais is a new member of the CNU 27.
Olivier Zendra was a member of Inria Evaluation Committee (CE) till September 2023.
The DIVERSE team bears the bulk of the teaching on Software Engineering at the University of Rennes 1 and at INSA Rennes, for the first year of the Master of Computer Science (Project Management, Object-Oriented Analysis and Design with UML, Design Patterns, Component Architectures and Frameworks, Validation & Verification, Human-Computer Interaction, Sustainable Software Engineering) and for the second year of the MSc in Software Engineering (Model-Driven Engineering, DevOps, DevSecOps, Validation & Verification, etc.).
Jean-Marc Jézéquel, Noël Plouzeau, Olivier Barais, Benoît Combemale, Johann Bourcier, Arnaud Blouin, Aymeric Blot, Quentin Perez, Stéphanie Challita and Mathieu Acher each teach about 250 hours in these domains, for a grand total of about 2,000 hours, including several courses at IMT, ENS Rennes, and the ENSAI Rennes engineering school.
Olivier Barais is deputy director of the electronics and computer science teaching department of the University of Rennes 1.
Olivier Barais is the head of the Master in Computer Science at the University of Rennes 1.
Arnaud Blouin is in charge of industrial relationships for the computer science department at INSA Rennes and is an elected member of this department's council.
The DIVERSE team also hosts several MSc and summer trainees every year.
Benoit Combemale was in the PhD jury (reviewer) of Hamza Bourbouh (ISAE-Supaéro), "Static analyses and model checking of mixed data-flow/control-flow models for critical systems."
Arnaud Blouin was in the PhD jury (reviewer) of Philippe Schmid (University of Lille, Inria Lille, France), "Développement d'historiques de commandes avancés pour améliorer le processus d'édition numérique"
Olivier Barais was in the PhD jury (reviewer) of Romain Fouquet (University of Lille, Inria Lille, France), "Improving Web User Privacy Through Content Blocking"
Olivier Barais was in the PhD jury (reviewer) of Santiago Bragagnolo (University of Lille, Inria Lille, France), "An Holistic Approach to Migrate Industrial Legacy Systems"
Walter Rudametkin was in the PhD jury (reviewer) of Vero Sosnovik (University of Grenoble, France), "Detection and analysis of online issue and political ads".
Walter Rudametkin was in the PhD jury (reviewer) of Anne Josiane Kouam (Institut polytechnique de Paris), "Bypass Frauds in Cellular Networks: Understanding and Mitigation".
Mathieu Acher was in the PhD jury (president/examiner) of Adrien Gougeon (Université de Rennes), "Optimizing a Dynamic and Energy Efficient Network Piloting the Electrical Grid".
Mathieu Acher was an invited member of the PhD jury of César Soto Valero (KTH, Sweden), "Debloating Java Dependencies".
Olivier Zendra, as a member of the HiPEAC Vision Editorial Board, contributed to the writing of the overall HiPEAC Vision 2023 63 and led the writing of its cybersecurity chapter 69. The focus of this Vision is that we are in a race, both against time and with the rest of the world.
Indeed, technology never stands still. The last few years have once again seen rapid, profound changes across the world, both from the technological point of view – with impressive advances in artificial intelligence – and from the geopolitical point of view, where technology is increasingly seen as a strategic asset.
Different world regions are competing for leadership in several areas. Competition between the United States (US) and China in the technology and artificial intelligence (AI) domains is particularly fierce, and it is becoming more intense. This creates a threat to Europe, but at the same time an opportunity. The recent change
of ownership and leadership at Twitter is also a wake-up call for Europe. Many of the essential services the European society depends on run on platforms that are not controlled by Europe. This creates vulnerabilities in the event of conflict, comparable to European dependency on Russian energy. These are just the evolutions of the last year.
Change is taking place so rapidly that it is also having an impact on the HiPEAC Vision: updating it every two years is no longer sufficient to keep up with the speed of the evolution of computing systems. Therefore, from now on, there will be a HiPEAC Vision every year. The speed of the evolution has also inspired the editorial board to present the challenges of our community as six leadership races: for the “next web”, for AI, for innovative hardware solutions, for cybersecurity, for digital sovereignty, and for sustainability solutions.
Structurally, the HiPEAC Vision 2023 has two parts: first, a set of recommendations for the HiPEAC community at large; second, a set of articles written by experts and grouped into six chapters, each describing one "global leadership race".
In the HiPEAC Vision 2023, Olivier Zendra was also specifically in charge of the cybersecurity chapter 69, which addresses "The race for cybersecurity". In it, we explain that, after decades of digitalization spreading into every area of our lives with very little attention given to cybersecurity, information technology (IT) had essentially become an "open bar" for cybercriminals. For a few years, with a marked degradation during the peak of the COVID-19 pandemic, the news has been rife with reports of privacy breaches and cyberattacks (mainly ransomware) on companies and institutions, especially local governments and hospitals. In addition, cyberwarfare has been making the news too, especially in relation to the conflict in Ukraine.
Thus, the era of blissful ignorance and naiveté has ended. Although the wake-up call was abrupt, knowledge of these issues has expanded, and governments, and to some extent businesses, have taken the first steps to strengthen the cybersecurity frontline. However, cybersecurity is a highly competitive race between nations, and between defenders and attackers, with enormous stakes. The pervasiveness of IT provides a broad attack surface, and attacks can be economically devastating, but they can also have tangible, or even lethal, repercussions on the physical world.
Despite several highly acclaimed advances (e.g., the General Data Protection Regulation, GDPR), the EU still has a great deal of work to do in this regard, particularly to maintain its sovereignty and become a leader in the global competition. Cybersecurity is indeed a matter of both economic leadership and national sovereignty.
This chapter contains two contributions.
In the article "From cybercrime to cyberwarfare, nobody can overlook cybersecurity any more" 68, we describe the current state of IT system cybersecurity, showing how vulnerable systems are to the numerous dangers and challenges posed by cybercrime and cyberwarfare. The article goes on to present a few concrete ways to remedy the issue, whether by technical, legal, sociological, or political means. Indeed, although the EU has weaknesses, linked to its extremely high reliance on IT systems, it also has the potential to become a world leader in cybersecurity, owing to both its strong technical culture and its regulatory capabilities.
In the article "Is privacy possible in a digital world?" 64, we explain that over the last few years privacy has become a hot topic, in large part because ever more data is being collected, not only by governments but also by companies. It is often unclear for which purposes this data ends up being used; worse, it can even be leaked to third parties by attackers. Furthermore, even if the collected data does not appear sensitive in and of itself, sensitive information can sometimes be deduced from it. In this article, we summarize some of the ways in which data is gathered, how additional information can be inferred from it and why this is problematic, and how we can try to protect our privacy.
The challenges of effective data protection cannot be addressed solely by law. The demonstrated need for an alliance with technology has led to the project of building an operating system incorporating data protection rules, since the operating system serves as the intermediary between processing and data. Three central ideas structure the technical project (the creation of active personal data, the focus of the OS on data rather than on processes, and access at the level of the data itself), suggesting an implementation of data protection rules at the level of each piece of personal data 40. Such innovations pose challenges, both in terms of the choices made and of the translation of legal rules. The project also involves more fundamental issues, at the micro level (the modularity of personal data characteristics) as well as at the macro level (the correlative modification of the data processing ecosystem).
In the field of education, the team is applying some of its research results to develop CorrectExam, a test scoring platform for universities. The platform is gaining users, with more than 150 exams already graded on it. Discussions are underway to incubate the platform within the esup-portail association.
Olivier Barais gave several talks on the use of ChatGPT in education to groups of high-school teachers and the dean.
As professors, we include in our courses a set of shared slides raising awareness of the academic community and of the associated discipline. We regret that a laboratory like the one in Rennes is not easily accessible to our Master's students, which does not help reduce the gap between academia and industry: many students can complete university or engineering degrees without ever setting foot in the laboratory.