Beyond Compliance 2024 - Speakers


 

Julian Nida-Rümelin - LMU Munich and Humanistische Hochschule Berlin

Beyond Compliance: Digital Humanism

Compliance is necessary, but not sufficient. Digital transformation is accompanied by an AI Ideology that endangers both the humanistic essence of democracy and technological progress. The counterpart is Digital Humanism, which defends the human condition against transhumanistic transformations and animistic regressions. Humanism in ethics and politics strives to extend human authorship through education and social policy. Digitization changes the technological conditions of human practice but does not transform humans into cyborgs or establish machines as persons. Digital humanism rejects transhumanistic and animistic perspectives alike; it rejects the idea of homo deus, the human god that creates e-persons, intended as friends or, unintended, as enemies.
In my talk I will outline the basic ideas of digital humanism and draw some ethical and political conclusions.

https://julian-nida-ruemelin.com/en/


 

Milad Doueihi

Beyond Intelligence: Imaginative Computing. A Minority Report.

From the Dartmouth Summer Proposal to its most recent incarnation under the guise of Generative models, Computation has been caught in a trap that has shaped both its history and its reception (from the various schools of AI to the evolution of Computational Ethics, to say nothing of the proliferation of regulatory efforts…), a history grounded in a comparative model that supposedly informs our understanding and representations of intelligence. But what if that is precisely the source of the problem? What if the roads not taken (full formal learning models and their potential impact on cultural transmission in general, “imaginative thinking”, to quote the Dartmouth Proposal [Paragraph 7], instead of intelligence, the avoidance of ethics as a potential answer or solution, the quasi-religious forms of belief attached to the current model, etc.) point to more productive and less destructive paths? A minority view, for sure, but one that, despite what would appear a simply futile effort, calls for abandoning Intelligence and opting for more realistic and manageable alternatives.

Milad Doueihi (retired). Forthcoming: Les maîtres voraces de l’intelligence (Seuil, 2025), La rage secrète de l’étranger (Seuil) and Un vocabulaire des institutions computationnelles. Hommage à Émile Benveniste (MK Éditions, 2025).


 

Ferran Argelaguet - Inria, France

Ethical Considerations of Social Interactions in the Metaverse

META-TOO is a Horizon Europe project that aims to address gender-based inappropriate social interactions in the Metaverse by integrating neuroscience, psychology, computer science, and ethics. The project investigates how users perceive and manage virtual harassment in social VR environments, focusing on avatar characteristics, social contexts, and environmental factors. It also explores the role of perspective-taking and bystander behavior in mitigating harassment. META-TOO raises significant ethical challenges, including concerns about participant exposure, cultural differences, data privacy, and the potential for unintended consequences. This talk will discuss these ethical issues and how the project will address them.

Ferran Argelaguet is a research scientist (CRCN) in the Hybrid team at IRISA/Inria Rennes. He received his PhD in Computer Science from the Universitat Politècnica de Catalunya in 2011. His research activity is devoted to the field of 3D User Interfaces (3DUI), a multidisciplinary research field involving Virtual Reality, Human-Computer Interaction, Computer Graphics, Human Factors, Ergonomics and Human Perception. His research is structured along three major axes: understanding human perception in virtual reality systems, improving VR interaction methods by leveraging human perceptual and motor constraints, and enriching VR interaction by exploiting users’ mental and cognitive states.

https://team.inria.fr/hybrid/author/fargelag


 

Marianna Capasso - Utrecht University

Algorithmic Discrimination in Hiring: A Cross-Cultural Perspective

There are over 250 Artificial Intelligence (AI) tools for HR on the market. Algorithmic hiring technologies include tools like algorithms that extract information from CVs; video interviews for screening candidates; search, ranking, and recommendation algorithms; and many others. But while algorithmic hiring might increase recruitment efficiency, since it reduces the costs and time of sourcing and screening job applicants, it might also perpetuate discrimination and systematic disadvantages for marginalised and vulnerable groups in society. The recent case of the Amazon CV-screening system is exemplary: the system was found to be trained on biased historical data, which led to a preference for men because the company had, in the past, hired more men than women as software engineers. But what exactly makes (the use of) an algorithm discriminatory? The nature of discrimination is controversial, since there are many forms of discrimination, and it is not clear whether they are all morally wrong, nor why they are morally problematic and unfair. When it comes to algorithmic discrimination, and to the question of what counts as ‘high-quality’ data for improving the diversity and variability of training data, things are even more complicated. This talk aims to clarify the current state of research on these points and to provide a cross-cultural digital ethics perspective on the question of algorithmic discrimination in hiring.

Marianna Capasso (she/her) is a postdoctoral researcher in AI Ethics at Utrecht University. At UU she works in the intercultural digital ethics team of the EU-funded FINDHR project, which deals with intersectional discrimination in algorithmic hiring. Prior to this, Marianna was a postdoctoral researcher at the Erasmus School of Philosophy of Erasmus University Rotterdam, and a postdoctoral researcher at the Sant’Anna School of Advanced Studies in Pisa, where she obtained her PhD in Human Rights and Global Politics in 2022. Her main research interests lie at the intersection of philosophy of technology and political philosophy, with a special focus on topics such as Responsibility with AI, Meaningful Human Control, and AI and the Future of Work.

https://www.uu.nl/staff/MCapasso


 

Rockwell F. Clancy - Virginia Tech

Towards a culturally responsive, psychologically realist approach to global AI (artificial intelligence) ethics

Although global organizations and researchers have worked on the development and implementation of AI, market concentration has occurred in only a few regulatory jurisdictions. As such, it is unclear whether the ethical perspectives of global populations are adequately addressed in AI technologies, research, and policies to date. Addressing these gaps, this talk claims that AI ethics initiatives have tended to be (1) “culturally biased,” based on narrow ethical values, principles, and frameworks that poorly represent global populations, and (2) “psychologically irrealist,” based on mistaken assumptions about how mechanisms of normative thought and behavior work. Effective AI depends on responding to different ethical perspectives, but frameworks for ensuring ethical AI remain largely disconnected from empirical insights about ethics and from methods for exploring it empirically and culturally. A truly global approach to AI ethics depends on understanding how people actually think about issues of right and wrong and how they behave (psychologically realist), and how culture affects these judgments and behaviors (culturally responsive). Approaches to AI ethics, we claim, can neither be culturally responsive without being psychologically realist nor psychologically realist without being culturally responsive. This talk will sketch the motivations for and nature of a psychologically realist, culturally responsive approach to global AI ethics.

Rockwell Clancy conducts research at the intersection of technology ethics, moral psychology, and China studies. He explores how culture and education affect moral judgments, the causes of unethical behaviors, and what can be done to ensure more ethical behaviors regarding technology. Central to his work are insights from and methodologies associated with the psychological sciences and digital humanities. Rockwell is a Research Scientist in the Department of Engineering Education at Virginia Tech and Chair of the Ethics Division of the American Society for Engineering Education. Before moving to Virginia, he was a Research Assistant Professor in the Department of Humanities, Arts, and Social Sciences at the Colorado School of Mines, a Lecturer in the Department of Values, Technology, and Innovation at Delft University of Technology, and an Associate Teaching Professor at the University of Michigan-Shanghai Jiao Tong University Joint Institute. Rockwell holds a PhD from Purdue University, an MA from Katholieke Universiteit Leuven, and a BA from Fordham University.

http://www.rockwellfclancy.com/index.html


 

Michael Fisher - University of Manchester, UK

Responsible Autonomy

I am going to briefly talk about several dimensions of “responsibility” relating to autonomous systems.
We are increasingly developing “autonomous” systems that make their own decisions, and take their own actions, without direct human oversight. These systems often involve AI and/or Robotics. However, we must ensure that the independent decision-making in these autonomous systems can be guaranteed to be safe, ethical, and reliable. Too much development, and even deployment, fails to guarantee these aspects. In addition, if we (users) are to trust autonomous systems, we need them to be constructed so that their behaviour and decisions are transparent and, crucially, their reasons for making those decisions are transparent and verifiable.

It is our role to design, develop, and deploy systems responsibly. This includes not only ensuring that the task of the system is clear, but also ensuring that the system carries out this task both reliably and safely. Furthermore, we must be very clear about the assumptions we make about the environment in which these systems are to be deployed. Often, AI/Autonomous/Robotic systems are designed under significant assumptions, which are violated once the “real world” is encountered.

The final dimension I will highlight concerns sustainability, especially environmental sustainability. Clearly, developing and deploying technology is not without environmental cost. We must be clear about the environmental issues and must ensure that the deployment of the technology provides a “net positive”. These issues are obvious in the context of robot construction, but the environmental costs of AI, especially data-driven machine learning, have often been overlooked. The vast environmental impact of these tools should be taken into account before design and deployment.

Michael Fisher is a Professor of Computer Science, and Royal Academy of Engineering Chair in Emerging Technologies, at the University of Manchester. His research concerns autonomous systems, particularly verification, software engineering, self-awareness, and trustworthiness, with applications across robotics and autonomous vehicles. Increasingly, his work encompasses not just safety but broader ethical issues such as sustainability and responsibility across these (AI, Autonomous Systems, IoT, Robotics, etc) technologies.

Fisher chairs the British Standards Institution Committee on Sustainable Robotics, co-chairs the IEEE Technical Committee on the Verification of Autonomous Systems, and is a member of both the IEEE P7009 Standards committee on “Fail-Safe Design of Autonomous Systems” and the Strategy Group of the UK’s Responsible AI programme.

He is currently on secondment (for 2 days per week) to the UK Government’s Department for Science, Innovation and Technology [https://www.gov.uk/dsit] advising on issues around AI and Robotics.

https://web.cs.manchester.ac.uk/~michael


 

Nikolaus Forgó - Universität Wien

An historical and critical overview of European attempts to regulate digitalisation

This presentation will give a historical and critical overview of European attempts to regulate digitalisation consistently and convincingly. We will focus, in particular, on the GDPR, the Data Act, Copyright Law and the AI Act. From this perspective we will assess in more detail the interplay between AI, ethics and law, and will ask whether Fundamental Rights Impact Assessments are a useful tool for the ethical governance of research.

Nikolaus Forgó studied law in Vienna and Paris from 1986 to 1990 and then worked as a university assistant at the Faculty of Law at the University of Vienna. In 1997, he received his doctorate in law with a dissertation on legal theory. Since October 1998, he has been head of the university course for information and media law at the University of Vienna, which still exists today. From 2000 to 2017, he was Professor of Legal Informatics and IT Law at the Faculty of Law at Leibniz Universität Hannover, where he headed the Institute for Legal Informatics for 10 years and was also Data Protection Officer and CIO.
Since October 2017, he has been Professor of Technology and Intellectual Property Law at the University of Vienna and Director of the Department of Innovation and Digitalisation in Law at the same university. He is also an honorary expert member of the Austrian Data Protection Council and the Austrian AI Advisory Board.

https://id.univie.ac.at/en/team/univ-prof-dr-nikolaus-forgo/


 

Alexei Grinbaum - CEA-Saclay, France, and Horizon Europe iRECS project

Tutorial - Training in AI ethics: concepts, methods, exercises, problems

Alexei Grinbaum is a senior research scientist at CEA-Saclay with a background in quantum information theory. He writes on ethical questions of emerging technologies, including robotics and AI. Grinbaum is the chair of the CEA Operational Ethics Committee for Digital Technologies and a member of the French National Digital Ethics Committee (CNPEN). He coordinates and contributes to several EU projects and serves as Ethics Chair to the European Commission. His books include “Mécanique des étreintes” (2014), “Les robots et le mal” (2019), and “Parole de machines” (2023).


 

Attila Gyulai - HUN-REN

Misled by autonomy: AI and contemporary democratic challenges

This presentation discusses the hopes and fears regarding the impact of AI on democracy by focusing on the misunderstood role of autonomy within the democratic process. In standard democratic theory, autonomy refers to the capacity and normative requirement of self-government. It will be argued that both democratic scholarship and policy documents seem unprepared to consider the inclusion and intrusion of AI into democracy. Democratic autonomy means that the people possess the power of self-legislation; they are the authors of public norms. Autonomy therefore presupposes that the formation of preferences is free from any undue interference. It is often claimed that AI is a threat to democracy because its various applications bring about precisely this undue interference by taking over the selection and dissemination of information necessary for people’s autonomous decision-making, through algorithmic governance that limits the scope of self-governance, and by treating citizens as sources for data-driven campaigns that undermine the role of deliberation and preference formation. There is an expectation that even if AI fulfils a variety of tasks in the democratic process, the ultimate control over everything it is allowed to do must remain with and be exercised by the people themselves, based on the autonomous will of the individual. The presentation offers a critical review of democratic theory by focusing on the points at which AI enters the democratic process (AI-driven platforms, algorithmic governance, democratic oversight of decision-making, democratic preference formation, the desired consensual outcome of the democratic process) to show that AI does not threaten the autonomous self-government of the people because the latter is merely an ideal that cannot realistically be expected to ground democracy. If the untenability of this expectation is ignored, neither the real impact of AI nor the necessary measures (guidelines, principles, policy proposals) can be assessed. Based on a critical reading of the discourse, it will be argued that any attempt to reconcile AI with democracy must address the constraints of autonomy and self-governance in any democracy in order to provide meaningful responses to the challenges facing all present and future democracies.

Attila Gyulai is a senior research fellow at the HUN-REN Centre for Social Sciences, Budapest and associate professor at Corvinus University of Budapest. His research interests include realist political theory, democratic theory, the political theory of Carl Schmitt and the political role of constitutional courts. His work has been published in journals such as Journal of Political Ideologies, East European Politics, Griffith Law Review, German Law Journal, and Theoria. He is co-author of the monograph The Orban Regime – Plebiscitary Leader Democracy in the Making.

https://tk.hun-ren.hu/en/researcher/gyulai-attila


 

Natali Helberger - University of Amsterdam

AI everywhere and anytime in the media. Will the AI Act save democracy?

In my presentation I will discuss the challenges and opportunities that the use of Generative AI in the media poses for democracy, and the role of the AI Act in creating reliable safeguards for fundamental rights and freedom of expression.

Natali Helberger is University Professor of Information Law and Digital Technology at the University of Amsterdam and a member of the Executive Board of the Institute for Information Law (IViR). Helberger is an elected member of the Royal Holland Society of Sciences (KHMW), the Royal Netherlands Academy of Arts and Sciences (KNAW) and the Social Science Council of the KNAW. Her research on AI and automated decision systems focuses on their impact on society and governance. Helberger co-founded the Research Priority Area 'Human(e) AI' at the UvA. Helberger is also founder and co-director of the AI, Media & Democracy Lab, and since 2022 she has been scientific director of the AlgoSoc (Public Values in the Algorithmic Society) Gravity Consortium. A key focus of the AlgoSoc programme is to mentor and train the next generation of interdisciplinary researchers. She is a member of several national and international research groups and committees, including the Council of Europe's Expert Group on AI and Freedom of Expression and the AI Office's Working Group on creating a Code of Conduct for Generative AI.

https://www.uva.nl/en/profile/h/e/n.helberger/n.helberger.html


 

Bjorn Kleizen - University of Antwerp

Do citizens trust trustworthy artificial intelligence? Examining the limitations of ethical AI measures in government

The increasing role of AI in our societies poses important questions for public services. On the one hand, AI provides a tool to improve public services. On the other, various AI technologies remain controversial, raising the question of the extent to which citizens trust public sector uses of AI. Although trust in AI and ethical AI have both become prominent research fields, most research undertaken so far focuses solely on the users of AI systems. We argue that, in the public sector, non-user citizens are a second vital stakeholder whose trust should be maintained. Large groups of citizens will never interact with public sector AI models that operate behind the scenes, forcing these citizens to make trust evaluations based on limited information, hearsay and heuristics. Simultaneously, their attitudes will have an important impact on the legitimacy that public sector organizations have to develop and implement AI systems. Thus, unlike previous work on direct users of AI, our studies focus mainly on the general public. We present results from two Belgian survey experiments and 17 semi-structured interviews conducted in Belgium and the Netherlands. Together, these studies suggest that trust among non-users is substantially less malleable than among direct users, as new information on AI projects’ trustworthiness is largely interpreted in line with pre-existing attitudes towards government, privacy and AI.

Bjorn Kleizen is a postdoctoral researcher at the University of Antwerp, Department of Political Science, GOVTRUST Centre of Excellence. His work mainly focuses on the psychology of citizen-state interactions. Kleizen has previously completed projects on citizen trust in public sector AI systems, and is currently examining citizen attitudes on scandals exacerbated by public sector automation, e-government and/or AI.

https://www.uantwerpen.be/en/staff/bjorn-kleizen/research/


 

Anatole Lécuyer - Inria Rennes/IRISA

Paradoxical effects of virtual reality

Virtual reality technologies are often presented as the ultimate innovative medium for interacting with digital content online. When we put on a virtual reality headset for the first time, we are gripped by the power of sensory immersion. Many positive applications then come to mind, such as health, education, training, access to cultural heritage, or teleconferencing and teleworking. But these technologies also raise fears and dangers of various kinds, whether for the physical or psychological integrity of users, or for their privacy. In this presentation, we will first review the main concepts and psychological effects associated with immersive technologies. Then, we will focus on the notion of the avatar, or virtual embodiment in virtual worlds, to show how these powerful effects can be used for good or ill, sometimes leading to paradoxical effects that we need to be more aware of in order to control them better in the future.

Anatole Lécuyer is Director of Research and head of the Hybrid research team at Inria, the French National Institute for Research in Computer Science and Control, in Rennes, France. His research interests include virtual reality, haptic interaction, 3D user interfaces, and brain-computer interfaces (BCI). He has served as Associate Editor of the journals “IEEE Transactions on Visualization and Computer Graphics”, “Frontiers in Virtual Reality” and “Presence”. He was Program Chair of the IEEE Virtual Reality Conference (2015-2016) and General Chair of the IEEE Symposium on Mixed and Augmented Reality (2017) and the IEEE Symposium on 3D User Interfaces (2012-2013). He is the author or co-author of more than 200 scientific publications. Anatole Lécuyer received the Inria-French Academy of Sciences “Young Researcher Prize” in 2013 and the IEEE VGTC “Technical Achievement Award in Virtual/Augmented Reality” in 2019, and was inducted into the inaugural class of the IEEE Virtual Reality Academy in 2022.

https://people.rennes.inria.fr/Anatole.Lecuyer/


 

Anna Ujlaki - HUN-REN/Eötvös Loránd University

Regulating Artificial Intelligence: A Political Theory Perspective

In the face of unprecedented advancements in artificial intelligence (AI), this presentation explores how AI is reshaping society, politics, and the foundational values of democracy. The aim of the presentation is to provide a critical review of the discourse about the political theory of AI, highlighting the strengths and weaknesses of contemporary normative discussions. It critically investigates the discourse across four key aspects. Firstly, it addresses the conceptual questions that must be resolved before making any normative claims or judgments. Given that normative political theoretical concepts are often contested, the presentation argues that there is a path dependence in the literature, influenced by the definitions adopted for fundamental concepts. This is particularly relevant to discussions on the relationship between (liberal) democracy and AI. Secondly, from a normative perspective, the focus shifts to the norms, values, and standards we expect from the implementation of AI in certain social and political contexts, and to those perceived as being threatened by its emergence. This perspective emphasizes not only the importance of values such as autonomy, transparency, human oversight, safety, privacy, and fairness in AI regulation but also those often overlooked in the social scientific literature on AI, such as non-domination, vulnerability, dependency, and care, which are significant in both human–human and human–machine relationships. Thirdly, the presentation examines the potential of various political theoretical approaches, including liberal, republican, realist, and feminist perspectives, to address the challenges posed by AI. Fourthly, it considers the level of abstraction of the debate, questioning whether the normative arguments and explanations in the literature are directed at issues related to narrow AI, artificial general intelligence (AGI), or both. In conclusion, while some normative arguments, such as those concerning AI regulation, are relatively well-developed, the presentation aims to highlight the gaps in the literature, suggesting the need for further exploration of the normative framework in discussions about AI.

Anna Ujlaki is a junior research fellow at the HUN-REN Centre for Social Sciences, Budapest, and an assistant professor at the Institute of Political and International Studies at Eötvös Loránd University. Her research focuses on the political theory of migration, political obligation, and artificial intelligence, incorporating perspectives from liberal, feminist, realist, and republican political theories.

https://annaujlaki.com/


 

Siddharth Peter de Souza - Tilburg University/Warwick University

Norm making around data governance: proposals for red lines

In my presentation, I will explore different types of proposals that can establish norms banning illegitimate data-related practices at scale, given the global policy consensus that data must flow and are a necessary basis for innovation. Through a study of work conducted by civil society organisations and social movements, the presentation will discuss what kind of global red lines we need for data in order to prevent its extractive and exploitative use.

Siddharth Peter de Souza is the founder of Justice Adda, a law and design social venture in India, and an incoming Assistant Professor at the University of Warwick from January 2025. He was a post-doctoral researcher at the Global Data Justice project at Tilburg University and is now an affiliated researcher.

https://www.tilburguniversity.edu/staff/s-p-desouza


 

Jean-Bernard Stefani - Inria

Taking Conviviality Seriously

In the early 1970s, Ivan Illich proposed a critical analysis of technology that appears remarkably cogent for understanding the moral and political woes that plague our current digital societies. This talk will aim to substantiate this claim and suggest potential avenues for research on convivial computing.

Jean-Bernard Stefani is a senior scientist at INRIA, the French National Research Institute in Computer Science and Control, where he has led the Sardes team on distributed systems engineering and the Spades team on formal methods for embedded systems, and previously served as director of research of the INRIA Grenoble-Alpes research center. Prior to INRIA, he worked for 15 years at the French National Center for Telecommunications Research (CNET), where he led research on distributed computer systems. He is currently involved in the creation of a new research team at INRIA on convivial computing.

https://team.inria.fr/spades/jean-bernard-stefani/


 

Melodena Stephens - Mohammed Bin Rashid School of Government

Approaching the Regulatory Event Horizon: Opportunities and Challenges

The pace of AI adoption is so rapid that the regulatory apparatus is unable to keep up. Part of the challenge is the complexity of the regulatory process, which puts pressure on individuals in society and on private actors to self-regulate. Further, even where robust regulations exist, governments face challenges of regulatory manoeuvrability and agility in managing new technologies. For example, one debate in AI circles is whether we should regulate the technology or the industry. Another challenge is the impact of these policies. It has become fashionable for academics to suggest policy reforms in a few concluding paragraphs of their journal articles, but this is not enough, as the process of advocacy is long and often negotiated.

Prof. Dr. Melodena Stephens has over three decades of senior management experience across Asia, Europe, the Americas, and Africa. She consults and trains in strategy, focusing on technology governance; Science, Technology, and Innovation strategy; brand-building; agile government; and crisis management. As Professor of Innovation & Technology Governance, she works with entities like the IEEE SA, the Council of Europe, Agile Nations, the World Government Summit, the World Economic Forum and senior government leaders from across the world. Her two most recent books are Anticipatory Governance: Shaping a Responsible Future and AI Enabled Business: A Smart Decision Kit. Melodena loves to write and blogs at www.melodena.com.


 

Rebecca Stower - KTH Royal Institute of Technology in Stockholm

Good Robots Don’t Do That: Making and Breaking Social Norms in Human-Robot Interaction

Robots are becoming increasingly present in both public and private spaces. This means robots have the potential both to shape and to be shaped by human social norms and behaviours. These interactions range from inherently goal- or task-based to socially oriented. As such, people have different expectations and beliefs about how robots should behave during their interactions with humans. The field of human-robot interaction therefore focuses on understanding how features such as a robot’s appearance and behaviour influence people’s attitudes and behaviours towards these (social) robots.

Nonetheless, despite recent technological advances, robot failures remain inevitable, and failures in real-life, uncontrolled interactions all the more so. With the rapid rise of large language models (LLMs) and other AI-based technologies, we are also beginning to see AI systems embedded into physical robots. Many of the potential pitfalls that have been highlighted with AI or virtual assistants apply equally to robots. When designing social robots, it is imperative that we ensure they do not reinforce or perpetuate harmful stereotypes or behaviours. In this talk, I will cover how and why different kinds of robot failures occur, and how we can use our understanding of these failures to work towards the design of more responsible and ethical social robots.

Rebecca Stower is a postdoctoral researcher at the Division of Robotics, Perception, and Learning at KTH. Her background is in experimental and social psychology. She uses psychological theories and measurement to inform the design, development, and testing of various robots, including humanoid social robots, drones, and robot arms. Her research focuses on human-robot interaction (HRI), especially what happens when robots fail and how this influences factors such as trust and risk-taking. More generally, she is passionate about open science and psychological measurement within HRI.

https://becbot.github.io/


 

Elias Fernández Domingos - VUB Brussels, Belgium

Delegation to AI Agents

Elinor Ostrom’s important contributions identified several mechanisms that enable the sound management of local commons (e.g., community monitoring, punishment, institutions, voting). These mechanisms provide a social barrier that supports sustainable decisions and prevents those that would have a negative future effect on society. Nevertheless, the spread of intelligent systems and artificial intelligence (AI) has significantly affected not only the way humans acquire and share information, but also the way we make informed decisions on critical social questions such as climate action, sustainability, or compliance with health measures. In this talk, I will introduce the key factors that differentiate delegation to AI from delegation to other human beings, and highlight both the challenges and the potential opportunities that a hybrid human-AI society offers for solving important societal issues. I will close the talk with the results of an experiment that shows the potential of delegation to AI as a commitment device that can enable pro-social behaviours.

Elias Fernández Domingos is currently a Senior Researcher (FWO fellow) at the VUB – Brussels, Belgium. He is interested in the origins of cooperation in social interactions and how we can maintain it in an increasingly complex and hybrid human-AI world. In his research, he applies concepts and methods from (Evolutionary) Game Theory, Behavioural Economics, and Machine Learning to model collective (strategic) behaviour, and validates these models through behavioural economics experiments. He is the creator of EGTtools, a Python/C++ toolbox for Evolutionary Game Theory.

https://ai.vub.ac.be/team/elias-fernandez/