Conclusions 2022
Synopsis
The Venice Commission of the Council of Europe organised the 19th European Conference of Electoral Management Bodies (EMBs) in Strasbourg and online on 14-15 November 2022.
The topic of the Conference was “Artificial intelligence and electoral integrity”. The participants discussed more specifically four issues, after an introductory session on the Council of Europe's acquis and the principles at stake:
- Artificial intelligence (AI) and fairness in electoral processes;
- The impact of AI on turnout and voter choice vs. data protection;
- AI vs. supervision and transparency of electoral processes;
- AI and harmful content.
Srdjan Darmanovic, President of the Council for Democratic Elections, Member of the Venice Commission from Montenegro, opened the Conference.
Around 130 participants took part in the Conference, representing national EMBs as well as academics, practitioners, experts and civil society representatives.
Other Council of Europe institutions, in particular the Parliamentary Assembly and the Congress of Local and Regional Authorities, participated in the Conference. Other international institutions also took part, in particular the Organisation for Security and Co-operation in Europe/Office for Democratic Institutions and Human Rights (OSCE/ODIHR) and the European Parliament.
Conclusions
The Council of Europe has worked extensively on information and communication technologies (ICT) in electoral processes, notably through the Committee of Ministers (see for instance its 2022 Guidelines on the use of information and communication technologies (ICT) in electoral processes in Council of Europe member States). Moreover, the Venice Commission and the EMBs have already had several opportunities to address new technologies in elections, in particular on the occasion of the EMB conference held in Oslo in 2018 on “security in elections” and, above all, through the elaboration of the “Principles for a fundamental rights-compliant use of digital technologies in electoral processes” in 2020.
Beyond ICTs, AI systems also require full compliance with the principles of democratic elections and referendums. In this respect, the ongoing work of the Committee on Artificial Intelligence of the Council of Europe, and its aim of elaborating a legally binding framework on the development, design and application of artificial intelligence, to be delivered by the end of 2023, is of crucial importance. Such general legal principles should be translated into appropriate domestic legislation in conformity with the principles of the European electoral heritage.
Considering the various understandings of AI, the Conference agreed on retaining the definition given by the Council of Europe and the Alan Turing Institute, i.e. “algorithmic models that carry out cognitive and perceptual functions in the world that were previously reserved for thinking, judging, and reasoning human beings.”
Whatever the positive or negative impact of AI systems on electoral processes, EMBs in a number of countries have nonetheless started using AI in various phases of the electoral process, such as redistricting, voter registration, testing and certification of voting equipment, signature matching, vote counting and verification of election results.
Considering the fundamental rights at stake, EMBs and electoral stakeholders as a whole will have to carefully consider the introduction of AI systems in electoral processes and find a balance between the traditional ways of holding elections and the introduction of such systems into their processes.
This does not prevent the use of hybrid solutions involving both humans and AI, keeping in mind that complexity is an enemy of electoral integrity and that AI could contribute to damaging it. Moreover, the use of AI must always guarantee security and accessibility for citizens.
Regarding AI and fairness in electoral processes, a drawback of AI tools is the risk of their misuse for the purpose of manipulating ideas and messages, creating selective exposure of voters to politically oriented information and consequently distorting information and reality.
In this context, EMBs, which are on the front line in ensuring the fairness of an electoral process, must be aware of, and seek to prevent, the misuse of such tools during the electoral process in order to protect voters, in particular, women and vulnerable groups.
AI can, however, contribute to better-balanced media content, help identify biased information, detect harmful content and provide alternative coverage (for instance with bots detecting known misinformation).
Regarding the impact of AI on turnout and voter choice vs. data protection, AI should aim at increasing the number of better-informed voters, which would ensure a higher turnout and greater voter inclusion. AI could also help optimise voter movement or improve understanding of the mechanics of voter behaviour.
There are two points of view regarding the impact of AI on turnout and voter choice: some promote AI in electoral processes, through personalised advertising, as a legitimate avenue for conveying voter information and promoting voter education (including through micro-targeting campaigns). Others argue that the use of AI in electoral processes should always be restricted, since such tools could also increase voter manipulation, including through micro-targeting, and could represent a potential danger of manipulating and narrowing the political offer.
The objective is in fine to guarantee fair electoral processes by enabling voters to form and express their votes in an informed and free manner.
The Conference also addressed the issue of data protection. The Cambridge Analytica scandal and other similar situations, as well as the exponential development of micro-targeting, have seriously endangered the protection of citizens’ personal data and overall trust in electoral processes. States should respond to such threats with strong reactions and appropriate legal measures.
Data being the raw material of AI, the customisation of information according to the personal preferences of citizens must have limits in order to ensure data protection. Since most election-related data is sensitive, personal data requires an appropriate level of protection, especially where AI-affected infrastructures put at stake cybersecurity and, in fine, the overall safety of the electoral process. The use of personal data should at any rate comply with the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+) and, for the member States concerned, with the European Union’s General Data Protection Regulation (GDPR).
Voters should give their consent to the processing of their data by any electoral stakeholder, private actors included, and be informed accordingly about such processing, including basic information about the body controlling the data, the purpose of the processing, its storage, etc.
Additionally, all personal data used by machine learning must be anonymised. Furthermore, the balance between the detailed publication of anonymised results and segmentation must also be considered, while ensuring respect for secret suffrage.
Regarding AI vs. supervision and transparency of electoral processes, Tech Giants have a major responsibility to contribute to the proper conduct of electoral processes. They are accountable for the content of their platforms and for how such content impacts public democratic discourse. They must explain in a transparent and comprehensible way what measures their in-house regulations comprise and should demonstrate that the data they use is unbiased and representative.
A further contribution to transparency of electoral processes is the obligation to label content generated by AI (synthetic media). The European Commission's regulatory proposal on AI already envisages a corresponding labelling obligation. The obligation to record and store the data generated by the use of AI systems could also be added, as well as the right to access relevant records.
A democratic society should, however, not leave this essential task solely to private actors acting according to their individual sets of rules. Public actors should first discuss and decide whether AI is going to be used in electoral processes. Second, they should set out the requirements AI must fulfil and define mechanisms able to effectively verify that AI fulfils them. They should also supervise its use and have mechanisms in place to detect, contest and correct possible problems.
Above all, electoral stakeholders should certainly be involved in the safety of electoral processes by ensuring supervision of AI-impacted infrastructures. Any AI system affecting citizens, especially when it takes decisions affecting human rights and fundamental freedoms, has to be open to their scrutiny. This should, consequently, allow enough time for the relevant stakeholders to analyse such mechanisms, while guaranteeing the stability of systems, which should not be changed in the course of the process. This also concerns citizens’ right to know that they are interacting with an AI system.
Additionally, domestic and international election observers should have a role in observing AI-affected electoral processes and participating in the transparency of such processes.
In addition to the necessity of a reinforced legal framework, measures that might be taken to counter possible risks in the use of AI systems in electoral processes were discussed, including the adoption of various ethical and technical measures; the establishment of independent boards consisting of experts and representatives of civil society; and increased collaboration between Tech Giants and researchers for a better understanding of the risks and consequences.
Regarding AI and harmful content, AI is often used to spread harmful content online, more precisely disinformation, misinformation, hate speech, fake news and deep fakes (i.e. images, videos or audio files manipulated by AI) that blur the lines between reality and fiction.
AI systems are therefore being increasingly used as part of risk-management strategies, such as “electoral content moderation”, to remove harmful content. Some systems have been criticised for often being opaque and unaccountable, for instance in decisions as to why some content is removed and other content is not.
It is thus advisable that such decisions be supervised by humans or at least be appealable to the EMB or the relevant, possibly judicial, body. As social media sites and platforms should implement domestic electoral legislation and judicial decisions, opportunities should also be provided to strengthen their role and resources and to institutionalise collaboration between EMBs and such stakeholders.
While the use of AI in electoral processes raises the responsibility of private actors to tackle harmful content, public authorities and civil society also have an essential role in monitoring such content on political platforms. Moreover, the relevant public authorities have a crucial role in warning citizens about harmful content, limiting its dissemination and sanctioning violations where the law allows.