by Yasin Ghafourian and Markus Tauber (Research Studios Austria), Germar Schneider and Andrea Bannert (Infineon Technologies Dresden GmbH & Co. KG), Olga Kattan (Philips), and Erwin Schoitsch (Austrian Institute of Technology)

To promote AI adoption in industrial cyber-physical systems (CPS) within Industry 4.0 and 5.0, it’s crucial to develop tools that support the AI lifecycle and address knowledge gaps in standards among developers. Tailored guidance is needed to ensure AI solutions in safety-critical industries are trustworthy and ethically compliant. Given the fragmented standardisation landscape for CPS, this paper proposes an ethical compliance checklist and a self-assessment tool using large language models (LLMs) to help users navigate standards, close knowledge gaps, and ensure human-centred, legally compliant AI applications.

IoT and AI significantly enhance industry competitiveness, success, and sustainability by improving supply chain agility and fostering environmental awareness. However, while digitalisation offers numerous advantages, integrating AI into industrial processes, especially in Europe, raises ethical and legal concerns, posing risks for individuals and society [1].

Beyond technical aspects such as AI’s trustworthiness and reliability, addressing ethical considerations is crucial when introducing AI into industry. Unfortunately, unlike the comparatively transparent impact of automation on workplaces, few internal regulations govern the ethical handling of AI within industrial companies, and algorithmic decisions are typically opaque. One example is quality-control algorithms that digitise inspection tasks so that humans no longer make the important decisions. This can result in jobs being cut, or in decisions based solely on algorithms that lead to errors if the algorithms are wrong or the environment changes.

The opaque use of data in AI can compromise fairness and accuracy in employee performance appraisals, especially when sensors such as cameras monitor workers in production environments. To prevent these issues, AI developers should rigorously evaluate their systems using a comprehensive checklist, enabling them to inform management and works councils. This ensures employee safety and well-being through timely and appropriate actions, including prompt company agreements.

Ethical Compliance Checklist
Given the challenge that AI often enters production invisibly through various projects, it is crucial to define company agreements on the use of AI at the earliest stages. Additionally, the ethical compliance checklist should incorporate mechanisms that identify and address knowledge gaps in awareness about these agreements. By doing so, the checklist can offer tailored explanations or resources to ensure that all stakeholders, regardless of their initial knowledge level, fully comprehend the ethical implications of AI use in their projects.

There are already many national and EU laws, guidelines and standards in place. Some examples include the General Data Protection Regulation (GDPR), ISO/IEC JTC 1/SC 42/WG 3 on trustworthiness, the Assessment List for Trustworthy Artificial Intelligence (ALTAI) [L1], the General Equal Treatment Act, the Works Constitution Act, the Working Hours Act, the Occupational Health and Safety Act, and the Occupational Safety Act [L2] [L3].

However, these laws alone are not sufficient: companies must also be able to demonstrate whether the AI methods used in individual projects or directly in production might result in privacy breaches and/or violations of the existing legal framework. In general, health and safety issues are subject to co-determination in companies.

The idea is to use a checklist at the start of the project, during the first milestone, to identify key concerns. Where necessary, measures can then be defined together with employees to prevent occupational health and safety issues and to create human-centred workplaces in which AI is used for human benefit.

An initial version of the checklist includes the following points (a minimal machine-readable sketch follows the list):

  1. Does the project store or process sensitive personal data (e.g. address, salary, gender) or private information (e.g. social networks, hobbies) about the employee?
  2. Does the project enable or expand performance and behaviour monitoring (e.g. cameras at workstations, reporting, time recording)?
  3. Does the project impact the employee’s working hours / work content and/or work organisation (e.g. work intensification, underload/overload, shifting of work, new or different tasks, monotony of tasks, transfer to another position if necessary)?
  4. Does the project impact occupational health and safety (e.g. working under full protection, heavy lifting, noise pollution, psychological risks)?
  5. Is there planned use of AI and/or algorithms that may impact employees?
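
As an illustration, the checklist lends itself to a machine-readable encoding that can be versioned, attached to project milestones, and evaluated automatically. The following Python sketch is a minimal, hypothetical encoding; the class and field names (e.g. ChecklistItem, requires_works_council) are our own illustration and not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    """One yes/no question from the ethical compliance checklist."""
    question: str
    affirmed: bool | None = None  # None = not yet answered

@dataclass
class ComplianceChecklist:
    """Initial checklist, filled in at the project's first milestone."""
    project: str
    items: list[ChecklistItem] = field(default_factory=lambda: [
        ChecklistItem("Stores or processes sensitive personal or private employee data?"),
        ChecklistItem("Enables or expands performance and behaviour monitoring?"),
        ChecklistItem("Impacts working hours, work content, or work organisation?"),
        ChecklistItem("Impacts occupational health and safety?"),
        ChecklistItem("Plans use of AI and/or algorithms that may impact employees?"),
    ])

    def requires_works_council(self) -> bool:
        # Any affirmed item triggers works-council involvement
        # to protect co-determination rights.
        return any(item.affirmed for item in self.items)
```

Once completed, such a structure can be attached to the milestone documentation and handed over as described in the next paragraph.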

Companies should provide the completed checklist to the project leader, manager, and relevant parties, such as works councils, to ensure informed decision-making. If any point is affirmed during the project, the works council must be involved to protect co-determination rights and to define measures that ensure health and safety while mitigating AI-related risks.

Self-assessment Tool
To address the hidden impacts of AI algorithms on human health and safety rights while ensuring compliance with AI laws, we propose an AI-based self-assessment tool. The tool automates compliance checks and adapts to users’ knowledge, bridging gaps to improve adherence to guidelines. Unlike manual, subjective compliance checks, it uses LLMs to generate customised checklists from the relevant standards. Figure 1 illustrates our vision for this tool and its potential to streamline ethical compliance in AI applications.

Building on the goals discussed in this position paper, we will develop a regulation-aware AI model that not only leverages LLMs for compliance but also integrates relevance models to assess and address knowledge gaps among developers and users. By fine-tuning the model on standards, regulations, and ethical guidelines while simultaneously bridging these knowledge gaps, the self-assessment tool will provide more personalised and effective guidance, so that users of varying expertise can reach a deeper understanding and more reliable compliance with ethical standards.

Figure 1: A vision of our self-assessment tool.
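
To make this vision concrete, the sketch below shows one way an LLM could be prompted to generate such a customised checklist. This is a minimal illustration under our own assumptions: the call_llm placeholder stands in for any hosted or local model (in the envisioned tool, a fine-tuned, regulation-aware one), and the prompt structure and knowledge-level field are hypothetical rather than a description of an implemented system.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM backend (hosted API or local model).

    In the envisioned tool this would be a fine-tuned,
    regulation-aware model rather than a generic one.
    """
    raise NotImplementedError("plug in an LLM client here")

def generate_checklist(project_description: str,
                       standards: list[str],
                       knowledge_level: str) -> list[dict]:
    """Ask the LLM for a checklist tailored to the project, the
    relevant standards, and the user's stated familiarity with them."""
    prompt = (
        "You assist with ethical and legal compliance of AI in "
        "industrial cyber-physical systems.\n"
        f"Relevant standards and regulations: {', '.join(standards)}\n"
        f"Project description: {project_description}\n"
        f"User knowledge level: {knowledge_level}\n"
        "Return a JSON list of checklist items, each with the fields "
        "'question', 'referenced_standard' and 'explanation'. Adapt "
        "the explanations to the stated knowledge level."
    )
    return json.loads(call_llm(prompt))

# Hypothetical usage:
# items = generate_checklist(
#     "Camera-based quality inspection on the shop floor",
#     ["GDPR", "EU AI Act", "ALTAI"],
#     knowledge_level="novice",
# )
```

Adapting the explanations to the user’s stated knowledge level is what distinguishes this approach from a static checklist: the same underlying standards yield different levels of guidance for novices and experts.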

This paper is supported by the AIMS5.0 project. AIMS5.0 is supported by the Chips Joint Undertaking and its members, including the top-up funding by National Funding Authorities from involved countries under grant agreement no. 101112089.

Links: 
[L1] https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment 
[L2] https://artificialintelligenceact.eu/de/das-gesetz/ 
[L3] https://www.bundesanzeiger.de/pub/en/start/ 

References: 
[1] C. Huang, et al., “An overview of artificial intelligence ethics,” IEEE Transactions on Artificial Intelligence, vol. 4, no. 4, pp. 799–819, 2023.
[2] P. Moertl and N. Ebinger, “The development of ethical and trustworthy AI systems requires appropriate human-systems integration: A white paper,” InSecTT, White Paper, 2022. [Online]. Available: https://www.insectt.eu/wp-content/uploads/2022/11/Trustworthiness-Whitepaper-InSecTT-Format-v02-1-1.pdf

Please contact: 
Yasin Ghafourian, Research Studios Austria, Austria

Markus Tauber, Research Studios Austria, Austria
