AI governance

I4S 1/2026: Applied AI Ethics in the Workplace

A shared responsibility — from radiology and speech therapy to assembly
AI ethics in the workplace is everyone’s responsibility. It requires accountability from companies as a whole and conscious action from individuals—whether developers or users, managers or employees. Key issues revolve around ethical AI skills and questions of governance and employee representation. How will the world of work change, from radiology and speech therapy to assembly and quality control?
Human-Centered AI in Companies with Employee Representation

Using the HUMAINE model for a company-specific works agreement
Alexander Ranft, Fabian Hoose, Claudia Niewerth, Mathias Preuß, Manfred Wannöffel
The introduction of artificial intelligence (AI) in companies poses new challenges for regulation and co-determination. Since the EU AI Act came into force in 2025, binding requirements apply that must be linked at the national level with the German Works Constitution Act (BetrVG). The regional competence center humAine has developed a model works agreement on AI (MBV KI) in accordance with Section 77 BetrVG, which strengthens co-determination rights in companies and implements the European regulations in a practical way. Accompanied by co-determination dialogues, the MBV KI enables company-specific adaptation for responsible, human-centered AI use. Using selected parts of the MBV KI as examples, this article shows how a framework works agreement on AI can be designed and discusses its transferability to companies without a works council. The MBV KI presented here contributes to a sustainable and socially secure design of the digital transformation.
Industry 4.0 Science | Volume 42 | Edition 1 | Pages 14-21 | DOI 10.30844/I4SE.26.1.14
Pre-Stages of GenAI Governance via Managerial Communication

Exploratory findings from SMEs in the Ruhr area
Niklas Obermann, Uta Wilkens, Antonia Weirich, Matthias E. Cichon, Jürgen Mazarov, Bernd Kuhlenkötter
The governance of generative artificial intelligence (GenAI) usage is often described as a formalized reporting system. This view neglects the early-stage mechanisms for coping with ethical challenges during the GenAI implementation period. Exploratory empirical findings from the Ruhr area reveal that managerial communicative practices serve as a substitute for missing institutional structures, particularly in the early stage of GenAI implementation in SMEs.
Industry 4.0 Science | Volume 42 | Edition 1 | Pages 6-13 | DOI 10.30844/I4SE.26.1.6
Mechanisms of GenAI Governance

A case study on the responsible use of GenAI in organizations
Niklas Obermann, Daniel Lupp, Uta Wilkens
Compared to traditional AI systems, generative artificial intelligence (GenAI) introduces user-dependent characteristics that create unique challenges for AI governance in organizations. These challenges are particularly tied to human factors such as employee attitudes, awareness, and skills, which existing governance frameworks often neglect. This qualitative case study examines how a manufacturing organization implemented GenAI governance mechanisms to foster the responsible use of the technology. The findings reveal that organizations should adopt a holistic approach, combining structural, procedural, and relational mechanisms to address the employee-related aspects of GenAI governance. The study thereby contributes to the growing field of GenAI governance and provides practical insights for the responsible use of GenAI in organizations.
Industry 4.0 Science | Volume 41 | 2025 | Edition 5 | Pages 58-64 | DOI 10.30844/I4SE.25.5.58