
AI and Humantech Leadership: How Siemens Is Driving an Inclusive and Sustainable Transformation
AI is becoming a true social infrastructure, capable of influencing choices, relationships, and work. How are you experiencing this shift at Siemens?
AB: AI is no longer just a technical tool: it is an integral part of how we make decisions, communicate, and create value. At Siemens, we approach this transformation with a key principle: technology accelerates, but value remains human. Our evolution toward an Industrial AI Operating System therefore requires a deep organizational transformation. We are working along three main directions: speed, new ways of working, and the development of a new leadership model. AI can enhance productivity and quality, but only if it is supported by skills, awareness of its limits, and responsible use.
Does this transformation also impact the social and cultural structure of work?
AB: Absolutely. AI is not just a technological shift: it changes how work is distributed, creates new opportunities and skills, and raises very concrete questions about how tasks and roles are reorganized. That's why this is as much a cultural issue as it is a training one. Our goal is to guide people through a process of professional evolution, carefully managing the organizational impact and ensuring that innovation remains inclusive and sustainable. This is a responsibility toward our people and society as a whole.
Does current regulation support or complicate this journey?
AB: The regulatory framework is still under construction and often struggles to keep pace with technological innovation. Many of the rules governing work organization are still based on outdated models and are not always able to support such rapid change. This gap makes the role of companies even more critical—they are called upon to anticipate protections, develop skills, and create sustainable environments even in the absence of fully updated frameworks.
Trust in automated decisions is a key issue. How do you build it?
AB: Trust is built through transparency. People need to know when they are interacting with an algorithm and what data is driving its decisions. In HR processes, we always retain human responsibility for critical decisions and conduct regular audits to identify potential biases. Our guiding principle is simple: AI must expand people’s autonomy and capabilities, not replace them. We constantly ask who might be disadvantaged by a fully automated process—this is the foundation of our governance.
How do themes like identity, accessibility, and power influence the AI systems you use?
AB: Algorithms reflect the society that creates them. That’s why we combine technological expertise, industrial know-how, and collaboration with external partners, always evaluating the impact on representation, accessibility, and inclusion. Every tool is also tested from the perspective of people with disabilities—an inaccessible interface is already a form of exclusion. We want AI to remove barriers, not create them.
Can machines feel empathy?
AB: AI can simulate it, but not truly feel it. Our role is to use technology to amplify human empathy. AI tools improve language understanding, generate subtitles, and simplify complex texts, making work more accessible to people with diverse needs. At Siemens, we increasingly talk about “humantech leaders”—people capable of leading hybrid teams and valuing diversity and human skills.
What is the concrete impact of AI on your organizational model?
AB: The impact is twofold: on one hand, AI accelerates our ability to predict, analyze, and solve problems; on the other, it requires new skills. That’s why we invest heavily in reskilling and AI-focused training. Our goal is to help people evolve within their roles, not be overwhelmed by change.
What role does accessibility play?
AB: A central one. AI can make processes and tools more accessible—from inclusive communication to multimodal interfaces, all the way to smart buildings that support mobility and safety. Accessibility is a marker of ethical maturity, not a technical detail. In our technology and leadership programs, we encourage continuous improvement—even a 1% daily change can transform culture.
AI also has an environmental cost. How do you address it?
AB: We monitor the consumption of our digital infrastructure, optimize data centers, and use AI itself to improve the efficiency of networks and buildings. For us, sustainability concerns the entire lifecycle of solutions—we do not sacrifice the long term for short-term gains.
What kind of leadership is needed for such a transformation?
FM: It requires leadership capable of connecting technology, people, and ecosystems. Leadership that is intelligent, visionary, humantech, and sustainable. But above all, leadership that values what AI will never replicate: empathy, responsibility, creativity, and the ability to imagine the future. At Siemens, this is what we call a “connector leader”—someone who not only understands technologies, but also interprets their human and social impact, fosters collaboration, and thinks long term.
Does this leadership have a strong ethical component?
FM: Absolutely. It means designing tools that are truly accessible and decisions that remain understandable, even when automated. An algorithm that excludes is not innovation—it is a failure of design. Then there is the deeply human side: active listening, conflict management, relationship care, a culture that embraces mistakes, and continuous learning. AI can support these capabilities, but not replace them. It is leadership’s role to protect them.
What is Siemens’ ambition?
FM: To lead an AI that includes rather than excludes. That expands possibilities instead of limiting them. That always remains at the service of people. This is the goal guiding our leadership culture and our transformation, internally and across clients and ecosystems. This is how we envision the future of innovation.